The Active Network

Security's big brother: RELIABILITY

January 18, 2000 - Chuck Flink

Did you grow up in the shadow of an older sibling? Were you the little brother of a jock? ...a hero? icon? ...the BMOC? If so, you understand the relationship between the security engineer and the reliability engineer. Reliability always comes first. No doubt, it is a prerequisite of security! But how many times do we have to watch the plug pulled on a project before the security objectives are met?

Now it does make cents! Yes, I meant cents, not sense! Customers will pay to have a program do what they want it to do, but they don't think about paying to make sure it won't do what they don't want it to do! This is another instance of the old logic problem about the difficulty of proving a negative. So it is easy to justify management decisions that "satisfy" the customer with a half-truth. The bottom line rules the market and always will. What we need is education, so customers quit paying for products with EULAs that absolve the vendor of all responsibility!

EULA is an acronym for End User License Agreement. You know, that thing you don't bother to read but say you agree to! It makes you responsible for anything you do that could hurt the vendor's bottom line while absolving the vendor of any responsibility for anything his software does to you. It's that thing that says, "If you're so stupid as to have installed this software and actually used it to handle important data, don't blame us if anything goes wrong!" This is how management keeps the corporations in business while selling products that are only half-finished! Of course, it is impossible to prove something is flaw-free, just as it is impossible to develop perfect software. But the industry needs to get serious about promoting software insurance or some similar concept to cover the more serious side of the problem. I'll return over and over in my editorials to the theme of insuring assurance, but that is (and was) a topic for other days.

The topic today is the interesting parallels between the discipline of Security Engineering in Computer Science and the discipline of Reliability Engineering in Electrical Engineering. At the dawn of electronics, reliability was a major problem. As more and more components were brought together into ever more complex circuits, the distinct discipline of Reliability Engineering developed. The RE focused on the analysis of failure modes and component tolerances, and on designs that minimized the probability that the system would fail. These folks were dedicated to 'proving the negative' in the sense of reducing the odds that failure would occur. These were the Big Men On Campus in the development labs of their day. The company with the best REs had the quality product that won a future in the market; those without fell by the wayside.

With time, the RE's job became every EE's job. No circuit designer would consider his/her job complete without an analysis of the tolerances and basic protection against failure modes. The RE ceased to exist as a clearly defined hero of the electronics revolution as the knowledge and techniques became commonplace in the industry.

For over a dozen years I've been predicting to my Security Engineering friends that they had better enjoy their status, because their day will be passing. I've been saying that they will go the way of the RE and that the skills of the SE will be absorbed into the basic skill-set of any software developer. Moreover, I've been saying their duty and goal should be to refine and reduce their specialty to a fundamental piece, a basic component, of software engineering in particular and systems engineering in general. I still believe this is the future, but this part of the future has been much slower in coming than I thought!

When comparing the jobs of the RE and the SE, one quickly realizes the RE has a significant advantage by working with physical objects! Physical mechanisms are protected by physical security. No RE had to deal with virtual attackers floating in from halfway around the world over the Internet and invisibly pushing the buttons and twisting the knobs in all sorts of strange ways! Even worse, when an RE finished the analysis of a circuit, he/she was sure the circuit wouldn't morph into something else when the box was screwed shut. The SE deals with systems built from very mutable software: downloads, extensions, dynamically linked component libraries, and software that appears to rot into garbage in a matter of a relatively few months on the web!

I remember fearing in the mid-'80s that I had to develop my secure UNIX system fast, or the developers of our competitors would leave us in the dust. I concentrated on the simplest possible, most easily defended interpretation of the 'Orange Book' criteria for Computer Security. I raced to implement, document, and defend the product, barely winning certification before competitors, and just before the decade ended and the PC & Internet revolution sprang upon us. The SE of my era was securing the stand-alone time-shared box. And we had technology that did a solid job of protecting against Trojan Horses in that environment. (For now, let's gloss over the issues of covert channels and the duality of integrity versus confidentiality!) The major remaining problem was the engineering of a process to closely control and manage changes to the mass of security-sensitive software in UNIX System V and ultimately to reduce it to a minimum. So I felt confident that the SE was soon to follow the RE as we propagated the concepts of security analysis and the disciplines of code walkthrough and structured review up the development chain. All developers would learn the facts of life regarding software failure modes, and technology scouts like me would be free to explore other territories. Well, the dynamics of the PC and Internet revolutions caused the market for the time-shared and stand-alone computer system to shrivel, and the much greater challenges to security coming from the PC and the Internet caused the SE's problems to swell!

But the lesson to learn is that this market will eventually mature, and at that point an assurance of trustworthiness will be a basic element of what the customer considers quality, and every developer will live with the disciplines and skills pioneered by today's SEs. We've seen this type of challenge faced and conquered during the revolution in the reliability of electronic devices: as the consumer became educated that some radios and TVs worked for years while others spent their lives in the shop, reliability became a key market driver. We'll see the same thing happen, in time, in information security. Today's e-Commerce consumer will slowly but surely learn the value of quality security.

So like the little brother at the beginning of this story, as we mature we learn that our 'big brother' was a hero in his time, just as we, if we rise to meet the challenges of our time, can be heroes to those who follow us. It's a daunting challenge. It's a difficult problem in a very dynamic environment, but one that will be met and resolved just as surely as our 'big brothers' met the challenges of their time.

Before I close, let me predict the beginning of the end of the SE's era. This year, Windows 2000 will be released and promoted as the most serious advance in Operating System security ever. Certainly, it has the greatest collection of security features ever! Unfortunately, Microsoft still lives by the 'don't blame us' EULA, and that seriously irritates me. Giving consumers security features without the corresponding assurance of trustworthiness is tantamount to giving them a loaded gun. Worse, it's a gun that may not have been test-fired enough! Microsoft has done far more than any other vendor in beta testing and holding up product release until it's "right". I cannot question them on that. But this is a beautifully complex product, virtually ornate with security features! It will take years to ferret out any remaining flaws, and the system will be long replaced before all the ways are catalogued for customers to "shoot themselves in the foot". No doubt, however, this is a major step forward in the technology, the marketing of which will certainly swing the consumer toward paying much greater attention to security.

On the other hand, we have the explosion in interest in Linux. As a totally "open source" product, it holds no proprietary secrets that we have to trust a vendor to be honest about. And there are literally hundreds of thousands of eyes reviewing the software, daily, around the world, for flaws, security flaws among them. These developers are seldom properly trained as SEs, but neither are most of Microsoft's developers (yet). It is also a far simpler and easier system to analyze. But there is no one to sue and no one to take to court! There is the potential for the ultimate in security analysis and the development of trust, but also, as yet, no framework for supporting the growth in trust. Truly, this is the ultimate do-it-yourself project in the Linux world: your security is your problem, not the community's. In a sense, this is the ultimate vendor-protecting EULA! It is also the ultimate opportunity for the development of an independent software assurance industry and the possible birthplace of real software insurance.

The end of the SE's reign will be marked by the resolution of the battle beginning this year: Will the industry giants like Microsoft internalize the skills of the SE and insure their future by insuring their customers against loss? Even a limited "bond" being posted against loss would be a major improvement! A company with Microsoft's margins could certainly afford this! Or will the "open source" revolutionaries recognize the single greatest advantage they hold: transparency? If they band together to develop the tools to assure that a running system is derived from an analyzed source, they can then begin to amortize the cost of analysis and certification across millions of users and justify the development of a truly open and independent information security insurance industry.
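The idea of assuring that a running system derives from an analyzed source can be made concrete with a hash manifest: independent auditors review the source, publish cryptographic digests of what they reviewed, and any deployment can later be checked against those digests. The sketch below is my illustration of that principle, not a tool from the article; the file names and helper functions are hypothetical.

```python
# Minimal sketch of manifest-based source verification (illustrative only).
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies these bytes (SHA-256)."""
    return hashlib.sha256(data).hexdigest()


def verify_against_manifest(files: dict, manifest: dict) -> bool:
    """True only if every file matches the digest the auditors published."""
    if set(files) != set(manifest):
        return False  # a file was added or removed since the audit
    return all(fingerprint(data) == manifest[name]
               for name, data in files.items())


# The 'auditors' publish a manifest after reviewing the source:
audited = {"kernel.c": b"int main(void) { return 0; }\n"}
manifest = {name: fingerprint(data) for name, data in audited.items()}

# A deployment can now be checked against that public manifest:
print(verify_against_manifest(audited, manifest))    # True: untampered
tampered = {"kernel.c": b"int main(void) { return 1; }\n"}
print(verify_against_manifest(tampered, manifest))   # False: modified
```

Because the manifest is small and public, the cost of the audit is paid once while the verification can be repeated by every user, which is the amortization the author describes.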

My prediction? If the Linux community can avoid feature proliferation and in-fighting (a tall order), banding together on the development of a truly minimized, well-structured source will naturally lead to the development of independently verified trust. This is the type of reality that will enable the development of a software insurance industry. Certainly, Microsoft's marketing of Win2K security will raise the potential for profits in such a business. If there is sufficient movement in this direction, Microsoft will be forced either to insure their customers on their own or to open their sources and development methods to inspection by the InfoSec insurance industry. And that will mark the end of the era of the SE!

Copyright 2000 Information Security Analysis LLC. All Rights Reserved.


