Would a Proprietary OpenSSL Have Been More Secure than Open Source?

The OpenSSL Heartbleed vulnerability has resurrected the age-old debate over whether open source code is more or less secure than proprietary code. Before putting on your open source or proprietary jersey and launching into this (frankly not-very-productive) fight, first consider a few things.

Companies that sell proprietary software give compelling reasons why it is more secure. But we must take their word for the security of their code, unless they release objective third-party audits, or security incidents with their software reveal its inadequacies. There have been many such incidents involving proprietary software, and few released audits of it.

Those who think open source is more secure point out that open source code can be picked apart by a far wider range of folks, with many differing levels of capability, than proprietary code ever could be. More eyes should logically find more security flaws more quickly, and so should have (would have) caught the OpenSSL flaw before it reached production. Yet even with the eyes of the world *available* to find the flaws, a simple error went unnoticed, at least by the good guys and gals, for over two years.

What is important to consider, beyond the fact that the flawed code was open source, is that OpenSSL is arguably the most widely used piece of security software on the Internet, the de facto standard for securing network communications, and it is not limited to the public Internet: it also runs in routers and other networked devices sitting on non-public networks. At this point it doesn't really matter whether it is open source or proprietary; it is reportedly used by hundreds of millions of devices and servers, and the obligation to build in effective security controls is the same regardless.

The important consideration is this: why wasn't this very simple error caught before the code was widely distributed into production? If you've ever programmed, you know this was one of the most basic Programming 101 mistakes you could ever make: in essence, not doing a length check on data supplied by the other end of the connection.
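
To make the mistake concrete, here is a minimal sketch of the pattern in C. This is not OpenSSL's actual code; the function name, record layout, and variable names are illustrative, assuming a heartbeat-style record of [1-byte type][2-byte declared payload length][payload]. The bug was echoing back the number of bytes the peer claimed to have sent, without checking that claim against what actually arrived:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch only, not OpenSSL's actual code.
     * Assumed record layout: [1-byte type][2-byte declared payload length][payload].
     * rec_len is the number of bytes actually received from the peer. */
    int build_heartbeat_response(const unsigned char *rec, size_t rec_len,
                                 unsigned char *out, size_t out_cap)
    {
        if (rec_len < 3)                             /* not even a full header */
            return -1;

        uint16_t declared_len = (uint16_t)((rec[1] << 8) | rec[2]);  /* peer-supplied */

        /* THE MISSING CHECK: the vulnerable pattern copied declared_len bytes
         * without confirming the record really contained that many payload bytes,
         * so a short record with a large declared length leaked adjacent memory. */
        if ((size_t)declared_len > rec_len - 3)      /* bound by what was received */
            return -1;                               /* discard the malformed record */

        if (out_cap < 3 + (size_t)declared_len)      /* and by the response buffer */
            return -1;

        out[0] = 0x02;                               /* heartbeat response type */
        out[1] = rec[1];
        out[2] = rec[2];
        memcpy(out + 3, rec + 3, declared_len);      /* echo only verified bytes */
        return (int)(3 + declared_len);
    }

The fix OpenSSL eventually shipped amounts to this kind of bounds check: silently discard any heartbeat record whose declared payload length exceeds the record that actually arrived.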

1) Why didn’t the programmer do a proper code review?
Dr. Seggelmann, who admitted to the error, had worked on previous versions of the code without introducing errors. He had also caught security flaws in previous versions and fixed them before their production release. He is a brilliant man, and his work had been almost spotless in demonstrating coding due diligence, until this very simple, very elementary coding error resulted in possibly the most far-reaching flaw, with the biggest impacts (impacts we'll never truly know, since exploiting it left no indicators), that we've ever seen. This was not some lazy, sloppy programmer. This was an expert who made a simple, but hugely significant, coding mistake.

LESSON: Anyone, anywhere, working on open source or proprietary code could have made this mistake. None of us is perfect; we will all make mistakes. (We just hope that when we do, the mistake does not impact hundreds of millions of systems, and people, worldwide.)

2) Why didn’t the quality check folks catch the error?
The team that maintains the code, the OpenSSL Project, is a small group of volunteers who generally donate their time with the goal of creating code that makes the Internet better and more secure to use. (The Heartbeat extension itself was standardized through the Internet Engineering Task Force, or IETF; the OpenSSL code implements it.) They are supposed to review and test code before approving it for a production release, and they are supposed to have processes in place to catch this type of very simple error. So why didn't they catch it? Were they looking for errors in the more technical, complex areas of the code without thoroughly reviewing the more mundane areas? Did they only do a spot check of the code (which I know from my audits many programmers now do to be more "agile" and quick with their reviews)? Did they skip an in-depth review because they knew Dr. Seggelmann was really smart and had always done a great job, and assumed that, given his demonstrated past diligence, he would not have made an error? Perhaps it was a combination of all of these.

LESSON: Quality assurance and security reviews must follow consistent procedures, with reviewers applying more scrutiny, not less, to every line of code. Assumptions (which may or may not have been made in Heartbleed's case) cannot be based on a programmer's history of coding excellence. Reviews must not be cut short, or reduced to a spot check, to save time.

3) With the eyes of the world available for review, why did the flaw take two years to be discovered?
Even with the project's own review team, it would seem that code used by hundreds of millions should immediately come under the scrutiny of freelance/volunteer (whatever you want to call them) reviewers. Aren't there teams waiting to pick apart open source code that is used by hundreds of millions? This is one of the arguments for open source: that the coding and software experts of the public would quickly discover flaws that got past the maintainers. (With proprietary code, the public has no ability to scrutinize it at all; one of the common arguments against proprietary code.) Wouldn't US-CERT be interested in keeping an eye on such code? Wouldn't various other groups? Isn't it incredible that none of the good guys discovered this flaw and reported it right away? It took two years! Really incredible.

LESSON: Even if the world is *able* to review and pick apart code, that does not mean anyone actually will. Independent review groups need to commit to reviewing security code when it is released.

Some Other Basic Lessons Re-Learned
Some of the hard lessons Heartbleed forces us to re-learn, regardless of whether you use open source or proprietary code:

  1. Programmers must follow consistent and repeatable processes for creating code and for building security controls into it.
  2. The change control/approval process must include an independent review, by others on the team and/or from outside, who consistently follow a review and testing process that will catch security errors, especially ones as basic as the Heartbleed flaw (see the test sketch after this list).
  3. When open source code is put to widely used purposes, outside groups should jump on it as soon as it goes public to pick it apart and hunt for everything from the most complex flaws down to the most basic.
  4. Proprietary code also needs to have a reliable, independent, security review process.
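
As a rough illustration of lesson 2, a boundary test as simple as the one below would have flagged the missing check. It reuses the hypothetical build_heartbeat_response from the sketch above, so the names and record layout remain illustrative, but the point stands: the most basic malformed-input cases belong in every review and test pass.

    #include <assert.h>
    #include <stddef.h>

    /* The hypothetical function from the earlier sketch. */
    int build_heartbeat_response(const unsigned char *rec, size_t rec_len,
                                 unsigned char *out, size_t out_cap);

    int main(void)
    {
        unsigned char out[64];

        /* Well-formed: declared length (5) matches the payload actually sent. */
        unsigned char good[] = { 0x01, 0x00, 0x05, 'h', 'e', 'l', 'l', 'o' };
        assert(build_heartbeat_response(good, sizeof good, out, sizeof out) == 8);

        /* Malformed: claims 16384 payload bytes but carries only 5.
         * A correct implementation must reject it rather than echo extra memory. */
        unsigned char evil[] = { 0x01, 0x40, 0x00, 'h', 'e', 'l', 'l', 'o' };
        assert(build_heartbeat_response(evil, sizeof evil, out, sizeof out) == -1);

        return 0;
    }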

