Is Open Source Secure?
Some thoughts to one of the most commonly asked questions about open source software.
Here is a commonly voiced concern about open source software: "Since anyone can modify the source code, a malicious hacker could easily insert a security hole or backdoor."
At the simplest level, the answer is just NO. The statement rests on a basic misconception. While anyone can propose a contribution to an open source project, any actual change must first pass through a small core group of maintainers. Getting a change incorporated into an open source project is thus rather like getting an article published in a scientific journal: submission is open to all, but acceptance is not.
At a deeper level, though, there are some real reasons why open source software can actually be more secure than its commercial counterparts. These reasons have to do with how flaws are discovered and with the motivations of the parties involved.
Discovering Bugs and Security Flaws
Open source software is more heavily tested than its commercial counterparts. A commercial software vendor might employ one, maybe two, testers per programmer. Those testers often follow pre-written test scripts and cover only a limited range of use cases.
An open source project, however, might have tens of thousands or even hundreds of thousands of users around the world. Each of them may subject the software to a different use or operating environment, and any one of them can discover a bug or security flaw and report it back to the project.
Once the Flaw is Found
What happens once the flaw is found?
If the code is openly available, a bug or security flaw can be definitively proven. A patch can be suggested and posted on the internet. At this point, there is every incentive to fix the flaw and no real incentive to hold back. Users obviously want the problem solved as soon as possible. Any service provider would use it as an opportunity to demonstrate its added value by fixing its clients' installations. The project's core maintainers, while potentially embarrassed, have no further reason to cover up the flaw. In fact, if they do not fix the problem quickly enough, other community members can take matters into their own hands and start an open source derivative of the original project that incorporates the needed fixes.
In the commercial world, in contrast, things are not always so clear-cut. The code is not available, so bugs or security flaws may be hard to prove. The vendor may have already made promises to its customers or statements to the press about how secure or bug-free its product is. Admitting to the bug or flaw may incur negative publicity or, worse, the wrath of the marketing department. So, if the flaw cannot be definitively proven or is not yet generally known, maybe it would be better just to keep it under wraps and fix it quietly in the next release?
But now I'm just being paranoid. This never happens in real life, right?