Busting The Myths
Myth: There's Safety In Small Numbers
Perhaps the most oft-repeated myth regarding Windows vs. Linux security is the claim that Windows has more incidents of viruses, worms, Trojans and other problems because malicious hackers tend to confine their activities to breaking into the software with the largest installed base. This reasoning is applied to defend Windows and Windows applications: Windows dominates the desktop; therefore Windows and Windows applications are the focus of the most attacks, and that is why you don't see viruses, worms and Trojans for Linux. While this may be true, at least in part, the implication it is meant to carry does not follow: that Linux and Linux applications are no more secure than Windows and Windows applications, merely too trifling a target to be worth attacking.
This reasoning backfires when one considers that Apache is by far the most popular web server software on the Internet. According to the September 2004 Netcraft web site survey [1], 68% of web sites run the Apache web server. Only 21% of web sites run Microsoft IIS. If security problems boil down to the simple fact that malicious hackers target the largest installed base, it follows that we should see more worms, viruses, and other malware targeting Apache and the underlying operating systems for Apache than for Windows and IIS. Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.
Yet this is precisely the opposite of what we find, historically. IIS has long been the primary target for worms and other attacks, and these attacks have been largely successful. The Code Red worm, which exploited a buffer overrun in an IIS service to gain control of web servers, infected some 300,000 servers, and the infections stopped only because the worm was deliberately written to stop spreading. Code Red.A had an even faster rate of infection, although it too self-terminated after three weeks. Another worm, IISWorm, had a limited impact only because the worm was badly written, not because IIS successfully protected itself.
Yes, worms for Apache have been known to exist, such as the Slapper worm. (Slapper actually exploited a known vulnerability in OpenSSL, not in Apache itself.) But Apache worms rarely make headlines because they have such a limited range of effect and are easily eradicated. Target sites were already plugging the known OpenSSL hole, and it was trivially easy to clean and restore an infected site with a few commands, without so much as a reboot, thanks to the modular nature of Linux and UNIX.
Perhaps this is why, according to Netcraft, 47 of the top 50 web sites with the longest running uptime (time between reboots) run Apache. [2] None of the top 50 web sites runs Windows or Microsoft IIS. So if it is true that malicious hackers attack the most numerous software platforms, why are hackers so successful at breaking into the most popular desktop software and operating system, and at infecting 300,000 IIS servers, yet unable to do similar damage to the most popular web server and its operating systems?
Astute observers who examine the Netcraft web site URL will note that all 50 servers in the Netcraft uptime list are running a form of BSD, mostly BSD/OS. None of them are running Windows, and none of them are running Linux. The longest uptime in the top 50 is 1,768 consecutive days, or almost 5 years.
This appears to make BSD look superior to all other operating systems in terms of reliability, but the Netcraft information is unintentionally misleading. Netcraft estimates uptime remotely, from timestamps reported by each server's TCP/IP stack, so its figures are limited by how each operating system keeps track of uptime internally.
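As a rough illustration of this style of remote estimate: if a host's TCP timestamp ticks at a fixed rate, one can sample it twice, derive the tick rate, and divide to get an apparent uptime. The sketch below is hypothetical; the sample values are invented, and real surveys are more careful about detecting clock rates.

    # Hypothetical sketch: estimating a remote host's uptime from two
    # samples of its TCP timestamp (TSval) taken a known interval apart.
    # All numbers are invented for illustration.
    tsval_1, t1 = 429_000_000, 0.0      # first sample
    tsval_2, t2 = 429_001_000, 10.0     # second sample, 10 seconds later

    hz = (tsval_2 - tsval_1) / (t2 - t1)    # apparent tick rate: 100 Hz
    uptime_seconds = tsval_2 / hz           # ticks since the stack booted
    print(f"estimated uptime: {uptime_seconds / 86_400:.0f} days")  # ~50 days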
Linux, Solaris, HP-UX, and some versions of FreeBSD only record up to 497 days of uptime, after which their uptime counters are reset to zero and start again. The 497-day limit is a consequence of keeping the count in a 32-bit counter that ticks 100 times per second: the counter overflows after roughly 497 days. So all web sites based on machines running Linux, Solaris, HP-UX and, in some cases, FreeBSD "appear" to reboot every 497 days even if they run for years. The Netcraft survey can never record an uptime longer than 497 days for any of these operating systems, even if they have been running for years without a reboot, which is why they never appear in the top 50.
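The arithmetic behind that limit is easy to check; a minimal sketch, assuming the common 100 Hz tick rate of the affected systems:

    # Why 497 days? A 32-bit tick counter incremented 100 times per
    # second wraps around after 2**32 ticks.
    TICKS_PER_SECOND = 100                      # common tick rate (HZ)
    wrap_seconds = 2**32 / TICKS_PER_SECOND     # ~42,949,673 seconds
    wrap_days = wrap_seconds / (60 * 60 * 24)
    print(f"counter wraps after {wrap_days:.1f} days")  # -> 497.1 days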
That may explain why it is impossible for Linux, Solaris and HP-UX to show up with numbers of consecutive days of uptime as impressive as BSD's -- even if these operating systems actually run for years without a reboot. But it does not explain why Windows is nowhere to be found in the top 50 list. Windows does not reset its uptime counter. Evidently, no Windows-based web site has been able to run long enough without rebooting to rank among the top 50 for uptime.
Given the 497-day rollover quirk, it is difficult to compare Linux uptimes vs. Windows uptimes from publicly available Netcraft data. Two data points are statistically insignificant, but they are somewhat telling, given that one of them concerns the Microsoft web site. As of September 2004, the average uptime of the Windows web servers that run Microsoft's own web site (http://www.microsoft.com) is roughly 59 days. The maximum uptime for Windows Server 2003 at the same site is 111 days, and the minimum is 5 days. Compare this to http://www.linux.com (a sample site that runs on Linux), which has had both an average and maximum uptime of 348 days. An average can equal the maximum only if every server reports the same uptime, so either these servers reached 497 days of uptime and reset to zero 348 days ago, or they were all first put on-line or last rebooted 348 days ago.
The bottom line is that quality, not quantity of installed base, is the determining factor in the number of successful attacks against software.
Myth: Open Source is Inherently Dangerous
The impressive uptime record for Apache also casts doubt on another popular myth: That open source code (where the blueprints for the applications are made public) is more dangerous than proprietary source code (where the blueprints are secret) because hackers can use the source code to find and exploit flaws.
The evidence begs to differ. The number of effective Windows-specific viruses, Trojans, spyware, worms and malicious programs is enormous, and the number of machines repeatedly infected by some combination of the above is so large it is difficult to quantify in realistic terms. Malicious software is so rampant that the average time it takes for an unpatched Windows XP system to be compromised after being connected directly to the Internet is 16 minutes -- less time than it takes to download and install the patches that would help protect it. [3]
Consider also that the Apache web server is open source, while Microsoft IIS is proprietary. In this case, the evidence refutes both the "most popular" myth and the "open source danger" myth. The Apache web server is by far the most popular web server. If both myths were true, one would expect Apache and the operating systems on which it runs to suffer far more intrusions and problems than Microsoft Windows and IIS. Yet precisely the opposite is true: Apache has a near monopoly on the best uptime statistics, and neither Microsoft Windows nor Microsoft IIS appears anywhere in the top 50 servers with the best uptime. Evidently, the fact that malicious hackers have access to the source code for Apache does not give them an advantage in creating more successful attacks against Apache than against IIS.
Myths: Conclusions Based on Single Metrics
The remaining popular myths regarding the relative security of Windows vs. Linux are flawed by the fact that they are based only on a single metric -- a single aspect of measuring security. This is true whether the data comes from actual research, anecdotal information or even urban myth.
One popular claim is that "there are more security alerts for Linux than for Windows, and therefore Linux is less secure than Windows." Another is that "the average time that elapses between discovery of a flaw and the release of a patch for that flaw is greater for Linux than it is for Windows, and therefore Linux is less secure than Windows."
The latter is the most mysterious of all. It is an imponderable mystery how anyone can reach the conclusion that Microsoft's average response time between discovery of a flaw and release of the fix for that flaw is superior to that of any competing operating system, let alone superior to Linux. Microsoft took seven months to fix one of its most serious security vulnerabilities (Microsoft Security Bulletin MS04-007, the ASN.1 vulnerability; eEye Digital Security documents the delay in advisory AD20040210), and there are flaws Microsoft has openly stated it will never repair. Microsoft Security Bulletin MS03-010, about a Denial of Service vulnerability in Windows NT, says this flaw will never be repaired. More recently, Microsoft stated that it would not repair Internet Explorer vulnerabilities for any operating systems older than Windows XP. Statistically speaking, seven months between discovery and fix might not have an overly dramatic effect on the average response time if there are enough samples of excellent response times to offset anomalies like this, assuming they are anomalies. But it only takes one case of "never" to push the statistical average beyond recovery.
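To see how fragile the "average response time" statistic is, consider a toy calculation. The turnaround figures below are invented purely for illustration; only the arithmetic matters:

    # Hypothetical patch-turnaround times, in days.
    import statistics

    turnaround_days = [3, 5, 7, 10, 14, 25]             # prompt fixes
    print(statistics.mean(turnaround_days))              # ~10.7 days

    # One seven-month (roughly 210-day) fix drags the average up sharply...
    print(statistics.mean(turnaround_days + [210]))      # ~39 days

    # ...and a single "never" makes the average meaningless.
    print(statistics.mean(turnaround_days + [float("inf")]))  # inf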
This unsolvable mystery aside, consider whether it is meaningful to suggest that Linux is a greater security risk than Windows because the average time between the discovery of a vulnerability and the release of a patch is greater with Linux than with Windows. Ask yourself this question: If you experienced a heart attack at this very moment, to which hospital emergency room would you rather be taken? Would you want to go to the one with the best average response time from check-in to medical treatment? Or would you rather be taken to an emergency room with a poor record for average response time, but where the patients with the most severe medical problems always get immediate attention?
One would obviously choose the latter, but not necessarily because the above information proves it is the better emergency room. The latter choice is preferable because it includes two metrics, one of which is more important to you at that precise moment. It is safe to assume that most people would avoid a hospital if they also knew they were likely to die of a heart attack waiting for a doctor to finish setting someone's fractured pinky, no matter how impressive the average response time for every medical emergency may be. The problem is that the above example doesn't give you sufficient information to make the best decision. It doesn't tell you how well the hospital with the best average response time prioritizes its cases. You would also benefit from knowing things like the mortality rate of emergency cases, the average skill of the resident physicians, and so on.
Obviously, the only way to produce a useful recommendation is to gather as many important metrics as possible about local emergency rooms, and then balance these metrics intelligently. It would be inexcusably irresponsible to recommend an emergency room for a heart attack based only on a single metric such as the average response time for all medical emergencies, especially when the other important information that would lead to a more ideal choice is readily available.
It is equally irrational and irresponsible to make a recommendation or a serious business decision based solely on a single metric such as the average elapsed time between a flaw's detection and fix for a given operating system, or the number of security alerts for any given product.
Taken alone, any single metric is misleading. Consider the claim that there are more alerts for Linux software than for Windows. This statistic is meaningless on its own because it leaves the most important questions unanswered. Of all the security alerts, how many of the reported flaws represent a tangible risk? How severe are those risks? How likely are they to expose your systems to serious damage? These questions matter. Which is preferable: an operating system with 100 flaws that expose your systems to little or no damage and cannot be exploited by anyone except local users with a valid login account and physical access to your machine? Or an operating system with one critical flaw that allows any malicious hacker on the Internet to wipe out all of the information on your server? Clearly, the number of alerts alone is not a meaningful measure of the security of one operating system relative to another.
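One way to make this concrete is to weight each alert by severity and by how exposed it leaves a system, rather than simply counting alerts. The scoring function and every number below are invented for illustration; real risk-assessment frameworks are far more elaborate:

    # Toy risk score: weight each flaw by severity (1-10) and by an
    # exposure factor (0.0 = local-only, 1.0 = remotely exploitable
    # by anyone). All values here are hypothetical.
    def risk_score(flaws):
        return sum(severity * exposure for severity, exposure in flaws)

    # OS A: 100 minor flaws, exploitable only by local users.
    os_a = [(1, 0.05)] * 100

    # OS B: a single critical, remotely exploitable flaw.
    os_b = [(10, 1.0)]

    print(f"OS A: 100 alerts, risk score {risk_score(os_a):.1f}")  # 5.0
    print(f"OS B:   1 alert,  risk score {risk_score(os_b):.1f}")  # 10.0

Counted by alerts alone, OS B looks a hundred times safer; weighted by severity and exposure, it is the riskier system.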