Security patches aren't being applied

January 24, 2001
By Robert Lemos, ZDNet News

Connecting a computer to the Internet can be a dangerous business. Just ask Troy Hall.

Three months ago, the experienced system administrator put his newest Linux server online. Three days later, an intruder had taken control of it.

"My first reaction was, 'How did he find me so fast?'" said Hall, who manages the servers and computers for his family-run e-business.

Exploiting a flaw in Washington University's FTP server software, the intruder had cracked the server's security and set up shop. Hall's system--in this case, Red Hat 6.2--shipped with the software that contained the hole. A patch for the vulnerability was readily available on Red Hat's Web site, but like many other system administrators, Hall just didn't get around to installing it.

The scenario, repeated daily at sites across the Internet, exposes a common security problem largely unknown to the general public. Although software makers routinely release "fixes" designed to plug holes and reassure worried customers, these antidotes are often ignored by administrators in charge of the affected systems--if they are aware of the problem at all.

As a result, this easily avoidable problem has reached near-epidemic proportions. What makes it all the more frustrating is that so many losses could have been prevented with a few mundane but crucial steps.

"I would put patching in the top two things an admin can do to secure their computers," said Lance Spitzner, coordinator for the security group Honeynet Project. The others are turning off unnecessary services, like serving up Web pages, allowing file transfers, or responding to remote logins.

No chance
Without patching, computers connected to the Internet don't have a chance, Spitzner said. Data from the Honeynet Project suggest that almost 80 percent of all unpatched servers wouldn't last more than three weeks before being compromised by Internet attackers.

Every day, the underground elements of the Internet use scanners to find servers susceptible to what security experts have taken to calling the "exploit du jour." Depending on the type of flaw and connection speed, anywhere from tens of thousands to millions of Internet addresses can be scanned in a single day. Even if less than 1 percent of the servers at those addresses are vulnerable, a single sweep can still turn up thousands of defenseless targets for a hacker.
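The arithmetic is easy to sketch. In the hypothetical snippet below, the scan rate and vulnerability fraction are assumed figures chosen to match the ranges quoted above, not measurements from the article:

```python
# Back-of-the-envelope scanning arithmetic; both figures are assumptions.
addresses_scanned_per_day = 1_000_000  # a fast connection, a broad sweep
vulnerable_fraction = 0.005            # "even less than 1 percent"

hits = int(addresses_scanned_per_day * vulnerable_fraction)
print(f"roughly {hits:,} vulnerable servers found in a single day")  # 5,000
```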

Failure to patch computers responsibly led to 99 percent of the 5,823 Web site defacements last year, up 56 percent from the 3,746 sites defaced in 1999, according to security group Attrition.org.

The price of neglect can be especially high for large corporations. Fortune 1000 companies lost more than $45 billion to the theft of proprietary information in 1999, according to a study released by the American Society for Industrial Security and consulting firm PricewaterhouseCoopers. The majority of those hacking incidents hit tech companies, which suffered nearly 67 individual attacks, with the average theft ringing up about $15 million in losses.

Security experts emphasize the need for simple maintenance rather than new technology to address security issues.

"It is more cost effective to maintain your systems and apply patches than spending that money," said William Fithen, senior member of the technical staff at the Computer Emergency Response Team Coordination Center at Carnegie Mellon University. "If you are not making the investment in keeping your machines up-to-date, you are wasting your money."

Ramen still spreading
This week, for example, the Ramen Linux worm spread widely, exploiting--among others--the same vulnerability that felled Hall, who made the mistake of configuring his embryonic Web server while it was connected to the Internet.

Originally discovered in June 2000, the wu-FTP vulnerability remains a common way for the lowest form of Internet attacker, the "script kiddie," to gain entry to servers. Three months after its discovery, the flaw's popularity caused a jump in defacements of Web sites hosted on servers running the Linux operating system.

And that's only a single flaw.

Lack of training can be blamed in many preventable cases, said Chris Klaus, the chief technology officer with security firm Internet Security Systems.

"To some extent, a lot of it is ignorance," he said. "When we walk into most companies, they aren't even aware they should be patching anything."

Another reason is simple math: A company with 1,000 computers may have, say, four administrators--250 machines each. If each machine needs to be patched just once a month, that still leaves ten or so machines for each administrator to maintain every day, on top of all the other problems that crop up.
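Worked through in a hypothetical snippet (the count of working days per month is an assumption), the numbers land right around the figure above:

```python
# Workload math from the paragraph above: 1,000 machines, four admins,
# one patch cycle per machine per month. 21 working days is an assumption.
machines, admins = 1_000, 4
working_days_per_month = 21

machines_per_admin = machines / admins                      # 250
patches_per_day = machines_per_admin / working_days_per_month
print(f"about {patches_per_day:.0f} machines to patch per admin per day")  # ~12
```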

"Part of the problem is that you get so tied up in the day-to-day work, and there are so many different (software) packages, that is can be a full-time job just to keep up with all the patches," lamented Hall.

Several initiatives hope to change that.

As part of the National Plan for Critical Infrastructure Defense, the Clinton administration proposed a Scholarship for Service program set to start next year. Students who accept up to $24,000 in tuition over two years to train in computer security must spend an equal amount of time securing government servers after graduation.

Industry associations are also trying to raise awareness. The System Administration, Networking and Security Institute (SANS) last summer released its top 10 list of security holes that need to be plugged.

Although those initiatives are aimed at educating system administrators, it's an uphill battle. Newly minted administrators continue to enter the industry, forcing software companies to start the education effort from zero again and again.

A case study: Microsoft IIS
The Remote Data Services flaw in Microsoft's Internet Information Server may be the best example of how repeated education can fail. In June 1998, the RDS flaw became public. By July 1998, Microsoft had created a patch. Despite its release, the RDS flaw remained a favorite way for script kiddies to cut their teeth. In an attempt to reduce the number of vulnerable servers on the Internet, Microsoft re-released the RDS advisory in July 1999 and again in July 2000. Still, defacements of Web sites running Microsoft's Internet Information Server remain, as a percentage, much higher than attacks on Linux systems.

Microsoft's poor showing stems largely from the company's success. While sites running on the Apache Web server outnumber those running on Microsoft's IIS about 3-to-1, sites running Microsoft software are appealing targets for hackers. A successful attack generally draws more attention, giving greater notoriety for publicity seekers.

Software makers have also tried to boost security across the Internet by creating automated updating programs that make patching a no-brainer. Most operating system makers have already added some form of automated updates to their products.
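Conceptually, such a utility boils down to comparing what a machine has installed against what the vendor has published. The sketch below is a deliberately simplified illustration; the package names, version strings, and naive comparison are assumptions, not any vendor's actual mechanism:

```python
# A minimal sketch of an automated update check: compare installed package
# versions against a vendor's published list and flag anything out of date.
# All names and versions are hypothetical; real tools such as Red Hat's
# up2date or Windows Update are far more involved.

installed = {"wu-ftpd": "2.6.0", "openssh": "2.3.0"}   # on the server
latest    = {"wu-ftpd": "2.6.1", "openssh": "2.3.0"}   # from the vendor

def outdated(installed: dict, latest: dict) -> list:
    """Return names of packages whose installed version lags the latest."""
    return [name for name, version in installed.items()
            if latest.get(name, version) != version]

for name in outdated(installed, latest):
    print(f"PATCH NEEDED: {name} {installed[name]} -> {latest[name]}")
```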

Even so, system administrators frequently fail to use the utilities. For some, it's a matter of time. For others, the complexity of their networks leads to information overload.

Automated updating isn't foolproof either, CERT's Fithen said.

Because each software maker tends to produce and distribute its own updates, major bottlenecks can occur when several companies attempt to update at the same time.

Still, if any technology is going to solve the problem, automated updates are at the top of the list, Fithen said.

"Scale is a problem that we know how to deal with," he said. "Getting people to do something that they don't want to is not."
