Thursday, June 12, 2008
Containing Internet Worms
A new method could stop Internet worms from spreading.
By Erica Naone
The spread of Internet worms could be stopped early on by using a new method to watch computers for the behavior exhibited by infected hosts, according to research recently published in IEEE Transactions on Dependable and Secure Computing. Although other methods exist to protect against worms, the new strategy is designed to minimize interference with users' normal work patterns, says Ness Shroff, a professor in the electrical-engineering department at Ohio State University, who was involved in the research. The researchers envision the technique being used in corporate networks, where it could identify computers that need to be quarantined and checked for infection.
Internet worms can be enlisted to launch denial-of-service attacks, which flood a website so that legitimate users can't access it, or install back doors that can be used to create botnets. Large numbers of infected computers could significantly slow Internet traffic, even if the worms do nothing more than spread.
The Purdue University and Ohio State method of preventing worms from spreading works primarily for a class of worms that scans the Internet randomly in search of vulnerable host machines to infect. One such worm was Code Red, which infected more than 359,000 computers in less than 14 hours in 2001, and ultimately caused an estimated $2.6 billion in damages. Although this type of worm has been around for some time, Kurt Rohloff, a scientist in the distributed systems technology group at BBN Technologies, says that it is still dangerous. These "are a very simple class of worms that's very easy to develop and program, but at the same time, they're not as easy to contain," he says. "If we could understand these fairly simple but still problematic worms, we could hopefully address the more so-called devious worms."
The researchers base their strategy on a new model that they designed for how worms spread. Many existing models are based on an analogy to the spread of epidemics, Shroff says, but those are more accurate at later stages of an infection. The researchers' model was designed for accuracy in the early stages of infection, and it revealed that the key to whether a worm can spread successfully is the total number of scans an infected host makes in its attempts to find new hosts to infect.
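The article does not reproduce the researchers' model, but a back-of-the-envelope calculation hints at why the cumulative scan count is the decisive quantity: if each infected host makes only a limited total number of random scans, and only a small fraction of addresses are vulnerable, the worm dies out whenever the expected number of new infections per host falls below one. The snippet below is only that rough illustration, with made-up numbers, not the published model.

```python
# Back-of-the-envelope illustration (not the researchers' model): a randomly
# scanning worm keeps spreading only if each infected host is expected to
# find more than one new vulnerable machine over its lifetime.

total_scans_per_host = 50_000   # cumulative scans an infected host makes (hypothetical)
vulnerable_fraction = 1e-5      # share of random addresses that are vulnerable (hypothetical)

expected_new_infections = total_scans_per_host * vulnerable_fraction
print(f"Expected new infections per host: {expected_new_infections:.2f}")
print("Worm spreads" if expected_new_infections > 1.0 else "Worm dies out")
```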
While other methods of containing worms have focused on monitoring computers for changes in the rate at which they scan the Internet from moment to moment, Shroff says that this can interfere with users' daily activities. "Scan rates fluctuate a lot, so if you go online, you may scan a lot of times during a very short period of time, and then not scan at all," he says. "We felt that the scan rate was too restrictive and could interfere with the normal operation of the network." By monitoring the total volume of scans over a longer period, he says, it's possible to contain worms while setting the threshold high enough that ordinary users won't trigger alarms. Software could count the scans each computer on a network sends and quarantine any machine whose total exceeds the threshold. Shroff hopes that changing the criteria for suspecting infection in this way will make it less likely that legitimate scans of the Internet are flagged as worm activity.
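The article does not include the researchers' actual algorithm, but the idea of keeping a running tally per host rather than a momentary rate is simple to sketch. Below is a minimal illustration, assuming a per-host counter of outbound connection attempts; the threshold value and the quarantine hook are hypothetical placeholders, not values from the paper.

```python
from collections import defaultdict

# Hypothetical threshold: the cumulative number of outbound connection
# attempts (scans) a host may make over the monitoring window before it is
# pulled aside for inspection. A real deployment would tune this so that
# ordinary user activity stays well below it.
SCAN_THRESHOLD = 10_000

scan_counts = defaultdict(int)   # host IP -> cumulative scan count
quarantined = set()

def record_scan(host_ip: str) -> None:
    """Count one outbound connection attempt by a monitored host.

    Unlike rate-based detectors, a momentary burst alone triggers nothing;
    only the running total over the whole window matters.
    """
    if host_ip in quarantined:
        return
    scan_counts[host_ip] += 1
    if scan_counts[host_ip] > SCAN_THRESHOLD:
        quarantine(host_ip)

def quarantine(host_ip: str) -> None:
    """Placeholder: cut the host off from the network pending inspection."""
    quarantined.add(host_ip)
    print(f"{host_ip} exceeded {SCAN_THRESHOLD} scans; quarantining for inspection")
```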
"In a sense, what we're doing is taking advantage of the fact that this worm is trying a lot of things and missing many times, and each time it misses, it's giving out some information," Shroff says. Although the system is designed for dealing with scanning worms that seek vulnerable hosts at random, the researchers have also adapted it for worms that target their attacks at specific local networks.
Shroff believes that the system could best be deployed on corporate networks, particularly where extra computers are available to cover the workload while possibly infected machines are examined. It might not work as well for small businesses or on home networks, where taking a computer offline could be too disruptive for users, he says.
Rohloff says that he could imagine such a system being effective, but he cautions, "The bias, of course, would be that it would protect local networks from infections that are already present in the network. It wouldn't do as much for protecting networks from infections that come from the outside." He adds that while the researchers' model and initial simulations look good, he would be curious to see a more thorough analysis of how often the system suspects a computer of being infected when no worm is actually present (in effect, the system's false-positive rate).
The Purdue and Ohio State researchers suggest that future work could look for ways to adapt their tools to ever more targeted worms. Shroff says that while he and his colleagues are now concentrating on stopping worms at the level of host computers, another possible direction would be software that lets routers watch for suspicious traffic patterns. Such an approach could allow a relatively large number of computers to be monitored from a single point, but it would require significant changes to how routers operate: they currently keep track of only the destination of Internet traffic, and they would have to begin tracking its source as well.
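The article leaves that direction as future work. Purely as an illustration of the kind of per-source state a router would have to keep (state that ordinary destination-based forwarding does not maintain), a hypothetical router-side monitor might count how many distinct destinations each source address has tried to reach; the threshold below is illustrative only.

```python
from collections import defaultdict

# Hypothetical router-side monitor: spotting scanning behavior requires
# per-source bookkeeping, e.g. how many distinct destinations each source
# address has probed, which routers do not normally track.
destinations_seen = defaultdict(set)   # source IP -> set of destination IPs
DISTINCT_DEST_THRESHOLD = 5_000        # illustrative value only

def observe_packet(src_ip: str, dst_ip: str) -> None:
    """Record one observed packet and flag sources probing unusually many hosts."""
    destinations_seen[src_ip].add(dst_ip)
    if len(destinations_seen[src_ip]) > DISTINCT_DEST_THRESHOLD:
        print(f"{src_ip} has probed an unusually large number of hosts; flag for review")
```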