Internet Gridlock

July/August 2008

Video is clogging the Internet. How we choose to unclog it will have far-reaching implications.

By Larry Hardesty

An obscure blogger films his three-year-old daughter reciting the plot of the first Star Wars movie. He stitches together the best parts--including the sage advice "Don't talk back to Darth Vader; he'll getcha"--and posts them on the video-sharing website YouTube. Seven million people download the file. A baby-faced University of Minnesota graduate student with an improbably deep voice films himself singing a mind-numbingly repetitive social-protest song called "Chocolate Rain": 23 million downloads. A self-described "inspirational comedian" films the six-minute dance routine that closes his presentations, which summarizes the history of popular dance from Elvis to Eminem: 87 million downloads.

Video downloads are sucking up bandwidth at an unprecedented rate. A short magazine article might take six minutes to read online. Watching "The Evolution of Dance" also takes six minutes--but it requires you to download 100 times as much data. "The Evolution of Dance" alone has sent the equivalent of 250,000 DVDs' worth of data across the Internet.

Star Wars: Episode IV according to a three-year-old.

And YouTube is just the tip of the iceberg. Fans of Lost or The Office can watch missed episodes on network websites. Netflix now streams videos to its subscribers over the Internet, and both Amazon and Apple's iTunes music store sell movies and episodes of TV shows online. Peer-to-peer file-sharing networks have graduated from transferring four-minute songs to hour-long Sopranos episodes. And all of these videos are higher quality--and thus more bandwidth intensive--than YouTube's.

Last November, an IT research firm called Nemertes made headlines by reporting that Internet traffic was growing by about 100 percent a year and that in the United States, user demand would exceed network capacity by 2010. Andrew Odlyzko, who runs the Minnesota Internet Traffic Studies program at the University of Minnesota, believes that the growth rate is closer to 50 percent. At that rate, he says, expected improvements in standard network equipment should keep pace with traffic increases.

But if the real rate of traffic growth is somewhere between Nemertes's and Odlyzko's estimates, or if high-definition video takes off online, then traffic congestion on the Internet could become much more common. And the way that congestion is relieved will have implications for the principles of openness and freedom that have come to characterize the Internet.

Whose Bits Win?
The Internet is a lot like a highway, but not, contrary to popular belief, a superhighway. It's more like a four-lane state highway with traffic lights every five miles or so. A packet of data can blaze down an optical fiber at the speed of light, but every once in a while it reaches an intersection where it has the option of branching off down another fiber. There it encounters a box called an Internet router, which tells it which way to go. If traffic is light, the packet can negotiate the intersection with hardly any loss of speed. But if too many packets reach the intersection at the same time, they have to queue up and wait for the router to usher them through. When the wait gets too long, you've got congestion.
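
To make the bottleneck concrete, here is a toy simulation (my illustration, not the article's; all numbers are invented): when packets arrive at a router faster than it can forward them, the backlog, and therefore the waiting time, keeps growing.

```python
# Toy router-queue simulation: when arrivals exceed forwarding capacity,
# the backlog -- and thus the wait -- grows every tick.
# Illustrative numbers only.

def simulate(arrival_rate, service_rate, steps=10):
    """Track queue length when `arrival_rate` packets arrive per tick
    and the router can forward `service_rate` packets per tick."""
    queue = 0
    for t in range(1, steps + 1):
        queue += arrival_rate              # new packets join the queue
        queue -= min(queue, service_rate)  # the router forwards what it can
        print(f"tick {t}: {queue} packets waiting")

simulate(arrival_rate=12, service_rate=10)  # mild overload: backlog grows by 2 per tick
```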

The Transmission Control Protocol, or TCP--one of the Internet's two fundamental protocols--includes an algorithm for handling congestion. Basically, if a given data link gets congested, TCP tells all the computers sending packets over it to halve their transmission rates. The senders then slowly ratchet their rates back up--until things get congested again. But if your computer's transmission rate is constantly being cut in half, you can end up with much less bandwidth than your broadband provider's ads promised you.
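
The pattern TCP's algorithm produces is often summarized as additive increase, multiplicative decrease. A minimal sketch follows; the starting rate and step size are invented, and real TCP adjusts a congestion window measured in packets per round trip, not a rate in megabits.

```python
# Caricature of TCP's congestion response: halve the sending rate when the
# network signals congestion, then creep back up additively.
# Constants are illustrative, not TCP's actual parameters.

def adjust_rate(rate, congested, increase_step=0.1, floor=0.1):
    if congested:
        return max(rate / 2, floor)   # multiplicative decrease
    return rate + increase_step       # additive increase

rate = 8.0  # Mb/s, hypothetical starting rate
for congested in [False, False, True, False, False, False, True]:
    rate = adjust_rate(rate, congested)
    print(f"congested={congested}: sending at {rate:.2f} Mb/s")
```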

Sometimes that's not a problem. If you're downloading a video to watch later, you might leave your computer for a few hours and not notice 10 minutes of congestion. But if you're using streaming audio to listen to a live World Series game, every little audio pop or skip can be infuriating. If a router could just tell which kind of traffic was which, it could wave the delay-sensitive packets through and temporarily hold back the others, and everybody would be happy.
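
In scheduling terms, that would mean giving delay-sensitive packets their own queue at the router. The sketch below shows a strict-priority scheduler in miniature; the packet names and classes are invented for illustration, not drawn from any ISP's equipment.

```python
# Toy strict-priority scheduler: delay-sensitive packets jump the line,
# bulk transfers wait out the congestion. Names are invented.

from collections import deque

realtime, bulk = deque(), deque()

def enqueue(packet, delay_sensitive):
    (realtime if delay_sensitive else bulk).append(packet)

def forward_next():
    """Send a waiting real-time packet if there is one; otherwise send bulk."""
    if realtime:
        return realtime.popleft()
    return bulk.popleft() if bulk else None

enqueue("video-download-chunk", delay_sensitive=False)
enqueue("live-audio-frame", delay_sensitive=True)
print(forward_next())   # 'live-audio-frame' goes first despite arriving later
```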


But the idea that an Internet service provider (ISP) would make value judgments about the packets traveling over its network makes many people uneasy. The Internet, as its name was meant to imply, is not a single network. It's a network of networks, most of which the average user has never heard of. A packet traveling long distances often has to traverse several networks. Once ISPs get in the business of discriminating between packets, what's to prevent them from giving their own customers' packets priority, to the detriment of their competitors'? Suppose an ISP has partnered with--or owns--a Web service, such as a search engine or a social-networking site. Or suppose it offers a separate service--like phone or television--that competes with Internet services. If it can treat some packets better than others, it has the means to an unfair advantage over its own rivals, or its partners', or its subsidiaries'.

The idea that the Internet should be fair--that it shouldn't pick favorites among users, service providers, applications, and types of content--is generally known as net neutrality. And it's a principle that has been much in the news lately, after its apparent violation by Comcast, the second-largest ISP in the United States.

Last summer, it became clear that Comcast was intentionally slowing down peer-to-peer traffic sent over its network by programs using the popular file-sharing protocol BitTorrent. The Federal Communications Commission agreed to investigate, in a set of hearings held at Harvard and Stanford Universities in early 2008.

It wasn't BitTorrent Inc. that had complained to the FCC, but rather a company called Vuze, based in Palo Alto, CA, which uses the BitTorrent protocol--perfectly legally--to distribute high-definition video over the Internet. As a video distributor, Vuze is in competition, however lopsided, with Comcast. By specifically degrading the performance of BitTorrent traffic, Vuze argued, Comcast was giving itself an unfair advantage over a smaller rival.

At the Harvard hearing, Comcast executive vice president David Cohen argued that his company had acted only during periods of severe congestion, and that it had interfered only with traffic being uploaded to its network by computers that weren't simultaneously performing downloads. That was a good indication, Cohen said, that the computers were unattended. By slowing the uploads, he said, Comcast wasn't hurting the absent users, and it was dramatically improving the performance of other applications running over the network.

Whatever Comcast's motivations may have been, its run-in with Vuze graphically illustrates the conflict between congestion management and the principle of net neutrality. "An operator that is just managing the cost of its service by managing congestion may well have to throttle back heavy users," says Bob Briscoe, chief researcher at BT's Networks Research Centre in Ipswich, England. "An operator that wants to pick winners and chooses to say that this certain application is a loser may also throttle back the same applications. And it's very difficult to tell the difference between the two."


To many proponents of net neutrality, the easy way out of this dilemma is for ISPs to increase the capacity of their networks. But they have little business incentive to do so. "Why should I put an enhancement into my platform if somebody else is going to make the money?" says David Clark, a senior research scientist at MIT's Computer Science and Artificial Intelligence Laboratory, who from 1981 to 1989 was the Internet's chief protocol architect. "Vuze is selling HD television with almost no capital expenses whatsoever," Clark says. Should an ISP spend millions--or billions--on hardware upgrades "so that Vuze can get into the business of delivering television over my infrastructure with no capital costs whatsoever, and I don't get any revenues from this?" For ISPs that also offer television service, the situation is worse. If an increase in network capacity helps services like Vuze gain market share, the ISP's massive capital outlay could actually reduce its revenues. "If video is no longer a product [the ISP] can mark up because it's being delivered over packets," Clark says, "he has no business model."

As Clark pointed out at the Harvard FCC hearing, ISPs do have the option of defraying capital expenses by charging heavy users more than they charge light users. But so far, most of them have resisted that approach. "What they have been reluctant to do is charge per byte," says Odlyzko, "or else have caps on usage--only so many gigabytes, beyond which you're hit with a punitive tariff." The industry "is strangely attached to this one-size-fits-all model," says Timothy Wu, a Columbia Law School professor who's generally credited with coining the term "network neutrality." "They've got people used to an all-you-can-eat pricing program," Wu says, "and it's hard to change pricing plans."

Absent a change in pricing structures, however, ISPs that want to both manage congestion and keep regulators happy are in a bind. Can technology help get them out of it?

The Last Bit
To BT's Bob Briscoe, talk of ISPs' unfair congestion-management techniques is misleading, because congestion management on the Internet was never fair. Telling computers to halve their data rates in the face of congestion, as the TCP protocol does, is fair only if all those computers are contributing equally to the congestion. But in today's Internet, some applications gobble up bandwidth more aggressively than others. If my application is using four times as much bandwidth as yours, and we both halve our transmission rates, I'm still using twice as much bandwidth as you were initially. Moreover, if my gluttony is what caused the congestion in the first place, you're being penalized for my greed. "Ideally, we would want to allow everyone the freedom to use exactly what they wanted," Briscoe says. "The problem is that congestion represents the limit on other people's freedom that my freedom causes."
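
The arithmetic is easy to check with made-up numbers:

```python
# Briscoe's fairness point in numbers (illustrative figures, not from the article).
light_before, heavy_before = 2.0, 8.0   # Mb/s: the heavy user sends four times as much

# TCP's response to congestion: every sender halves its rate.
light_after, heavy_after = light_before / 2, heavy_before / 2

print(light_after, heavy_after)      # 1.0 vs. 4.0 Mb/s
print(heavy_after / light_before)    # 2.0: still twice what the light user had to begin with
```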

Briscoe has proposed a scheme in which greedy applications can, for the most part, suck up as much bandwidth as they want, while light Internet users will see their download speeds increase--even when the network is congested. The trick is simply to allot every Internet subscriber a monthly quota of high-priority data packets that get a disproportionately large slice of bandwidth during periods of congestion. Once people exhaust their quotas, they can keep using the Internet; they'll just be at the mercy of traffic conditions.

So users will want to conserve high-priority packets. "A browser can tell how big a download is before it starts," Briscoe says, and by default, the browser would be set to use the high-priority packets only for small files. For tech-savvy users who wanted to prioritize some large file on a single occasion, however, "some little control panel might allow them to go in, just like you can go in and change the parameters of your network stack if you really want to."
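
A minimal sketch of that default client-side policy appears below. The class name, the size cutoff, and the quota accounting are all assumptions made for illustration; Briscoe's proposal doesn't prescribe this code.

```python
# Sketch of the default behavior Briscoe describes: spend the monthly
# high-priority quota only on small transfers, unless the user overrides it.
# All names and thresholds here are hypothetical.

SMALL_FILE_LIMIT = 5 * 1024 * 1024   # assumed cutoff for a "small" download: 5 MB

class PriorityQuota:
    def __init__(self, monthly_bytes):
        self.remaining = monthly_bytes

    def classify(self, download_size, user_override=False):
        """Return 'high' if this transfer should use high-priority packets."""
        wants_priority = user_override or download_size <= SMALL_FILE_LIMIT
        if wants_priority and self.remaining >= download_size:
            self.remaining -= download_size
            return "high"
        return "best-effort"   # quota spent or file too large: ordinary traffic

quota = PriorityQuota(monthly_bytes=2 * 1024**3)   # hypothetical 2 GB monthly quota
print(quota.classify(800 * 1024))                  # small page asset -> 'high'
print(quota.classify(700 * 1024**2))               # big video -> 'best-effort'
```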

Just granting users the possibility of setting traffic priorities themselves, Briscoe believes, is enough to assuage concerns about network neutrality. "I suspect that 95 percent of customers, if they were given the choice between doing that themselves or the ISP doing it for them, would just say, Oh, sod it, do it for me," Briscoe says. "The important point is they were asked. And they could have done it themselves. And I think those 5 percent that are complaining are the ones that wish they were asked."

In Briscoe's scheme, users could pay more for larger quotas of high-priority packets, but this wouldn't amount to the kind of usage cap or "punitive tariff" that Odlyzko says ISPs are wary of. Every Internet subscriber would still get unlimited downloads. Some would just get better service during periods of congestion.

To determine which packets count against a user's quota, of course, ISPs would need to know when the network is congested. And that turns out to be more complicated than it sounds. If a Comcast subscriber in New York and an EarthLink subscriber in California are exchanging data, their packets are traveling over several different networks: Comcast's, EarthLink's, and others in between. If there's congestion on one of those networks, the sending and receiving computers can tell, because some of their packets are getting lost. But if the congestion is on Comcast's network, EarthLink doesn't know about it, and vice versa. That's a problem if the ISPs are responsible for tracking their customers' packet quotas.

Briscoe is proposing that when the sending and receiving computers recognize congestion on the link between them, they indicate it to their ISPs by flagging their packets--flipping a single bit from 0 to 1. Of course, hackers could try to game the system, reprogramming their computers so that they deny that they've encountered congestion when they really have. But a computer whose congestion claims are consistently at odds with everyone else's will be easy to ferret out. Enforcing honesty is probably not the biggest problem for Briscoe's scheme.

Getting everyone to agree on it is. An Internet packet consists of a payload--a chunk of the Web page, video, or telephone call that's being transmitted--and a header. The header contains the Internet addresses of the sender and receiver, along with other information that tells routers and the receiving computer how to handle the packet. When the architects of the Internet designed the Internet Protocol (IP), they gave the packet header a bunch of extra bits for use by as-yet-unimagined services. All those extra bits have been parceled out--except one. That's the bit Briscoe wants to use.
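
The mechanics are simple to sketch: when the endpoints detect congestion, they flip a designated flag bit before sending. The header layout below is made up for illustration; it is not the real IP header or Briscoe's actual wire format.

```python
# Flipping a single congestion-flag bit in a simplified, made-up header field.
# Real IP headers are laid out differently; this only illustrates signaling
# congestion with one spare bit.

CONGESTION_FLAG = 0x01   # hypothetical position of the spare bit

def mark_congestion(header_flags: int) -> int:
    return header_flags | CONGESTION_FLAG      # flip the bit from 0 to 1

def saw_congestion(header_flags: int) -> bool:
    return bool(header_flags & CONGESTION_FLAG)

flags = 0b0000_0000
flags = mark_congestion(flags)
print(saw_congestion(flags))   # True: ISPs along the path can now count this packet
```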

Among network engineers, Briscoe's ideas have attracted a lot of attention and a lot of support. But the last bit is a hard sell, and he knows it. "The difficult [part] in doing it is getting it agreed that it should be done," he says. "Because when you want to change IP, because half of the world is now being built on top of IP, it's like arguing to change--I don't know, the rules of cricket or something."

Someday, the Internet might use an approach much like Briscoe's to manage congestion. But that day is probably years away. A bandwidth crunch may not be.

Strange Bedfellows
Most agree that the recent spike in Internet traffic is due to video downloads and peer-to-peer file transfers, but nobody's sure how much responsibility each one bears. ISPs know the traffic distributions for their own networks, but they're not disclosing them, and a given ISP's distribution may not reflect that of the Internet as a whole. Video downloads don't hog bandwidth in the way that many peer-to-peer programs do, though. And we do know that peer-to-peer traffic is the type that Comcast clamped down on.

Nonetheless, ISPs and peer-to-peer networks are not natural antagonists. A BitTorrent download may use a lot of bandwidth, but it uses it much more efficiently than a traditional download does; that's why it's so fast. In principle, peer-to-peer protocols could help distribute server load across a network, eliminating bottlenecks. The problem, says Mung Chiang, an associate professor of electrical engineering at Princeton University (and a member of last year's TR35), is the mutual ignorance that ISPs and peer-to-peer networks have maintained in the name of net neutrality.

ISPs don't just rely on the TCP protocol to handle congestion. They actively manage their networks, identifying clogged links and routing traffic around them. At the same time, computers running BitTorrent are constantly searching for new peers that can upload data more rapidly and dropping peers whose transmissions have become sluggish. The problem, according to Chiang, is that peer-to-peer networks respond to congestion much faster than ISPs do. If a bunch of computers running peer-to-peer programs are sending traffic over the same link, they may all see their downloads slow down, so they'll go looking for new peers. By the time the ISP decides to route around the congested link, the peer-to-peer traffic may have moved elsewhere: the ISP has effectively sealed off a wide-open pipe. Even worse, its new routing plan might end up sending traffic over links that have since become congested.
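
A caricature of that peer-selection loop, with invented peers and rates, shows why the traffic can move before an ISP reacts; real BitTorrent clients use more elaborate choking and unchoking rules.

```python
# Simplified peer churn: drop peers whose measured rate has slumped and ask
# for replacements. Peers, rates, and thresholds are invented.

import random

def reselect_peers(peers, min_rate_kbps=50, target_count=4):
    """Keep acceptably fast peers; replace the sluggish ones."""
    keep = {p: r for p, r in peers.items() if r >= min_rate_kbps}
    while len(keep) < target_count:
        new_peer = f"peer-{random.randint(100, 999)}"   # ask the tracker for another peer
        keep[new_peer] = random.uniform(20, 500)        # rate gets measured later; random here
    return keep

peers = {"peer-A": 300.0, "peer-B": 12.0, "peer-C": 180.0, "peer-D": 8.0}
print(reselect_peers(peers))   # the slow peers B and D are replaced within seconds
```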

But, Chiang says, "suppose the network operator tells the content distributor something about its network: the route I'm using, the metric I'm using, the way I'm updating my routes. Or the other way around: the content distributor says something about the way it treats servers or selects peers." Network efficiency improves.

An industry consortium called the P4P Working Group--led by Verizon and the New York peer-to-peer company Pando--is exploring just such a possibility. Verizon and Pando have tested a protocol called P4P, created by Haiyong Xie, a PhD student in computer science at Yale University. With P4P, both ISPs and peer-to-peer networks supply abstract information about their network layouts to a central computer, which blends the information to produce a new, hybridized network map. Peer-to-peer networks can use the map to avoid bottlenecks.
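
The essence of the approach can be sketched as a cost table: the blended map tells the peer-to-peer network roughly how expensive it is to move data between regions of the network, and candidate peers are ranked accordingly. The regions, costs, and peer names below are invented for illustration, not taken from the P4P specification.

```python
# Locality-aware peer selection in the spirit of P4P: a blended map assigns
# each peer to a network region and gives a cost for moving traffic between
# regions. All regions, costs, and peers here are invented.

region_of = {"peer-A": "isp1-east", "peer-B": "isp2-west", "peer-C": "isp1-east"}
cost = {("isp1-east", "isp1-east"): 1,    # stays inside one ISP: cheap
        ("isp1-east", "isp2-west"): 10}   # crosses network boundaries: expensive

def rank_peers(my_region, candidates):
    """Prefer peers the blended map says are cheap to reach."""
    return sorted(candidates,
                  key=lambda p: cost.get((my_region, region_of[p]), 100))

print(rank_peers("isp1-east", ["peer-A", "peer-B", "peer-C"]))
# ['peer-A', 'peer-C', 'peer-B'] -- in-network peers come first
```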

In the trial, the P4P system let Verizon customers using the Fios fiber-optic-cable service and the Pando peer-to-peer network download files three to seven times as quickly as they could have otherwise, says Laird Popkin, Pando's chief technology officer. To some extent, that was because the protocol was better at finding peers that were part of Verizon's network, as opposed to some remote network.

Scared Straight?
Every technical attempt to defeat congestion eventually runs up against the principle of net neutrality, however. Even though BitTorrent Inc. is a core member of the P4P Working Group, its chief technology officer, Eric Klinker, remains leery of the idea that peer-to-peer networks and ISPs would share information. He worries that a protocol like P4P could allow an ISP to misrepresent its network topology in an attempt to keep traffic local, so it doesn't have to pay access fees to send traffic across other networks.

Even David Clark's proposal that ISPs simply charge their customers according to usage could threaten neutrality. As Mung Chiang points out, an ISP that also sold TV service could tier its charges so that customers who watched a lot of high-definition Internet TV would always end up paying more than they would have for cable subscriptions. So the question that looms over every discussion of congestion and neutrality is, Does the government need to intervene to ensure that everyone plays fair?

For all Klinker's concerns about P4P, BitTorrent seems to have concluded that it doesn't. In February, Klinker had joined representatives of Vuze and several activist groups in a public endorsement of net neutrality legislation proposed by Massachusetts congressman Ed Markey. At the end of March, however, after the Harvard hearings, BitTorrent and Comcast issued a joint press release announcing that they would collaborate to develop methods of peer selection that reduce congestion. Comcast would take a "protocol-agnostic" approach to congestion management--targeting only heavy bandwidth users, not particular applications--and would increase the amount of bandwidth available to its customers for uploads. BitTorrent, meanwhile, agreed that "these technical issues can be worked out through private business discussions without the need for government intervention."

The FCC, says Clark, "will do something, there's no doubt, if industry does not resolve the current impasse." But, he adds, "it's possible that the middle-of-the-road answer here is that vigilance from the regulators will impose a discipline on the market that will cause the market to find the solution."

That would be welcome news to Chiang. "Often, government legislation is done by people who may not know technology that well," he says, "and therefore they tend to ignore some of the feasibility and realities of the technology."

But Timothy Wu believes that network neutrality regulations could be written at a level of generality that imposes no innovation-killing restrictions on the market, while still giving the FCC latitude to punish transgressors. There's ample precedent, he says, for broad proscriptions that federal agencies interpret on a case-by-case basis. "In employment law, we have a general rule that says you shouldn't discriminate, but in reality we have the fact that you aren't allowed to discriminate unless you have a good reason," he says. "Maybe somebody has to speak Arabic to be a spy. But saying you have to be white to serve food is not the same thing."

Ultimately, however, "the Internet's problems have always been best solved collectively, through its long history," Wu says. "It's held together by people being reasonable ... reasonable and part of a giant community. The fact that it works at all is ridiculous."

Larry Hardesty is a Technology Review senior editor.
