Friday, May 30, 2008

NASA Sends Spare Parts for ISS Toilet

The only toilet in space, now out of order, had at times put the astronauts in serious distress. Now the US space shuttle "Discovery" is to bring relief, with spare parts on board.


The shuttle is scheduled to launch on Saturday (23:02 CET) from Cape Canaveral in the US state of Florida, the US space agency NASA announced. The spare parts, including a pump, come from Russia, since the toilet is of Russian design. They have already been stowed in Discovery's cargo bay.

The problems with the toilet have been going on since last week. For a while the astronauts could relieve themselves aboard the docked Russian Soyuz capsule, but its toilet's capacity is now said to be exhausted. At the moment the men have to make do with an unspecified toilet "alternative". "This is very inconvenient, however, because it requires considerable manual intervention," NASA indicated.

Once the shuttle has docked with the space station on Monday, the ISS astronauts and the Discovery crew will have to install the pump and the other spare parts. The main task of the mission is to deliver further components for the Japanese laboratory "Kibo".

Germany Is Already Sold

Out of Office


by Claus Hecking (Dubai)

Off Dubai's coast the world is being sold off, in the form of a man-made island group. An Austrian has bought the island of Germany and wants to make it rain there every day in an air-conditioned street.


The new Germany is still sparsely populated. A seagull strolls around the interior, a few crabs have settled on the coast, but there is no sign of people far and wide. Yet nowhere in the old Germany is the climate as sunny or the beaches as untouched as here on Germany, 20 minutes by boat off Dubai's coast.

Josef Kleindienst from Mistelbach near Vienna is the new owner of Germany, one of the artificial islands of "The World", arguably the most elitist construction project of our time. For four years, dredgers piled up sand in the Gulf; the client, the state-owned property group Nakheel, paid 1.4 billion euros for it. Now the project is finished, and the 300 mounds of sand together form the outline of a world map nine by seven kilometers across, each one on its own representing a country or a region of the earth.

"Dubai hatte einfach nicht mehr genug Strand", sagt Hamza Mustafa. "und die Investoren haben nach exklusiven Projekten wie diesem hier gehungert." Der "Director The World", wie sich der Mittdreißiger auf seiner Visitenkarte nennt, hat jede der bislang angebotenen Inseln an den Mann gebracht, für Beträge zwischen 15 und 400 Mio. $. "Der Preis richtete sich nach der Nachfrage", sagt Mustafa. Und einige Inseln waren extrem begehrt.

The island group The World off Dubai can be reached only by boat or helicopter

Taiwan, for example, was the object of a months-long bidding war between investors from Nationalist and mainland China; in the end a Taiwanese billionaire managed, with an offer in the millions, to save his homeland from occupation. Australia, on the other hand, was wiped off the map: its Kuwaiti owners have renamed it Oqyana. Nakheel avoids the really big political conflicts from the outset: Israel does not exist in "The World", and neither does Palestine; Korea has been reunified just to be safe. And the island at the spot where Tibet ought to lie is called Lhasa.

Kleindienst got off comparatively cheaply. He is said to have paid between $20 million and $22 million for the 38,647 square meters of Germany, which left the real-estate tycoon enough money for Austria as well. The two mounds of sand will be handed over to him in June, and from autumn he may build on them, without any planning-approval procedure at all. And Kleindienst's concepts for his two islands are as different as Austria and Germany themselves.

On the entirely mountainless Austria, the 44-year-old wants to build the "Empress Sissi", a luxury hotel themed around the famous empress. "It is meant to symbolize hospitality: the famous Austrian hospitality," says Kleindienst.

Germany, by contrast, stands "for other values: technology and environmental protection", he says. And so a small research center for renewable energy is to be built on Germany, with the help of the Fraunhofer Institute and, apparently, of politicians in Berlin. "A delegation of Bundestag members visited us recently; they want to support us in the project," Kleindienst recounts. According to the plan, between 50 and 60 percent of Germany's energy supply is to come from solar installations, whose modules will be set up all over the island.

Kleindienst knows that a bit of green tech will not be enough to draw tourists to Germany in droves. So he has come up with a special attraction: an artificially air-conditioned street. "In this lane the temperatures will be like in Europe, and it will also rain again and again," says the head of Germany. "The Arabs love that. When it rains here, people go out into the street and dance."

"World" boss Mustafa is taken with Kleindienst's ideas. He has approved both concepts; if he did not like a plan, he could stop the development of an island. "The success of The World depends on what the investors make of it," Mustafa says in defense of his veto right. "This project is especially important for Dubai, because it is unique."

But probably not for much longer. Nakheel is already planning "The Universe", an even bigger, even more expensive island landscape in the shape of the sun, the moon and the stars. The world is not enough for Dubai.

Tuesday, May 27, 2008

Just How Realistic is "Government 2.0"?


OK, I promise this is the last thing I write about gurus for at least a year or so. I was ready to give up the topic after last week’s post, but then I heard Don Tapscott on NPR’s “Talk of the Nation” this afternoon. Don is a pleasant fellow, and I admire his ability to grab onto new topics quickly—from the business implications of the Internet, to the “net generation,” to “wikinomics,” the subject and title of his most recent book. I used to be a couple of spots ahead of him in (my own) guru list, but this year he passed me by a couple of spots (though, I must point out, there is no statistical significance to small differences). He was talking on the radio today about the transformation of government by Web 2.0/Enterprise 2.0.

I don’t doubt that these tools will have some impact on how governmental information and services are delivered. I also don’t have any doubt that they will not drive as much change as Don (and his co-author Anthony Williamson, as quoted in a CIO Insight article) apparently believe they will. Don said that “government 2.0” was the most important change for government in more than a century. Williamson (and Tapscott, to a slightly lesser degree) “foresees Web 2.0 technologies being employed to transform service delivery, make smarter policies, flatten silos and, most importantly, reinvigorate democracy.”

Of course, there may be a few hitches in this miraculous transformation. One caller who works in the U.S. federal government called in to Don today, saying something like, “I can’t even get a replacement for my six-year-old computer—how will the federal government be able to transform itself with wikis?” Don basically replied, “Sure, there will be some cultural obstacles, but this sort of change is inevitable.”

I don’t want to get into whether a few interesting technologies can transform the most hidebound of organizations, or even if these 2.0 tools somehow are more important than nuclear power and weapons, the internal combustion engine, and airplanes as tools that can transform government. No, my question is whether these exaggerations, which are typical of pronouncements emanating from the heights of gurudom, are helpful or not.

One could argue that they are helpful because they motivate us to strive for greater impact from new technologies or management approaches. Perhaps they help us keep our “eyes on the prize.” Without such optimism, maybe the pressures of everyday life would keep us from ever accomplishing anything. Maybe people are just looking for something new and different—what’s objectionable about that?

On the other hand, this sort of techno-utopian argument could be harmful. It might lead, for example, to disenchantment with the technology when it doesn’t lead to the promised result. Companies and organizations might end up spending more on the technologies than their utility warrants. If gurus were ever held accountable for their proclamations (and they hardly ever are), it might also lower the credibility of all management experts.

Of course, the proper role of the eternal optimist is not a new issue; it’s been discussed in literature at least since Voltaire created Dr. Pangloss in Candide. But are there any new wrinkles? What do you think—should management and technology gurus moderate their expressed views, or is it the more utopian and visionary the better?

Where Spam Is Born

May/June 2008


Where does all that malicious Internet content come from?

By David Talbot

Monday, May 26, 2008

Automated recognition of online images

May 24th, 2008 Posted by Roland Piquepaille @ 9:40 am


An international team of computer scientists has developed new image-recognition software. They found that 256 to 1,024 bits of data were enough to identify the subject of an image. The researchers said this ‘could lead to great advances in the automated identification of online images and, ultimately, provide a basis for computers to see like humans do.’ As an example, they’ve taken about 13 million images gathered from the Web and stored them in a searchable database of just 600 megabytes. The researchers added that using such small amounts of data per image makes it possible to search for similar pictures through millions of images on your PC in less than a second. But read more…

MIT results on image compression

Let’s see how the method works. As shown on the left, it is possible to represent images with a very small number of bits and still maintain the information needed for recognition. “Short binary codes might be enough for recognition. This figure shows images reconstructed using an increasing number of bits and a compression algorithm similar to JPEG. The number on the left represents the number of bits used to compress each image. Reconstruction is done by adding a sparsity prior on image derivatives, which reduces typical JPEG artifacts. Many images are recognizable when compressed to have around 256-1024 bits.” (Credit: Torralba et al.)
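
For readers who want the reconstruction step in equation form, a generic derivative-sparsity (total-variation-style) objective of the kind alluded to above can be written as follows; the exact formulation is in the paper, so treat this as an assumed sketch rather than the authors' precise objective:

$$\hat{I} \;=\; \arg\min_{I}\; \bigl\|\, \mathcal{C}(I) - c \,\bigr\|_2^2 \;+\; \lambda \bigl( \|\nabla_x I\|_1 + \|\nabla_y I\|_1 \bigr)$$

Here $c$ is the stored short code, $\mathcal{C}(\cdot)$ maps a candidate image to its compressed code, and the $\ell_1$ penalty on horizontal and vertical derivatives favors piecewise-smooth images, which is what suppresses the blocky JPEG-style artifacts mentioned in the caption.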

This research work has been led by Antonio Torralba, an assistant professor at MIT’s Computer Science and Artificial Intelligence Laboratory. Torralba collaborated with two other assistant professors in computer science, Rob Fergus of the Courant Institute of Mathematical Sciences at New York University and Yair Weiss of the Hebrew University of Jerusalem.

Here is a quote from Torralba about this project. “We’re trying to find very short codes for images, so that if two images have a similar sequence [of numbers], they are probably similar–composed of roughly the same object, in roughly the same configuration. If one image has been identified with a caption or title, then other images that match its numerical code would likely show the same object (such as a car, tree, or person) and so the name associated with one picture can be transferred to the others. With very large amounts of images, even relatively simple algorithms are able to perform fairly well in identifying images this way.”
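
To make the matching of short codes concrete, here is a minimal Python sketch of the general approach. It stands in for the learned codes described above with a simple random-projection hash (a common baseline, not the authors' actual method) and ranks database images by Hamming distance to a query; every name and parameter below is an illustrative assumption.

```python
import numpy as np

def train_random_projection(descriptors, n_bits=256, seed=0):
    """Fit a toy random-projection hash: random hyperplanes through the
    centered descriptor space; keeping only the sign bits yields a code."""
    rng = np.random.default_rng(seed)
    mean = descriptors.mean(axis=0)
    planes = rng.normal(size=(descriptors.shape[1], n_bits))
    return mean, planes

def encode(descriptors, mean, planes):
    """Map each image descriptor to an n_bits binary code (one row per image)."""
    return ((descriptors - mean) @ planes > 0).astype(np.uint8)

def hamming_neighbors(query_code, database_codes, k=12):
    """Return the indices of the k database images whose codes are closest
    to the query code in Hamming distance."""
    distances = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(distances)[:k]

# Toy usage: 10,000 fake 512-dimensional image descriptors, 256-bit codes.
descriptors = np.random.randn(10_000, 512)
mean, planes = train_random_projection(descriptors, n_bits=256)
codes = encode(descriptors, mean, planes)
print(hamming_neighbors(codes[0], codes, k=12))
```

The storage arithmetic is the point: at 256 bits (32 bytes) per image, 13 million images take up only around 400 MB of raw codes, which is consistent with the 600-megabyte searchable database mentioned above and small enough to scan in memory on an ordinary PC.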

MIT results on image compression

But how can you retrieve images using such a small amount of data? The illustration on the left shows representative retrieval results. “Each row shows the input image and the 12 nearest neighbors, with the ground-truth distance computed from the histograms of objects present in each image.” (Credit: Torralba et al.) Please note that other retrieval methods are also employed.

This research work will be presented in June 2008 at the IEEE Computer Vision and Pattern Recognition conference (CVPR 2008) in Anchorage, Alaska. The researchers will present a technical paper named “Small Codes and Large Image Databases for Recognition.” Here is the link to this paper (PDF format, 8 pages, 20.18 MB), from which the images seen here have been extracted.

And here is the beginning of the abstract. “The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices.”

And if you want to learn more — and have fun — please visit the 80 Million Tiny Images project page set up by Torralba.

Finally, here is the conclusion of the MIT news release. “Torralba stresses that the research is still preliminary and that there will always be problems with identifying the more-unusual subjects. It’s similar to the way we recognize language, Torralba says. ‘There are many words you hear very often, but no matter how long you have been living, there will always be one that you haven’t heard before. You always need to be able to understand [something new] from one example.’”

Sources: David Chandler, MIT News Office, May 21, 2008; and various websites


Roland Piquepaille lives in Paris, France, and he spent most of his career in software, mainly for high performance computing and visualization companies. For disclosures on Roland's industry affiliations, click here.

Buses as mobile sensing platforms?

Posted by Roland Piquepaille @ 10:15 am

According to European researchers, modern buses could be used as mobile sensing platforms, sending out live information to be used to control traffic and detect road hazards. The 3.83 million euro EU-funded MORYNE project was completed in March 2008 with a test in Berlin, Germany. During this test, the researchers ‘equipped city buses with environmental sensors and cameras, allowing the vehicles to become transmitters of measurements, warnings and live or recorded videos to anyone allowed to access the data.’ But read more…

MORYNE architecture

You can see above the MORYNE architecture. (Credit: MORYNE Project) By the way, the acronym was picked from the official name of the project, “EnhanceMent of public transpORt efficiencY through the use of mobile seNsor nEtworks.” I wonder how many people were necessary to settle on this acronym.

Here is a link to the MORYNE Project website, and here are its objectives.

  • The development of an approach for new safety- and efficiency-oriented transport management and traffic management
  • The development and validation of technologies for appropriate sensing, information processing, communication, interfaces
  • The development of an in-laboratory demonstrator
  • The validation of the proposed concepts through field testing
  • The analysis of potential impacts (social, economic, environmental)

MORYNE communication system

Above is an illustration describing the MORYNE communication system used during the tests in Berlin in March 2008. (Credit: MORYNE Project)

MORYNE bus on board unit

And here you can see the on-board unit (OBU) of the buses used during these tests. (Credit: MORYNE Project) This OBU performs four main tasks: interfacing all IT devices on the bus; computing the bus's lane position and the number of lanes alongside the bus lane; calculating fog and ice warnings; and providing a driver interface that automatically warns the driver about congestion and environmental alarms.

Now, let’s look at the ICT Results article to discover how the buses, equipped with humidity and temperature sensors, were tested. “One pair of sensors checks the road surface while the other pair analyses the air. The sensors were selected and designed to resist to pollution. They were also designed to quickly acclimatise to the environment, as buses may have to go through tunnels, tiny dark roads, bridges and city parks over the course of a few minutes.”

And how is all this information transmitted to a traffic control centre? “The data gathered by the sensors is processed on the bus, using a small but very powerful computer. The computer can then warn the bus driver if for example foggy or icy conditions are imminent. The computer can also send alerts to a public transport control centre via a variety of wireless connections, including mobile radio systems, wifi or wimax networks, and UMTS (3G). The control centre can in turn warn nearby buses of dangerous conditions through the same wireless channels. The system can also be set up to warn city traffic-monitoring centres of road conditions, making these mobile environmental sensors another way to collect information on top of an existing network.”
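
As a rough illustration of the kind of on-board processing described above, the sketch below derives fog and ice warnings from air temperature, road-surface temperature and relative humidity. The thresholds and the dew-point formula (a standard Magnus approximation) are my own assumptions for illustration; the article does not spell out MORYNE's actual warning calculations.

```python
import math
from dataclasses import dataclass

@dataclass
class SensorReading:
    air_temp_c: float     # air temperature in deg C
    road_temp_c: float    # road-surface temperature in deg C
    rel_humidity: float   # relative humidity, 0..1

def dew_point_c(temp_c, rel_humidity):
    """Magnus approximation of the dew point in deg C."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity)
    return (b * gamma) / (a - gamma)

def classify(reading):
    """Return warning codes for the control centre (illustrative thresholds only)."""
    alerts = []
    # Fog becomes likely when the air temperature approaches the dew point.
    if reading.air_temp_c - dew_point_c(reading.air_temp_c, reading.rel_humidity) < 2.0:
        alerts.append("FOG_WARNING")
    # Ice becomes likely when the road is near freezing and the air is humid.
    if reading.road_temp_c <= 1.0 and reading.rel_humidity > 0.8:
        alerts.append("ICE_WARNING")
    return alerts

print(classify(SensorReading(air_temp_c=1.5, road_temp_c=0.5, rel_humidity=0.95)))
# -> ['FOG_WARNING', 'ICE_WARNING']
```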

If such a system is installed in your city one day, be careful! “Another innovation stemming from the project is the bus-mounted road-cam, a powerful video acquisition and processing device that can detect traffic conditions around a bus. The system can be used to spot unauthorised cars in a bus lane and inform the police. The same video system can also be used to count the number of vehicles in adjoining lanes and measure their speed, helping to alert a city traffic-monitoring centre of road conditions on the ground, in real time.”

For more information, you can read the MORYNE booklet (PDF format, 8 pages, 1.19 MB). But if you really want to know more about this project, you can look at a document introducing the MORYNE Berlin demonstration held on March 11-12, 2008 (PDF format, 149 pages, 8.03 MB). The illustrations above have been respectively extracted from pages 7, 20 and 35 of this document.

Sources: ICT Results, May 26, 2008; and various websites



CeMAT 2008: Germany is World's Largest Exporter of Intralogistics Technologies

26.05.2008 09:06


Berlin (ots) - Germany is one of the world's top investment locations and the largest exporter of intralogistics technologies. The leading intralogistics trade fair CeMAT 2008 is taking place in Hannover, Germany from May 27-May 31. It will feature 1,100 exhibitors from all over the world. Visitors will be able to learn about investment possibilities and industry developments in the host country.

Intralogistics describes the internal flow of materials between different logistics nodes in a company. Germany is Europe's largest intralogistics market and the third largest worldwide. According to the industry association VDMA, German intralogistics sales for 2007 approached EUR 20 billion, an increase of 17% over the previous year. Germany is also the world's largest exporter of intralogistics technologies. In 2006, its exports approached EUR 10 billion, 20% more than the previous year and a total nearly twice as great as the second-place country.

The increasing need to move equipment and goods faster in a globalized economy is forcing businesses to seek solutions from intralogistics companies, creating demand for new market entrants. For example, the largest single-country export client of German intralogistics companies is the United States with estimated sales of over EUR 1 billion in 2007.

Emerging markets also play a growing role in the German intralogistics sector. Sales to China jumped to an estimated EUR 475 million in 2007, an increase of over 30% compared with 2005. Exports to Russia exceeded an estimated EUR 500 million in 2007, an increase of 40% over 2005. In total, the 27-member EU is the largest market for intralogistics services from Germany.

Furthermore, companies choose Germany because of its educated workforce. Over 90% of German employees in the intralogistics sector, a workforce that approaches 100,000 individuals, have either a professional qualification or a university degree.

Invest in Germany is the inward investment promotion agency of the Federal Republic of Germany. It provides investors with comprehensive support from site selection to the implementation of investment decisions. Invest in Germany can be found in the "Logistics Network" booth A16 in hall 12.

Originaltext: Invest in Germany digital press kits: http://www.presseportal.de/pm/55240 press kits via RSS: http://www.presseportal.de/rss/pm_55240.rss2

DHL Opens Air Freight Hub in Leipzig

Transport | 26.05.2008

Deutsche Post has opened its European air freight hub in Leipzig. The structurally weak region is hoping for thousands of new jobs. But not everyone is pleased about the world's most modern logistics center.

It took just two years to build a complete cargo airport at the southern edge of the regional Leipzig/Halle airport. It has direct access to a major motorway interchange and a rail terminal so that freight can also arrive and depart by train. That makes the site, on an equal footing with Wilmington in the USA and Hong Kong in Asia, the third hub for the freight business of world market leader DHL. 300 million euros were invested.

Freight containers at the DHL hub (Source: Deutsche Post World Net). Caption: The hub is considered a showcase project in a region where unemployment is high

Deutsche Post CEO Frank Appel considers the Leipzig/Halle airport "of fundamental importance for the expansion of our global express business." The company is glad that this huge project stayed fully on schedule and on budget and has now got off to a successful start, the manager, in office since March, told Deutsche Welle. "We see opportunities for enormous growth potential here because of the proximity to Eastern Europe and our position right in the middle of Europe; that is why this location is very, very important for us." Appel points to the central transport hub, the very large pool of available workers and a political environment "that supported us very strongly in implementing these plans very quickly."

Hope for blossoming landscapes

The area in the north-west of Leipzig, a Saxon city of half a million, has long been regarded as a showcase of reconstruction in eastern Germany. Numerous companies have already settled here: Porsche and BMW build cars, and the online retailer Amazon and the computer maker Dell run distribution centers. The new DHL hub is now expected to attract further companies from industry, trade and logistics, so that in addition to the 3,500 jobs planned at the parcel carrier, up to 7,000 further jobs are anticipated.

Freight being unloaded at night (Source: Deutsche Post World Net). Caption: Most employees have to work at night; there are few full-time positions

That matters for a region where the unemployment rate, at 15 percent, is well above the German average. Leipzig's mayor Burkhard Jung attributes the site's advantages partly to its geographic location: the city sits right in the middle of Europe. Beyond that, the city bet on building state-of-the-art infrastructure above and below ground, which he sees as the only way to attract such investments. "Then a very fast, flexible administration and, if I may say so, the charm of the Leipzig people, which is incredible," says Jung.

Jobs in demand

There were more than 50,000 applications for the first 2,200 DHL jobs here. The company proudly points out that two thirds of the employees come from the region, many of them previously unemployed. Still, labor costs are around 20 percent lower than at Brussels airport, from which the hub was relocated to Leipzig. Only a few have full-time jobs; most work part-time at night. Michael Leix, one of the lucky ones who were chosen, is content nonetheless. "This is my workplace," says the 40-year-old, "this is the new center of my life." He feels very comfortable, the work is fun and gives him a secure perspective. "Absolutely number one on my list."

Fear of noise

For many local residents, however, the cargo airport is anything but number one. They had sued against the investment because, in addition to the noise, they fear a drop in the value of their properties. Some of them also demonstrated in front of the entrance to the site. It is a conflict that has long weighed on the head of Leipzig/Halle airport, Eric Malitzke.

Rendering of the cargo airport (Source: Deutsche Post World Net). Caption: Many residents fear aircraft noise; 60 take-offs and landings are planned each night

The residents' concerns are taken very seriously, says the 34-year-old manager. "I think the most important thing is to face up to this conflict of goals and say: yes, this is a problem." To date, 2,200 people already work here. Most were previously unemployed, and many are over 50. "I read many studies saying that aircraft noise may make people ill. I am sure of one thing: long-term unemployment makes people ill, too," says the young manager.

The dispute will go on; after all, 60 DHL aircraft now take off and land at the new hub every night.

Henrik Böhme, currently in Leipzig

Sunday, May 25, 2008

Saving Energy in Data Centers

Tuesday, March 11, 2008


A group at Microsoft Research attacks the problem on two fronts.

By Erica Naone

Data centers are an increasingly significant source of energy consumption. A recent EPA report to Congress estimated that U.S. servers and data centers used about 61 billion kilowatt-hours of electricity in 2006, or 1.5 percent of the total electricity used in the country that year. (See also "Data Centers' Growing Power Demands.") Concern about the amount of energy eaten up by data centers has led to a slew of research in the area, including new work from Microsoft Research's Networked Embedded Computing group, which was showcased last week in Redmond, WA, at Microsoft's TechFest 2008. The work attacks the energy-consumption problem in two ways: new algorithms make it possible to free up servers and put them into sleep mode, and sensors identify which servers would be best to shut down based on the environmental conditions in different parts of the server room. By eliminating hot spots and minimizing the number of active servers, Microsoft researchers say that the system could produce as much as 30 percent in energy savings in data centers.

Monitoring the conditions: This sensor, a prototype developed by the Networked Embedded Computing group at Microsoft Research, is sensitive to heat and humidity. The group envisions using sensors like these to monitor servers in data centers, enabling significant energy savings. The sensors could also be used in homes to manage the energy use of appliances.
Credit: Microsoft Research

The sensors, says Feng Zhao, principal researcher and manager of the group, are sensitive to both heat and humidity. They're Web-enabled and can be networked and made compatible with Web services. Zhao says that he envisions the sensors, which are still in prototype form, as "a new kind of scientific instrument" that could be used in a variety of projects. In a data center, the idiosyncrasies of a building and individual servers can have a big effect on how the cooling system functions, and therefore on energy consumption. Cooling, Zhao notes, accounts for about half the energy used in data centers. (He believes that the sensors, which he says could sell for $5 to $10 apiece, could be used in homes as well as in data centers, where they could work in tandem with a Web-based energy-savings application.)

Another aspect of the research, explains Lin Xiao, a researcher with the group, is new algorithms designed to manage loads on the servers in a more energy-efficient way. Traditionally, load-balancing algorithms are used to keep traffic evenly distributed over a set of servers. The Microsoft system, in contrast, distributes the load to free up servers during off-peak times so that those servers can be put into sleep mode. The algorithms are currently designed for connection servers, which are employed with services for which users may log in for sessions of several hours, such as IM services or massively multiplayer online games. Because long sessions are common, shifting loads requires complex planning in order to avoid disconnecting users and other problems with quality of service. Xiao says that the group has developed two types of algorithms: load-forecasting algorithms, which predict a few hours ahead of time how many servers will need to be working, and load-skewing algorithms, which distribute traffic according to the predictions and power down relatively empty servers.
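
The article does not give the algorithms themselves, so the Python sketch below only illustrates the division of labor it describes: a forecast estimates how many connection servers the next few hours require, and a load-skewing policy then steers new logins toward that subset so the remaining machines slowly drain and can be put to sleep. Every function, number and name here is an illustrative assumption.

```python
import numpy as np

def forecast_connections(history, horizon=12):
    """Toy forecast: assume the next `horizon` half-hour slots repeat the
    corresponding slots from the previous day, plus a 10% safety margin.
    (The Microsoft group uses real load-forecasting models.)"""
    last_day = np.array(history[-48:])        # 48 half-hour slots per day
    return 1.10 * last_day[:horizon]

def servers_needed(forecast, capacity_per_server=100_000):
    """Number of connection servers required to carry the forecast peak."""
    return int(np.ceil(forecast.max() / capacity_per_server))

def skew_new_logins(server_ids, target_active):
    """Load skewing: send new logins only to the first `target_active` servers;
    the rest take no new sessions, drain as users log off, and can sleep."""
    return server_ids[:target_active], server_ids[target_active:]

history = [80_000] * 48                       # flat synthetic history of concurrent logins
target = servers_needed(forecast_connections(history))
accepting, draining = skew_new_logins(list(range(10)), target)
print(target, accepting, draining)
```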

The beauty of the system, Xiao says, comes when the two components work in tandem. The sensors monitor the servers to make sure they're not being overcooled (a common problem in data centers, he says, since people often set the cooling system conservatively, to protect the equipment). In addition, the sensor system watches for hot spots, which can make the air-conditioning system work inefficiently. This information is then used by the load-skewing algorithms. Knowing that you want to shut down 400 servers is one thing; the sensor data helps determine which ones to shut down.

Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and the author of several reports on data-center energy consumption, says that he sees this type of research as one step toward a big-picture vision for data centers. "There's a focus by the big players in the data-center area to try to get to a point where they can shift computing loads around, dependent on not just electricity prices, but also weather and other variations." Ultimately, Koomey says, this could mean shifting loads not only within a data center, but also from region to region.

The group ran simulations using data from the IM service Windows Live Messenger and found that the system could produce about 30 percent in energy savings, depending on the physical structure of the data center and on how the system is configured. Zhao says that the savings produced by the group's system do depend on how the user chooses to deal with some inherent trade-offs. For example, he says, Microsoft is working on several areas of research that will help in modeling the unexpected, such as load spikes. However, a user might choose to keep more servers powered on than is strictly necessary, as a reserve in case of a spike, at a corresponding loss in energy savings. "Our research shows the trade-off between energy saving and performance hit, and lets users choose the right balance," Zhao says.

Other researchers are working on developing techniques for shutting down servers at optimal times. Xiao says that the Microsoft group's work is distinguished by its focus on connection servers and the problems that come with shifting loads when users typically stay logged in for many hours.

"Servers are only being used [about] 15 percent of their maximum computing ability, on average," Koomey says, "so that means a lot of capital sitting around." He expects companies to be very motivated to implement the research that they do in this area, since "they want to make better use of their capital," he says. Wasting energy and computing power doesn't make good business sense.

What’s Online

Shoe Seller’s Secret of Success


Published: May 24, 2008
from NYT
HARVARD BUSINESS ONLINE posted a blog item on Monday morning praising the online shoe retailer Zappos. In short order, it spread around the Internet like kudzu.
Alex Eben Meyer


The item, “Why Zappos Pays New Employees to Quit — And You Should Too,” written by William C. Taylor, applauded Zappos for what he called its unmatched customer service and for its insistence on thinking beyond the current fiscal quarter (discussionleader.hbsp.com/taylor).

“There are plenty of companies with a hot product, a hip style or a fast-rising stock price that are, essentially, one-trick ponies — they deliver great short-term results, but they don’t stand for anything big or important for the long term,” Mr. Taylor wrote.

Not so at Zappos, which Mr. Taylor says will record $1 billion in sales this year, up from $70 million five years ago. Zappos delivers shoes, handbags and other products ordered over the Internet. Delivery is free and fast, and customers can return unwanted products at no charge.

“This company is fanatical about great service,” Mr. Taylor wrote, “not just satisfying customers, but amazing them.”

In an age of clueless, surly or impossible-to-reach customer service personnel, Zappos’s fanaticism helps it stand out. It is all in the hiring. After a few weeks of intensive training, new call-center employees are offered $1,000 on top of what they have earned to that point if they want to quit.

The theory, according to Mr. Taylor, is that the people who take the money “obviously don’t have the sense of commitment” Zappos requires from its employees. The company says about 10 percent of its trainees take the offer.

Zappos’s reputation preceded the Harvard Business item, of course. “Honestly, I’m getting to the point of burnout on hearing about Zappos in marketing circles,” Robert John Ed wrote on his blog (redmarketer.com). But he still felt compelled to note this week that “the secret to Zappos’s success is real commitment that permeates through every customer and every transaction, every time.”

GLASS CEILING Why are women having such a hard time achieving parity on Wall Street? John Carney (dealbreaker.com) wrote this week that the question persisted “despite some truly incredible efforts on the parts of Wall Street institutions to recruit, retain and promote women.”

Mr. Carney surmises that the gender gap in math and science may be responsible for the gender gap on Wall Street. That is not because of any deficiency on the part of women but because, on average, they are simply not as interested in math and science as men are.

“Could it be,” he asks, “that many women who leave Wall Street — or decline to show up there in the first place — are simply doing other things because they want to?”

LATE TO THE BANDWAGON Taco Bell’s latest online marketing campaign is not going over well with some bloggers. The campaign, “Why Pay Mo’ ” includes a depiction of four men dancing to a rap song, their heads replaced with pictures of a penny, a nickel, a dime and a quarter. “Yo, cuz,” the rappers implore, “go find some coins in your couch, O.K.?”

Viewers can replace the coin heads with pictures of their own. And a “rap name generator” asks, 1988-style, whether you are a “homeboy” or a “flygirl,” requests that you choose a menu item, then gives you a name like “Sir Biggie D-Dizzle Tak-O.” It is unclear what you are supposed to do with your new handle.

“Sorry, Taco Bell, but you missed the rap boat when Ronald McDonald got down with the homies in, like, 2003,” Angela Natividad wrote (adrants.com). DAN MITCHELL

How Are Humans Unique?

Idea Lab



Published: May 25, 2008
from NYT
Human beings do not like to think of themselves as animals. It is thus with decidedly mixed feelings that we regard the frequent reports that activities once thought to be uniquely human are also performed by other species: chimpanzees who make and use tools, parrots who use language, ants who teach. Is there anything left?

You might think that human beings at least enjoy the advantage of being more generally intelligent. To test this idea, my colleagues and I recently administered an array of cognitive tests — the equivalent of nonverbal I.Q. tests — to adult chimpanzees and orangutans (two of our closest primate relatives) and to 2-year-old human children. As it turned out, the children were not more skillful overall. They performed about the same as the apes on the tests that measured how well they understood the physical world of space, quantities and causality. The children performed better only on tests that measured social skills: social learning, communicating and reading the intentions of others.

But such social gifts make all the difference. Imagine a child born alone on a desert island and somehow magically kept alive. What would this child’s cognitive skills look like as an adult — with no one to teach her, no one to imitate, no pre-existing tools, no spoken or written language? She would certainly possess basic skills for dealing with the physical world, but they would not be particularly impressive. She would not invent for herself English, or Arabic numerals, or metal knives, or money. These are the products of collective cognition; they were created by human beings, in effect, putting their heads together.

When you look at apes and children in situations requiring them to put their heads together, a subtle but significant difference emerges. We have observed that children, but not chimpanzees, expect and even demand that others who have committed themselves to a joint activity stay involved and not shirk their duties. When children want to opt out of an activity, they recognize the existence of an obligation to help the group — they know that they must, in their own way, “take leave” to make amends. Humans structure their collaborative actions with joint goals and shared commitments.

Another subtle but crucial difference can be seen in communication. The great apes — chimpanzees, bonobos, gorillas and orangutans — communicate almost exclusively for the purpose of getting others to do what they want. Human infants, in addition, gesture and talk in order to share information with others — they want to be helpful. They also share their emotions and attitudes freely — as when an infant points to a passing bird for its mother and squeals with glee. This unprompted sharing of information and attitudes can be seen as a forerunner of adult gossip, which ensures that members of a group can pool their knowledge and know who is or is not behaving cooperatively. The free sharing of information also creates the possibility of pedagogy — in which adults impart information by telling and showing, and children trust and use this information with confidence. Our nearest primate relatives do not teach and learn in this manner.

Finally, human infants, but not chimpanzees, put their heads together in pretense. This seemingly useless play activity is in fact a first baby step toward the creation of distinctively human social institutions. In social institutions, participants typically endow someone or something with special powers and obligations; they create roles like president or teacher or wife. Presidents and teachers and wives operate with special powers and obligations because, and only because, we all believe and act as if they fill these roles and have these powers. Two young children pretending together that a stick is a horse have thus taken their first step on the road not just to Oz but also toward inhabiting human institutional reality.

Human beings have evolved to coordinate complex activities, to gossip and to playact together. It is because they are adapted for such cultural activities — and not because of their cleverness as individuals — that human beings are able to do so many exceptionally complex and impressive things.

Of course, human beings are not cooperating angels; they also put their heads together to do all kinds of heinous deeds. But such deeds are not usually done to those inside “the group.” Recent evolutionary models have demonstrated what politicians have long known: the best way to get people to collaborate and to think like a group is to identify an enemy and charge that “they” threaten “us.” The remarkable human capacity for cooperation thus seems to have evolved mainly for interactions within the group. Such group-mindedness is a major cause of strife and suffering in the world today. The solution — more easily said than done — is to find new ways to define the group.

Michael Tomasello is co-director of the Max Planck Institute for Evolutionary Anthropology.


Networks of the Future?

Matias Costa for The New York Times

Martin Varsavsky, in his Madrid home, started FON to let members piggyback on the wireless access of other members worldwide. But his big idea is encountering some obstacles.



SITTING on the porch at Finca Torrenova, his 800-acre retreat on this Mediterranean island, Martin Varsavsky ticks off the credentials of the group of Internet entrepreneurs finishing lunch at a nearby table.

“He has 40 million uniques, he has 50 million, and he has 8 million,” Mr. Varsavsky says, referring to the number of visitors to Web sites owned by his guests — many of whom are also business associates and have joined him for several days of brainstorming about the digital future.

These days, commercial victory on the Internet is all about scale, and Mr. Varsavsky, a 48-year-old from Argentina, can be forgiven for speaking longingly and in detail about his peers’ achievements. No stranger to success — he has had a tidy crop of new media and telecommunications hits since the 1990s — he is still struggling to bring his newest Internet venture to fruition.

Three years ago, aiming to create a global wireless network, he founded FON, a company based in Madrid that wants to unlock the potential power of the social Internet. FON’s gamble is that Internet users will share a portion of their wireless connection with strangers in exchange for access to wireless hotspots controlled by others.

The swaps, in theory, would allow “Foneros” to have ubiquitous, global wireless access while traveling for business or pleasure. But despite $55.2 million in backing from such corporate heavyweights as Google and BT, the former British Telecom, as well as newer enterprises like Skype and a handful of venture capital firms, FON and Mr. Varsavsky are still missing a crucial ingredient: scale.

At the moment, there are just 830,000 registered Foneros around the world, and only 340,000 active Wi-Fi hotspots run FON software. Because it’s built upon the concept of sharing Wi-Fi access, FON works well only if there are Foneros everywhere.

And as he struggles to expand the FON network, Mr. Varsavsky faces particular hurdles now that the Internet’s commercial side has reached a crossroads. Born a few decades ago as an anarchic, digital version of a barn-raising, the wireless Internet is now a battleground between two giant technology consortiums seeking to rein in the Web’s chaotic openness in favor of creating uniform, global access built upon wireless data networks.

The two camps, known as WiMax and L.T.E., for “long-term evolution,” are both top-down, highly structured approaches that will cost billions of dollars to build and may close a door on some of the architectural openness that led to the rapid growth of the Internet.

But their potential advantage is that closed standards can encourage the kind of growth that offers more access to mainstream consumers and business users, as occurred when Microsoft imposed a measure of conformity on software development.

For his part, Mr. Varsavsky hopes that FON can offer a middle ground — deploying the original, bottom-up strengths of the early Internet movement and at the same time wedding them to a more formal, corporate approach to expansion.

Although FON faces huge obstacles in realizing those ambitions, the company also has a growing number of devotees.

“The wireless Internet market today is fragmented and complex — it can be accessed through 3G operators, through WiMax, through private hotspots, through paid hotspots and through corporate networks,” said Michael Jackson, a partner at Mangrove Capital in London and a former FON board member. “In summary, it is a nightmare for a consumer. FON can and will change this.”

But others have their doubts.

“I know that the people at Google like this idea,” said John Saw, the chief technology officer at Clearwire, the WiMax start-up of Craig McCaw, which recently announced a $14.5 billion joint venture to build a nationwide WiMax network with Sprint, Google, Intel, Comcast and others. “But we’re skeptical.”

Undeterred, Mr. Varsavsky says that what he currently lacks in scale he can make up for in huge cost savings, particularly because FON avoids the expensive proposition of having to build a worldwide network of cellular towers and Wi-Fi nodes from scratch.

“Our army of Foneros is a much more efficient way of distributing a signal,” he says. “We believe WiMax operators will be happy to have some customers use their services for free and save billions in infrastructure deployment.”

MR. VARSAVSKY has worked overtime trying to line up more high-profile partners for FON. To that end, he traveled to Cupertino, Calif., last fall to meet with Steve Jobs, the chief executive of Apple.

During that 90-minute meeting, Mr. Varsavsky says, the two men discussed why a partnership might make sense.

Apple has sold millions of its Wi-Fi routers to residential customers, and its community of Wi-Fi users who share router access would be an ideal platform for FON. For his part, Mr. Jobs had developed an interest in Wi-Fi sharing because of the expanding number of iPhone users who are often frustrated by locked Wi-Fi access points.

But, Mr. Varsavsky says, from the moment that he and Mr. Jobs met, their discussion devolved into an argument. (Mr. Jobs did not respond to requests to comment on the meeting.)

At the outset, Mr. Varsavsky recalled, Mr. Jobs asked sharply, “Who needs your community?” and “Why should British Telecom bother to do a deal with you, and why shouldn’t people just leave their routers open for sharing?”

FON, via Associated Press

FON hopes its routers, open to anyone in the network, will help its service catch on.

Mr. Varsavsky says he responded, “Why should you bother to do a deal with AT&T? Shouldn’t iPhones just be connected freely with any cellphone network?”

Mr. Varsavsky says he left the meeting with the uncomfortable feeling that Apple might end up as a competitor rather than as a partner. But it wasn’t only because of Mr. Jobs’s legendary stubbornness that the Apple meeting apparently went awry. Mr. Varsavsky’s own substantial ego also came into play — something he freely acknowledges when he talks about how he first got into business.

“My father died and my mother was saying, ‘Martin, get a job, get a job,’ ” he recalls. “And I would go to job interviews and they would say, ‘How do you see yourself in five years?’ And I would say, ‘Well, at least as your boss!’ ”

That attitude surfaced in other forums as well. In high school in Argentina during the 1970s, he says, he persuaded classmates to open their own office supply store to compete with a store across the street from their school. He also declared his interest in left-leaning politics, which he said attracted the attention of the Argentine military junta that was purging high schools of dissidents. In the “dirty war” of 1976-83, the government killed thousands it suspected of being leftists.

An officer told the school to expel him, Mr. Varsavsky says, and he left for Brazil. Around the same time, he believes, his cousin was kidnapped and killed by the military. The Varsavsky family fled to the United States, and Mr. Varsavsky earned his undergraduate degree in economics and philosophy at New York University in 1981. He later attended Columbia University, where he received graduate degrees in international affairs and business administration.

MR. VARSAVSKY says start-ups got into his blood during graduate school, when he made his first million in a real estate foray: renovating and reselling lofts in New York.

After moving to Spain in the 1990s, he had three big telecommunications and Internet successes. He says that a $200,000 investment he made to start a long-distance company, Viatel, in 1990 was worth about $240 million when he cashed in his stake in 1999; that the 5 million euros he used to start Jazztel in 1997 has given him a stake now worth about 150 million euros; and that the 38 million euros he used to start a Spanish Internet service provider, Ya.com, in 1999 had grown to about 149 million euros when he sold the company the next year.

Then, after this first round of success, Mr. Varsavsky was hit with a loss that he describes as a striking, gut-wrenching failure. His German start-up EinsteinNet, founded in 2000 as an effort to sell software over a private fiber optic network, collapsed in 2003, leaving him with a personal loss of $50 million.

“I used the most money of my own in a company where I lost it all, and I consider it my business black eye,” he recalls, saying that he also drew a valuable lesson from the misadventure: “I don’t invest on my own. If other people don’t want to back me, it’s a sanity check.”

TO that end, Mr. Varsavsky has become a tireless networker, traveling the world to participate in a continuous parade of technology conferences and cultivating a global retinue of friends and contacts. He has also been active on the philanthropic front, earning kudos from a onetime resident of the White House.

“Martin represents the future of entrepreneurial culture and is helping to transform the way people give,” former President Bill Clinton says. “He has found different ways to use his acute business sense and creativity to improve our world and the lives of others.”

This month, Mr. Varsavsky brought together more than 70 Internet business people and technologists from Europe, Asia, Latin America and the United States for a conclave on his Menorca farm. Some guests represented the more than 20 digital enterprises in which he has a stake; others were “friends of Martin,” a loose-knit group that comprises his informal business network around the world.

The four-day conclave featured several unscripted “tech talks” in which entrepreneurs described problems they faced building their businesses. Participants included Lukasz Wejchert, the chief executive of Onet, Poland’s dominant Internet portal.

Deals with companies like Onet will be crucial if Mr. Varsavsky is to make good on his goal of having a million FON customers on each of three continents by 2010. The two companies recently came close to a deal, Mr. Wejchert says, but Onet decided that it was still too early for it to become an Internet service provider in Poland because the regulatory environment worked against new entrants.

That major players like Onet are beginning to find FON a potentially profitable partner is promising, and Mr. Varsavsky’s formidable networking abilities with politicians and entrepreneurs are also a plus. Ultimately, however, FON’s success will hinge on its strategic soundness and operational prowess — not on Mr. Varsavsky’s skills at working the cocktail circuit.

He likes to refer to FON as a “revolution,” but so far his crusade has had difficulty gathering momentum because formal corporate alliances have been slow to jell.

In Mr. Varsavsky’s approach, FON’s business is subsidized by non-Foneros — passing Web surfers who buy time for access to the network — which he can then share with FON’s customers. The approach is different from that of Boingo, a Wi-Fi aggregator based in Los Angeles that charges users a monthly fee for using hotspots while they are traveling.

Yet both FON and Boingo have faced significant resistance from Internet service providers that carefully restrict access to their customers, leaving the idea of a seamless wireless Internet based on Wi-Fi technology an unfulfilled dream so far.

Mr. Varsavsky said he initially hoped that selling $30 Wi-Fi routers embedded with FON software would be all he needed to expand the ranks of Foneros around the globe. But this approach failed to gain traction fast enough, and he shifted gears. Now he is trying to steadily stack up distribution deals with I.S.P.’s.

While some I.S.P.’s have ignored his company, Mr. Varsavsky says FON has gained ground among I.S.P.’s that are looking for a way to attract new customers in competitive markets as well as to compete with high-speed wireless cellular networks.

FON now has a growing range of alliances, including ones with the BT Group, Neuf Cegetel in France, Livedoor (a Japanese I.S.P.), and Time Warner in the United States, as well as a recent agreement with the city of Geneva, which is distributing hundreds of FON routers to residents. Now strongest in Britain, France and Japan, FON has recently made progress with new agreements with two major Japanese retailers and a Taiwanese I.S.P. And Mr. Varsavsky said he is close to major agreements in India and Russia.

FON’s losses have shrunk from more than a million euros a month to less than 500,000, Mr. Varsavsky says. He also hasn’t given up his belief that a coming generation of wireless Internet technology will eventually give FON an even bigger boost.

The first generation of Wi-Fi technology was limited in range, making it impractical for Foneros to share their routers widely. But a new wireless technology, known as 802.16, which should be more widely available to consumers over the next two years, will offer far greater ranges.

This next generation of wireless communication, called WiMax by Intel and others, may allow him to complete his dream — in effect making it possible to weave together a wireless digital network in an urban area with nothing more than an army of Foneros willing to let their routers be used as micro cell towers.

“Why should anyone have to build their own towers?” he asks.

FON’s future, he argues, will revolve around universal access to the wireless Internet. In the meantime, he faces a big obstacle in one of the world’s most lucrative communications markets: the United States, where newer cellular networks with flat-rate pricing may prove a challenge because they will provide universal high-speed coverage.

In Europe, the Internet landscape looks more promising. The European Commission’s decision last summer to place a price cap on voice calls — to make cellphones more affordable for residents traveling within the European Union — didn’t include mobile data. Recent high-speed wireless networks introduced in Europe also use per-megabyte pricing, discouraging the streaming of large files like video.

That leaves a potentially big opportunity for a widely accessible sharing solution for travelers. Yet even in Europe, there are potential roadblocks, not the least of which has been a historically inhospitable atmosphere for entrepreneurial gambits.

“Europe has a larger market than the U.S.A., but it is culturally fragmented and risk-averse,” Mr. Varsavsky says. “But the differences are narrowing, and now there are European venture capitalists and a local entrepreneurial culture.”

Yet he remains undaunted when he discusses his unfinished revolution and FON’s prospects.

“FON,” he said, “is like a telephone company built by the people.”

Monitor

The battle for Wikipedia's soul

Mar 6th 2008
From The Economist print edition

The internet: The popular online encyclopedia, written by volunteer contributors, has unlimited space. So does it matter if it includes trivia?

Illustration by Frazer Hudson

IT IS the biggest encyclopedia in history and the most successful example of “user-generated content” on the internet, with over 9m articles in 250 languages contributed by volunteers collaborating online. But Wikipedia is facing an identity crisis as it is torn between two alternative futures. It can either strive to encompass every aspect of human knowledge, no matter how trivial; or it can adopt a more stringent editorial policy and ban articles on trivial subjects, in the hope that this will enhance its reputation as a trustworthy and credible reference source. These two conflicting visions are at the heart of a bitter struggle inside Wikipedia between “inclusionists”, who believe that applying strict editorial criteria will dampen contributors' enthusiasm for the project, and “deletionists” who argue that Wikipedia should be more cautious and selective about its entries.

Consider the fictional characters of Pokémon, the Japanese game franchise with a huge global following. Almost 500 of them have biographies on the English-language version of Wikipedia (the largest edition, with over 2m entries), with a level of detail that many a real person might envy. But search for biographies of the leaders of the Solidarity movement in Poland, and you will find no more than a dozen—and they are rather poorly edited.

Inclusionists believe that the disparity between Pokémon and Solidarity biographies would disappear by itself, if only Wikipedia loosened its relatively tight editorial control and allowed anyone to add articles about almost anything. They argue that since Wikipedia exists online, it should not have the space constraints of a physical encyclopedia imposed upon it artificially. (“Wikipedia is not paper”, runs one slogan of the inclusionists.)

Surely there is no harm, they argue, in including articles about characters from television programmes who only appear in a single episode, say? After all, since most people access Wikipedia pages via search, the inclusion of articles on niche topics will not inconvenience them. People will not be more inclined to create entries about Polish union leaders if the number of Pokémon entries is reduced from 500 to 200. The ideal Wikipedia of the inclusionists would feature as many articles on as many subjects as its contributors were able to produce, as long as they were of interest to more than just a few users.

Deletionists believe that Wikipedia will be more successful if it maintains a certain relevance and quality threshold for its entries. So their ideal Wikipedia might contain biographies of the five most important leaders of Solidarity, say, and the five most important Pokémon characters, but any more than that would dilute Wikipedia's quality and compromise the brand. The presence of so many articles on trivial subjects, they argue, makes it less likely that Wikipedia will be taken seriously, so articles dealing with trivial subjects should be deleted.

The rules of the game

In practice, deciding what is trivial and what is important is not easy. How do you draw editorial distinctions between an article entitled “List of nicknames used by George W. Bush” (status: kept) and one about “Vice-presidents who have shot people” (status: deleted)? Or how about “Natasha Demkina: Russian girl who claims to have X-ray vision” (status: kept) and “The role of clowns in modern society” (status: deleted)?

To measure a subject's worthiness for inclusion (or “notability”, in the jargon of Wikipedians), all kinds of rules have been devised. So an article in an international journal counts more than a mention in a local newspaper; ten matches on Google are better than one match; and so on. These rules are used to devise official policies on particular subjects, such as the notability of pornographic stars (a Playboy appearance earns you a Wikipedia mention; starring in a low-budget movie does not) or diplomats (permanent chiefs of mission are notable, while chargés d'affaires ad interim are not).
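Purely as an illustration, those rules can be read as a crude scoring heuristic: mentions are weighted by the reach of the source, and an entry survives if its total score clears a threshold. The Python sketch below is hypothetical; the weights, threshold and function names are invented for this example and have nothing to do with Wikipedia's actual policies or software.

    # Hypothetical sketch of a notability-scoring heuristic, loosely modelled on
    # the rules described above. Weights and threshold are invented for
    # illustration; they are not Wikipedia policy.
    SOURCE_WEIGHTS = {
        "international_journal": 5.0,  # counts more than...
        "national_newspaper": 3.0,
        "local_newspaper": 1.0,        # ...a mention in a local paper
        "google_match": 0.1,           # ten matches beat one match
    }

    def notability_score(mentions):
        """mentions: dict mapping a source type to the number of mentions."""
        return sum(SOURCE_WEIGHTS.get(kind, 0.0) * count
                   for kind, count in mentions.items())

    def keep_article(mentions, threshold=5.0):
        return notability_score(mentions) >= threshold

    # One journal article clears the bar; ten Google hits alone do not.
    print(keep_article({"international_journal": 1}))   # True
    print(keep_article({"google_match": 10}))           # False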

Jimmy Wales, the founder of Wikipedia, has himself fallen foul of these tricky notability criteria. Last summer he created a short entry about a restaurant in South Africa where he had dined. The entry was promptly nominated for deletion, since the restaurant had a poor Google profile and was therefore considered not notable enough. After a lot of controversy and media coverage (which, ahem, increased the restaurant's notability), the entry was kept, but the episode prompted many questions about the adequacy of the editorial process.

As things stand, decisions whether to keep or delete articles are made after deliberations by Wikipedia's most ardent editors and administrators (the 1,000 or so most active Wikipedia contributors). Imagine you have just created a new entry, consisting of a few words. If a member of the Wikipedia elite believes that your submission fails to meet Wikipedia's notability criteria, it may be nominated for “speedy” deletion—in other words, removed right away—or “regular” deletion, which means the entry is removed after five days if nobody objects. (To avoid deletion or vandalism, many highly controversial articles, such as the entries on the Holocaust, Islam, terrorism or Mr Bush, can be “locked” to prevent editing or removal.)
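The two deletion paths can be pictured as a small decision procedure. The Python sketch below is an assumption-laden simplification of the process just described (nomination, immediate removal for “speedy” cases, a five-day objection window for “regular” cases, and locked articles that cannot be removed); it is in no way Wikipedia's real software.

    # Illustrative model of the deletion paths described in the article.
    # Field names and logic are simplifying assumptions, not MediaWiki code.
    from dataclasses import dataclass

    @dataclass
    class Entry:
        title: str
        meets_notability: bool
        locked: bool = False          # highly controversial articles can be locked

    def resolve_nomination(entry, nomination, days_elapsed=0, objections=0):
        """nomination is 'speedy' or 'regular'; returns 'kept' or 'deleted'."""
        if entry.locked or entry.meets_notability:
            return "kept"
        if nomination == "speedy":
            return "deleted"                          # removed right away
        if nomination == "regular":
            if days_elapsed >= 5 and objections == 0:
                return "deleted"                      # five quiet days, then gone
            return "kept"                             # an objection forces a debate
        raise ValueError("unknown nomination type")

    stub = Entry("A few words about my local café", meets_notability=False)
    print(resolve_nomination(stub, "regular", days_elapsed=5))   # deleted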

If your article is selected for deletion, you may choose to contest the decision, in which case you may be asked to provide further information. There is also a higher authority with the ultimate power to rule in controversial cases: the Arbitration Committee, which settles disputes that the administrators cannot resolve.

Debates about the merits of articles often drag on for weeks, draining energy and taking up far more space than the entries themselves. Such deliberations involve volleys of arcane internal acronyms and references to obscure policies and guidelines, such as WP:APT (“Avoid Peacock Terms”—terms that merely promote the subject, without giving real information) and WP:MOSMAC (a set of guidelines for “Wikipedia articles discussing the Republic of Macedonia and the Province of Macedonia, Greece”). Covert alliances and intrigues are common. Sometimes editors resort to a practice called “sock puppetry”, in which one person creates lots of accounts and pretends to be several different people in a debate so as to create the illusion of support for a particular position.

The result is that novices can quickly get lost in Wikipedia's Kafkaesque bureaucracy. According to one estimate from 2006, entries about governance and editorial policies are one of the fastest-growing areas of the site and represent around one-quarter of its content. In some ways this is a sign of Wikipedia's maturity and importance: a project of this scale needs rules to govern how it works. But the proliferation of rules, and the fact that select Wikipedians have learnt how to handle them to win arguments, now represents a danger, says Andrew Lih, a former deletionist who is now an inclusionist, and who is writing a book about Wikipedia. The behaviour of Wikipedia's self-appointed deletionist guardians, who excise anything that does not meet their standards, justifying their actions with a blizzard of acronyms, is now known as “wiki-lawyering”.

Mr Lih and other inclusionists worry that this deters people from contributing to Wikipedia, and that the welcoming environment of Wikipedia's early days is giving way to hostility and infighting. There is already some evidence that the growth rate of Wikipedia's article-base is slowing. Unofficial data from October 2007 suggests that users' activity on the site is falling, when measured by the number of times an article is edited and the number of edits per month. The official figures have not been gathered and made public for almost a year, perhaps because they reveal some unpleasant truths about Wikipedia's health.

It may be that Wikipedians have already taken care of the “low-hanging fruit”, having compiled articles on the most obvious topics (though this could, again, be taken as evidence of Wikipedia's maturity). But there is a limit to how much information a group of predominantly non-specialist volunteers, armed with a search engine, can create and edit. Producing articles about specialist subjects such as Solidarity activists, as opposed to Pokémon characters, requires expert knowledge from contributors and editors. If the information is not available elsewhere on the web, its notability cannot be assessed using Google.

Illustration by Frazer Hudson

To create a new article on Wikipedia and be sure that it will survive, you need to be able to write a “deletionist-proof” entry and ensure that you have enough online backing (such as Google matches) to convince the increasingly picky Wikipedia people of its importance. This raises the threshold for writing articles so high that very few people actually do it. Many who are excited about contributing to the site end up on the “Missing Wikipedians” page: a constantly updated list of those who have decided to stop contributing. It serves as a reminder that frustration at having work removed prompts many people to abandon the project.

Google has recently announced its own entry into the field, in the form of an encyclopedia-like project called “Knol” that will allow anybody to create entries on topics of their choice, with a voting system that means the best rise to the top. Tellingly, this approach is based on individualism rather than collaboration (Google will share ad revenues with the authors). No doubt it will produce its own arguments and unexpected consequences. But even if it does not turn out to be the Wikipedia-killer that some people imagine, it may push Wikipedia to rethink its editorial stance.

In search of the perfect battery

Case history

In search of the perfect battery

Mar 6th 2008
From The Economist print edition

Energy technology: Researchers are desperate to find a modern-day philosopher's stone: the battery technology that will make electric cars practical. Here is a brief history of their quest

Getty Images

WHEN General Motors (GM) launched the EV1, a sleek electric vehicle, with much fanfare in 1996, it was supposed to herald a revolution: the start of the modern mass-production of electric cars. At the heart of the two-seater sat a massive 533kg lead-acid battery, providing the EV1 with a range of about 110km (70 miles). Many people who leased the car were enthusiastic, but its limited range, and the fact that it took many hours to recharge, among other reasons, convinced GM and other carmakers that had launched all-electric models to abandon their efforts a few years later.

Yet today about a dozen firms are once again developing all-electric or plug-in hybrid vehicles capable of running on batteries for short trips (and, in the case of plug-in hybrids, firing up an internal-combustion engine for longer trips). Toyota's popular Prius hybrid, by contrast, can travel less than a mile on battery power alone. Tesla Motors of San Carlos, California, recently delivered its first Roadster, an all-electric two-seater with a 450kg battery pack and a range of 350km (220 miles) between charges. And both Toyota and GM hope to start selling plug-in hybrids as soon as 2010.
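The figures quoted for the EV1 and the Roadster already hint at how much battery packs have improved. A rough back-of-the-envelope comparison, using only the masses and ranges given above and deliberately ignoring differences in vehicle weight, aerodynamics and driving conditions, looks like this:

    # Rough range-per-kilogram comparison using only the figures in the text.
    # It ignores vehicle weight, aerodynamics and test conditions, so it is
    # indicative at best.
    ev1_range_km, ev1_battery_kg = 110, 533             # 1996 EV1, lead-acid pack
    roadster_range_km, roadster_battery_kg = 350, 450   # Tesla Roadster, lithium-ion pack

    ev1_km_per_kg = ev1_range_km / ev1_battery_kg                   # ~0.21 km/kg
    roadster_km_per_kg = roadster_range_km / roadster_battery_kg    # ~0.78 km/kg

    print(f"EV1: {ev1_km_per_kg:.2f} km/kg, Roadster: {roadster_km_per_kg:.2f} km/kg")
    print(f"Improvement: roughly {roadster_km_per_kg / ev1_km_per_kg:.1f}x")   # ~3.8x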

So what has changed? Aside from growing concern about climate change and a surge in the oil price, the big difference is that battery technology is getting a lot better. Rechargeable lithium-ion batteries, which helped to make the mobile-phone revolution possible in the past decade, are now expected to power the increasing electrification of the car. “They are clearly the next step,” says Mary Ann Wright, the boss of Johnson Controls-Saft Advanced Power Solutions, a joint venture that recently opened a factory in France to produce lithium-ion batteries for hybrid vehicles.

According to Menahem Anderman, a consultant based in California who specialises in the automotive-battery market, more money is being spent on research into lithium-ion batteries than all other battery chemistries combined. A big market awaits the firms that manage to adapt lithium-ion batteries for cars. Between now and 2015, Dr Anderman estimates, the worldwide market for hybrid-vehicle batteries will more than triple, to $2.3 billion. Lithium-ion batteries, the first of which should appear in hybrid cars in 2009, could make up as much as half of that, he predicts.

Compared with other types of rechargeable-battery chemistry, the lithium-ion approach has many advantages. Besides being light, it does not suffer from any memory effect, which is the loss in capacity when a battery is recharged without being fully depleted. Once in mass production, large-scale lithium-ion technology is expected to become cheaper than its closest rival, the nickel-metal-hydride battery, which is found in the Prius and most other hybrid cars.

Still, the success of the lithium-ion battery is not assured. Its biggest weakness is probably its tendency to become unstable if it is overheated, overcharged or punctured. In 2006 Sony, a Japanese electronics giant, had to recall several million laptop batteries because of a manufacturing defect that caused some batteries to burst into flames. A faulty car battery, which contains many times more stored energy, could trigger a huge explosion—something no car company could afford. Requirements for performance, durability and cost are also much more stringent for cars than for small electronic devices. So the quest is under way for the refinements and improvements that will bring lithium-ion batteries up to scratch—and lead to their presence in millions of cars.

Alessandro Volta, an Italian physicist, invented the first battery in 1800. Since then a lot of new types have been developed, though all are based on the same principle: they exploit chemical reactions between different materials to store and deliver electrical energy.

Back to battery basics

A battery is made up of one or more cells. Each cell consists of a negative electrode and a positive electrode kept apart by a separator soaked in a conductive electrolyte that allows ions, but not electrons, to travel between them. When a battery is connected to a load, a chemical reaction begins. As positively charged ions travel from the negative to the positive electrode through the electrolyte, a proportional number of negatively charged electrons must make the same journey through an external circuit, resulting in an electric current that does useful work.
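That chemistry determines the two quantities that matter for the numbers on a battery's label: how much charge the electrodes can exchange, and at what voltage. The energy stored is roughly the product of the two. A minimal worked example follows; the cell capacity, voltage and pack layout are generic illustrative values, not figures from the article.

    # Energy stored in a cell is roughly charge capacity times voltage:
    #   E [Wh] = capacity [Ah] * voltage [V]
    # The numbers below are generic illustrative values, not from the article.
    def energy_wh(capacity_ah, voltage_v):
        return capacity_ah * voltage_v

    # A typical lithium-ion laptop cell: about 2.5 Ah at a nominal 3.7 V.
    print(energy_wh(2.5, 3.7))        # ~9.25 Wh per cell

    # Cells are combined in series (raising voltage) and in parallel (raising
    # capacity) to build a pack big enough for a car.
    cells_in_series, strings_in_parallel = 100, 10
    pack_wh = energy_wh(2.5 * strings_in_parallel, 3.7 * cells_in_series)
    print(pack_wh)                    # ~9,250 Wh, i.e. roughly 9 kWh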

“Compared with computer chips, battery technology has improved very slowly over the years.”

Some batteries are based on an underlying chemical reaction that can be reversed. Such rechargeable batteries have an advantage, because they can be restored to their charged state by reversing the direction of the current flow that occurred during discharging. They can thus be reused hundreds or thousands of times. According to Joe Iorillo, an analyst at the Freedonia Group, rechargeable batteries make up almost two-thirds of the world's $56 billion battery market. Four different chemical reactions dominate the industry—each of which has pros and cons when it comes to utility, durability, cost, safety and performance.

The first rechargeable battery, the lead-acid battery, was invented in 1859 by Gaston Planté, a French physicist. The electrification of Europe and America in the late 19th century sparked the use of storage batteries for telegraphy, portable electric-lighting systems and back-up power. But the biggest market was probably electric cars. At the turn of the century battery-powered vehicles were a common sight on city streets, because they were quiet and did not emit any noxious fumes. But electric cars could not compete on range. In 1912 the electric self-starter, which replaced cranking by hand, meant that cars with internal-combustion engines left electric cars in the dust.

Nickel-cadmium cells came along around 1900 and were used in situations where more power was needed. As with lead-acid batteries, nickel-cadmium cells had a tendency to produce gases while in use, especially when being overcharged. In the late 1940s Georg Neumann, a German engineer, succeeded in fine-tuning the battery's chemistry to avoid this problem, making a sealed version possible. It started to become more widely available in the 1960s, powering devices such as electric razors and toothbrushes.

For most of the 20th century lead-acid and nickel-cadmium cells dominated the rechargeable-battery market, and both are still in use today. Although they cannot store as much energy for a given weight or volume as newer technologies, they can be extremely cost-effective. Small lead-acid battery packs provide short bursts of power to starter motors in virtually all cars; they are also used in large back-up power systems, and make up about half of the worldwide rechargeable-battery market. Nickel-cadmium batteries are used to provide emergency back-up power on planes and trains.

Time to change the batteries

In the past two decades two new rechargeable-battery types made their commercial debuts. Storing about twice as much energy as a lead-acid battery for a given weight, the nickel-metal-hydride battery appeared on the market in 1989. For much of the 1990s it was the battery of choice for powering portable electronic devices, displacing nickel-cadmium batteries in many applications. Toyota picked nickel-metal-hydride batteries for the new hybrid petrol-electric car it launched in 1997, the Prius.

Nickel-metal-hydride batteries evolved from the nickel-hydrogen batteries used to power satellites. Such batteries are expensive and bulky, since they require high-pressure hydrogen-storage tanks, but they offer high energy-density and last a long time, which makes them well suited for use in space. Nickel-metal-hydride batteries emerged as researchers looked for ways to store hydrogen in a more convenient form: within a hydrogen-absorbing metal alloy. Eventually Stanford Ovshinsky, an American inventor, and his company, now known as ECD Ovonics, succeeded in creating metal-hydride alloys with a disordered structure that improved performance.

Adapting the nickel-metal-hydride battery to the automotive environment was no small feat, since the way batteries have to work in hybrid cars is very different from the way they work in portable devices. Batteries in laptops and mobile phones are engineered to be discharged over the course of several hours or days, and they only need to last a couple of years. Hybrid-car batteries, on the other hand, are expected to work for eight to ten years and must endure hundreds of thousands of partial charge and discharge cycles as they absorb energy from regenerative braking or supply short bursts of power to aid in acceleration.

Lithium-ion batteries evolved from non-rechargeable lithium batteries, such as those used in watches and hearing aids. One reason lithium is particularly suitable for batteries is that it is the lightest metal, which means a lithium battery of a given weight can store more energy than one based on another metal (such as lead or nickel). Early rechargeable lithium batteries used pure lithium metal as the negative-electrode material, and an “intercalation” compound—a material with a lattice structure that could absorb lithium ions—as the positive electrode.

The problem with this design was that during recharging, the metallic lithium reformed unevenly at the negative electrode, creating spiky structures called “dendrites” that are unstable and reactive, and can pierce the separator and cause an explosion. So today's rechargeable lithium-ion batteries do not contain lithium in metallic form. Instead they use materials with lattice structures for both positive and negative electrodes. As the battery discharges, the lithium ions swim from the negative-electrode lattice to the positive one; during recharging, they swim back again. This to-and-fro approach is called a “rocking chair” design.

The first commercial lithium-ion battery, launched by Sony in 1991, was a rocking-chair design that used cobalt oxide for the positive electrode, and graphite (carbon) for the negative one. In the early 1990s, such batteries had an energy density of about 100 watt-hours per litre. Since then engineers have worked out ways to squeeze more than twice as much energy into a battery of the same size, in particular by reducing the width of the separator and increasing the amount of active electrode materials.

The high energy-density of lithium-ion batteries makes them the best technology for portable devices. According to Christophe Pillot of Avicenne Développement, a market-research firm based in Paris, they account for 70% of the $7 billion market for portable, rechargeable batteries. But not all lithium-ion batteries are alike. The host structures that accept lithium ions can be made using a variety of materials, explains Venkat Srinivasan, a scientist at America's Lawrence Berkeley National Laboratory. The combination of materials determines the characteristics of the battery, including its energy and power density, safety, longevity and cost. Because of this flexibility, researchers hope to develop new electrode materials that can increase the energy density of lithium-ion batteries by a factor of two or more in the future.

Hooked on lithium

The batteries commonly used in today's mobile phones and laptops still use cobalt oxide as the positive electrode. Such batteries are also starting to appear in cars, such as Tesla's Roadster. But since cobalt oxide is so reactive and costly, most experts deem it unsuitable for widespread use in hybrid or electric vehicles.

So researchers are trying other approaches. Some firms, such as Compact Power, based in Troy, Michigan, are developing batteries in which the cobalt is replaced by manganese, a material that is less expensive and more stable at high temperatures. Unfortunately, batteries with manganese-based electrodes store slightly less energy than cobalt-based ones, and also tend to have a shorter life, as manganese starts to dissolve into the electrolyte. But blending manganese with other elements, such as nickel and cobalt, can reduce these problems, says Michael Thackeray, a senior scientist at America's Argonne National Laboratory who holds several patents in this area.

In 1997 John Goodenough and his colleagues at the University of Texas published a paper in which they suggested using a new material for the positive electrode: iron phosphate. It promised to be cheaper, safer and more environmentally friendly than cobalt oxide. There were just two problems: it had a lower energy-density than cobalt oxide and suffered from low conductivity, limiting the rate at which energy could be delivered and stored by the battery. So when Yet-Ming Chiang of the Massachusetts Institute of Technology and his colleagues published a paper in 2002 in which they claimed to have dramatically boosted the material's conductivity by doping it with aluminium, niobium and zirconium, other researchers were impressed—though the exact mechanism that causes the increase in performance has since become the subject of a heated debate.

Dr Chiang's team published another paper in 2004 in which they described a way to increase performance further. Using iron-phosphate particles less than 100 nanometres across—about 100 times smaller than usual—increases the surface area of the electrode and improves the battery's ability to store and deliver energy. But again, the exact mechanism involved is somewhat controversial.

The iron-phosphate technology is being commercialised by several companies, including A123 Systems, co-founded by Dr Chiang, and Phostech Lithium, a Canadian firm that has been granted exclusive rights to manufacture and sell the material based on Dr Goodenough's patents. At the moment the two rivals are competing in the market, but their fate may be decided in court, since they are fighting a patent-infringement battle.

The quest for the perfect battery

Johnson Controls and Saft, which launched a joint venture in 2006, are taking a different approach, in which the positive electrode is made of nickel-cobalt-aluminium oxide. John Searle, the company's boss, says batteries made using its approach can last about 15 years. In 2007 Saft announced that Daimler had selected its batteries for use in a hybrid Mercedes saloon, due to go on sale in 2009. Other materials being investigated for use in future lithium-ion batteries include tin alloys and silicon.

Look, no exhaust pipe (Corbis)

At this point, it is hard to say which lithium-ion variation will prevail. Toyota, which is pursuing its own battery development with Matsushita, will not say which chemistry it favours. GM is also hedging its bets. The company is testing battery packs from both A123 Systems and Compact Power for the Chevy Volt (pictured), a forthcoming plug-in hybrid that will have an all-electric range of 40 miles and a small internal-combustion engine to recharge its battery when necessary. To ensure that the Volt's battery can always supply enough power and meet its targeted 10-year life-span, it will be kept between 30% and 80% charged at all times, says Roland Matthe of GM's energy-storage systems group.
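Keeping the pack between 30% and 80% charged means only about half of its nominal capacity is ever used, which in turn dictates how large the pack has to be for a given electric range. The small calculation below is an assumption-based sketch: the energy-consumption figure is a typical value chosen for illustration, not a GM number, so the implied pack size is indicative only.

    # Illustrative sizing calculation for a pack operated in a 30-80% charge window.
    # The consumption figure is an assumption for the sketch, not a GM number.
    soc_min, soc_max = 0.30, 0.80
    usable_fraction = soc_max - soc_min               # 0.5: only half the pack is used

    target_range_miles = 40                           # the Volt's stated all-electric range
    assumed_kwh_per_mile = 0.25                       # assumed, typical for a small car

    usable_kwh_needed = target_range_miles * assumed_kwh_per_mile    # 10 kWh
    nominal_kwh_implied = usable_kwh_needed / usable_fraction        # 20 kWh

    print(f"Usable energy needed: {usable_kwh_needed:.0f} kWh")
    print(f"Implied nominal pack size: {nominal_kwh_implied:.0f} kWh")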

GM hopes to start mass-production of the Volt in late 2010. That is ambitious, since the Volt's viability is dependent on the availability of a suitable battery technology. “It's either going to be a tremendous victory, or a terrible defeat,” says James George, a battery expert based in New Hampshire who has followed the industry for 45 years.

“We've still got a long way to go in terms of getting the ultimate battery,” says Dr Thackeray. Compared with computer chips, which have doubled in performance roughly every two years for decades, batteries have improved very slowly over their 200-year history. But high oil prices and concern over climate change mean there is now more of an incentive than ever for researchers to join the quest for better battery technologies. “It's going to be a journey”, says Ms Wright, “where we're going to be using the gas engine less and less.”