Thursday, October 23, 2008

Spain needs 100,000 qualified foreign workers, study finds.


AFP (10/23) reports, "Despite a slowing economy, Spain needs 100,000 qualified foreign workers per year until 2012 due to a shortage of IT, health and other professionals," according to a study from Etnia Comunicacion. "In total the country will need between 250,000 and 300,000 immigrants per year -- half the amount which has arrived annually in recent years -- if low-skilled workers are included." The report noted, "The shortage of highly qualified professionals in the technology sector, especially in the Internet area, as well as health professionals, engineers and consultants is starting to become urgent." The study cited Spain's low birth rate and ageing population as reasons for the continued need for immigration. The findings come as the Spanish government plans "to slash the number of jobs on offer to foreigners recruited in their countries of origin, mostly in low-skilled areas like construction and the services sector." It also "reduced the total number of professions requiring foreign workers by 35 percent."

Oil prices continue to fall


The AP (10/23) reports, "Oil prices tumbled below $67 a barrel to 16-month lows Wednesday after the government reported big increases in U.S. fuel supplies -- more evidence that the economic downturn is drying up energy demand." According to the Energy Information Administration (EIA), "crude inventories jumped by 3.2 million barrels last week, above the 2.9 million barrel increase expected by analysts." Similarly, "gasoline inventories rose by 2.7 million barrels last week, and inventories of distillates, which include heating oil and diesel, rose by 2.2 million barrels." Jim Ritterbusch of Ritterbusch and Associates said, "As we begin to see evidence that demand is leveling...then we can start discussing a possible price bottom. But it appears premature at this point."

Another AP (10/23, Jahn) article reports, "With the global economy edging toward recession, the specter of diminishing supply has been blown away by a stunning lack of demand that now has the market in a stranglehold." Chakib Khelil, "Algeria's oil minister and OPEC's current president, speaks of a 'significant' reduction from the present daily output of around 29 million barrels, and expectations are that the group could pare up to 2 million barrels from that figure." While "cuts of that magnitude have in the past led to significant crude price hikes," analysts say "there are signs that a significant upward trend may be short-lived with fear growing that the worst is yet to come for the global economy." Analyst Stephen Schork said that OPEC was "in a bit of panic," having "underestimated what happens when the bubble implodes." The Financial Times (10/23, Flood) also covers the story.

Thousands of Chinese factories close.

USA Today (10/22, Macleod, Wiseman) reports that "thousands of Chinese factories have shuttered in the past year." Among the factors are "an export-killing global slowdown" and "rising materials costs that have squeezed profit margins," as well as "a deliberate Chinese government campaign to regulate sweatshop factories out of business." Some analysts have argued "that China needs to keep annual economic growth of eight percent or nine percent to absorb the 24 million people entering the labor force every year or risk social instability." Current predictions peg China's growth at eight or nine percent in 2009. China also faces "collapsing home prices," which are expected to further hamper growth. "The good news: The forecast growth rates are still pretty impressive by any other economy's standards; Chinese exports have proved surprisingly resilient, growing nearly 22 percent in September from a year earlier; and the government in Beijing is sitting on enough cash to go on a spending spree if needed to rescue the Chinese economy."

Report indicates that Chinese machine tool demand still growing.

Plant Engineering Live (10/21) reported that "demand for machine tools in China is projected to increase 13 percent annually to 606 billion yuan in 2012, outpacing growth in most other parts of the world." According to a new study from the Freedonia Group, Inc., a Cleveland-based industry market research firm, "advances will be supported by rapid growth in durable goods production, especially for industrial machinery, transportation equipment, primary and fabricated metals, and electrical and electronics goods." The growth of China's infrastructure will also "result in increases in ceramic, glass, stone, and wood product demand, which will further contribute to machine tool market gains." Plant Engineering Live pointed out that "the report comes as Chinese economic growth has slowed somewhat due to pressures from the global economic crisis. While Chinese manufacturing grew at a 9.9 percent rate in the first three quarters of 2008, that was still 2.3 percentage points slower than the same period last year."

Study: Logistics providers expect rising costs



Frank Straube argues for more cost awareness: 40 percent of companies do not know their logistics costs (Photo: Bollig)

Berlin. "No topic occupies logistics managers as much as globalization," said Frank Straube, head of the logistics department at the Institute of Technology and Management of the Technische Universität Berlin. Straube presented this finding from his current study "Trends und Strategien in der Logistik" ("Trends and Strategies in Logistics") today at the 25th Deutscher Logistikkongress in Berlin.

Globalization thus remains one of the defining trends in logistics for industrial, retail, and service companies across all sectors. Sixty percent of industrial companies currently feel directly affected by it, a figure that rises to 78 percent looking ahead, Straube said. The retail companies surveyed followed with 44 percent today and 65 percent in the future. Reports of failed internationalization ventures by individual companies that subsequently returned to Germany do not change this development. "There is definitely no reversal of the globalization trend," affirmed Raimund Klinkner, chairman of the board of the Bundesvereinigung Logistik.

The study identified growing environmental and resource protection, rising security requirements, and technological innovation as further megatrends. "Reliability is a key goal for the companies surveyed, ahead even of cost reduction," Straube said. The primary concern is to comprehensively ensure that customer needs are satisfied. Logistics managers must nevertheless expect rising costs, he added, because the long-term trend of falling costs has been interrupted for the first time.

Logistics costs are being driven above all by rising energy, fuel, and transport prices as well as high personnel expenses. Retailers put the average share of logistics costs in total costs for 2008 at 15.9 percent, while industry puts this share at seven percent. For 2009, Straube expects logistics costs to grow by ten percent. The TU Berlin professor noted, however, that "40 percent of companies largely do not know their logistics costs."

A clear majority of the German logistics managers surveyed consider Germany competitive as a logistics location today. For 2015, however, the study shows a slight decline in competitiveness. According to Straube, this stems primarily from uncertainty over how the continuing growth in transport volume will be absorbed by the infrastructure. American and Chinese logistics managers are more optimistic in this respect, rating Germany as a logistics location considerably more positively for the future, at 83 percent (USA) and 60 percent (China). (sb)

Wikipedia and the Meaning of Truth

November/December 2008


Why the online encyclopedia's epistemology should worry those who care about traditional notions of accuracy.

By Simson L. Garfinkel

With little notice from the outside world, the community-written encyclopedia Wikipedia has redefined the commonly accepted use of the word "truth."

Credit: Raymond Beisinger
Wikipedia's Reference Policy

Wikipedia's "No Original Research" Policy

Wikipedia's "Neutral Point of View" Policy

Wikipedia's Policy on Reliability of Sources

Wikipedia's Citation Policy

Why should we care? Because Wikipedia's articles are the first- or second-ranked results for most Internet searches. Type "iron" into Google, and Wikipedia's article on the element is the top-ranked result; likewise, its article on the Iron Cross is first when the search words are "iron cross." Google's search algorithms rank a story in part by how many times it has been linked to; people are linking to Wikipedia articles a lot.

This means that the content of these articles really matters. Wikipedia's standards of inclusion--what's in and what's not--affect the work of journalists, who routinely read Wikipedia articles and then repeat the wikiclaims as "background" without bothering to cite them. These standards affect students, whose research on many topics starts (and often ends) with Wikipedia. And since I used Wikipedia to research large parts of this article, these standards are affecting you, dear reader, at this very moment.

Many people, especially academic experts, have argued that Wikipedia's articles can't be trusted, because they are written and edited by volunteers who have never been vetted. Nevertheless, studies have found that the articles are remarkably accurate. The reason is that Wikipedia's community of more than seven million registered users has organically evolved a set of policies and procedures for removing untruths. This also explains Wikipedia's explosive growth: if the stuff in Wikipedia didn't seem "true enough" to most readers, they wouldn't keep coming back to the website.

These policies have become the social contract for Wikipedia's army of apparently insomniac volunteers. Thanks to them, incorrect information generally disappears quite quickly.

So how do the Wikipedians decide what's true and what's not? On what is their epistemology based?

Unlike the laws of mathematics or science, wikitruth isn't based on principles such as consistency or observability. It's not even based on common sense or firsthand experience. Wikipedia has evolved a radically different set of epistemological standards--standards that aren't especially surprising given that the site is rooted in a Web-based community, but that should concern those of us who are interested in traditional notions of truth and accuracy. On Wikipedia, objective truth isn't all that important, actually. What makes a fact or statement fit for inclusion is that it appeared in some other publication--ideally, one that is in English and is available free online. "The threshold for inclusion in Wikipedia is verifiability, not truth," states Wikipedia's official policy on the subject.

Verifiability is one of Wikipedia's three core content policies; it was codified back in August 2003. The two others are "no original research" (December 2003) and "neutral point of view," which the Wikipedia project inherited from Nupedia, an earlier volunteer-written Web-based free encyclopedia that existed from March 2000 to September 2003 (Wikipedia's own NPOV policy was codified in December 2001). These policies have made Wikipedia a kind of academic agora where people on both sides of politically charged subjects can rationally discuss their positions, find common ground, and unemotionally document their differences. Wikipedia is successful because these policies have worked.

Unlike Wikipedia's articles, Nupedia's were written and vetted by experts. But few experts were motivated to contribute. Well, some wanted to write about their own research, but Larry Sanger, Nupedia's editor in chief, immediately put an end to that practice.

"I said, 'If it hasn't been vetted by the relevant experts, then basically we are setting ourselves up as a frontline source of new, original information, and we aren't set up to do that,'" Sanger (who is himself, ironically or not, a former philosophy instructor and by training an epistemologist) recalls telling his fellow Nupedians.

With experts barred from writing about their own work and having no incentive to write about anything else, Nupedia struggled. Then Sanger and Jimmy Wales, Nupedia's founder, decided to try a different policy on a new site, which they launched on January 15, 2001. They adopted the newly invented "wiki" technology, allowing anybody to contribute to any article--or create a new one--on any topic, simply by clicking "Edit this page."

Soon the promoters of oddball hypotheses and outlandish ideas were all over Wikipedia, causing the new site's volunteers to spend a good deal of time repairing damage--not all of it the innocent work of the misguided or deluded. (A study recently published in Communications of the Association for Computing Machinery found that 11 percent of Wikipedia articles have been vandalized at least once.) But how could Wikipedia's volunteer editors tell if something was true? The solution was to add references and footnotes to the articles, "not in order to help the reader, but in order to establish a point to the satisfaction of the [other] contributors," says Sanger, who left Wikipedia before the verifiability policy was formally adopted. (Sanger and Wales, now the chairman emeritus of the Wikimedia Foundation, fell out about the scale of Sanger's role in the creation of Wikipedia. Today, Sanger is the creator and editor in chief of Citizendium, an alternative to Wikipedia that is intended to address the inadequacy of its "reliability and quality.")

Verifiability is really an appeal to authority--not the authority of truth, but the authority of other publications. Any other publication, really. These days, information that's added to Wikipedia without an appropriate reference is likely to be slapped with a "citation needed" badge by one of Wikipedia's self-appointed editors. Remove the badge and somebody else will put it back. Keep it up and you might find yourself face to face with another kind of authority--one of the English-language Wikipedia's 1,500 administrators, who have the ability to place increasingly restrictive protections on contentious pages when the policies are ignored.

To be fair, Wikipedia's verifiability policy states that "articles should rely on reliable, third-party published sources" that themselves adhere to Wikipedia's NPOV policy. Self-published articles should generally be avoided, and non-English sources are discouraged if English articles are available, because many people who read, write, and edit En.Wikipedia (the English-language version) can read only English.

Mob Rules
In a May 2006 essay on the technology and culture website Edge, futurist Jaron Lanier called Wikipedia an example of "digital Maoism"--the closest humanity has come to a functioning mob rule.

Lanier was moved to write about Wikipedia because someone kept editing his Wikipedia entry to say that he was a film director. Lanier describes himself as a "computer scientist, composer, visual artist, and author." He is good at all those things, but he is no director. According to his essay, he made one short experimental film in the 1990s, and it was "awful."

"I have attempted to retire from directing films in the alternative universe that is the Wikipedia a number of times, but somebody always overrules me," Lanier wrote. "Every time my Wikipedia entry is corrected, within a day I'm turned into a film director again."

Since Lanier's attempted edits to his own Wikipedia entry were based on firsthand knowledge of his own career, he was in direct violation of Wikipedia's three core policies. He has a point of view; he was writing on the basis of his own original research; and what he wrote couldn't be verified by following a link to some kind of legitimate, authoritative, and verifiable publication.

Wikipedia's standard for "truth" makes good technical and legal sense, given that anyone can edit its articles. There was no way for Wikipedia, as a community, to know whether the person revising the article about Jaron Lanier was really Jaron Lanier or a vandal. So it's safer not to take people at their word, and instead to require an appeal to the authority of another publication from everybody who contributes, expert or not.

An interesting thing happens when you try to understand Wikipedia: the deeper you go, the more convoluted it becomes. Consider the verifiability policy. Wikipedia considers the "most reliable sources" to be "peer-reviewed journals and books published by university presses," followed by "university-level textbooks," then magazines, journals, "books published by respected publishing houses," and finally "mainstream newspapers" (but not the opinion pages of newspapers).

Once again, this makes sense, given Wikipedia's inability to vet the real-world identities of authors. Lanier's complaints when his Wikipedia page claimed that he was a film director couldn't be taken seriously by Wikipedia's "contributors" until Lanier persuaded the editors at Edge to print his article bemoaning the claim. This Edge article by Lanier was enough to convince the Wikipedians that the Wikipedia article about Lanier was incorrect--after all, there was a clickable link! Presumably the editors at Edge did their fact checking, so the wikiworld could now be corrected.

As fate would have it, Lanier was subsequently criticized for engaging in the wikisin of editing his own wikientry. The same criticism was leveled against me when I corrected a number of obvious errors in my own Wikipedia entry.

"Criticism" is actually a mild word for the kind of wikijustice meted out to people who are foolish enough to get caught editing their own Wikipedia entries: the entries get slapped with a banner headline that says "A major contributor to this article, or its creator, may have a conflict of interest regarding its subject matter." The banner is accompanied by a little picture showing the scales of justice tilted to the left. Wikipedia's "Autobiography" policy explains in great detail how drawing on your own knowledge to edit the Wikipedia entry about yourself violates all three of the site's cornerstone policies--and illustrates the point with yet another appeal to authority, a quotation from The Hitchhiker's Guide to the Galaxy.

But there is a problem with appealing to the authority of other people's written words: many publications don't do any fact checking at all, and many of those that do simply call up the subject of the article and ask if the writer got the facts wrong or right. For instance, Dun and Bradstreet gets the information for its small-business information reports in part by asking those very same small businesses to fill out questionnaires about themselves.

"No Original Research"
What all this means is hard to say. I am infrequently troubled by Wikipedia's unreliability. (The quality of the writing is a different subject.) As a computer scientist, I find myself using Wikipedia on a daily basis. Its discussions of algorithms, architectures, microprocessors, and other technical subjects are generally excellent. When they aren't excellent and I know better, I just fix them. And when they're wrong and I don't know better--well, I don't know any better, do I?

I've also spent quite a bit of time reviewing Wikipedia's articles about such things as the "Singularity Scalpel," the "Treaty of Algeron," and "Number Six." Search for these terms and you'll be directed to Wikipedia articles with the titles "List of Torchwood items" and "List of treaties in Star Trek," and to one about a Cylon robot played by Canadian actress Tricia Helfer. These articles all hang their wikiexistence upon scholarly references to original episodes of Dr. Who, Torchwood, Star Trek, and Battlestar Galactica--popular television shows that the Wikipedia contributors dignify with the word "canon."

I enjoy using these articles as sticks to poke at Wikipedia, but they represent a tiny percentage of Wikipedia's overall content. On the other hand, they've been an important part of Wikipedia culture from the beginning. Sanger says that early on, Wikipedia made a commitment to having a wide variety of articles: "There's plenty of disk space, and as long as there are people out there who are able to write a decent article about a subject, why not let them? ... I thought it was kind of funny and cool that people were writing articles about every character in The Lord of the Rings. I didn't regard it as a problem the way some people do now."

What's wrong with the articles about fantastical worlds is that they are at odds with Wikipedia's "no original research" rule, since almost all of them draw their "references" from the fictions themselves and not from the allegedly more reliable secondary sources. I haven't nominated these articles for speedy deletion because Wikipedia makes an exception for fiction--and because, truth be told, I enjoy reading them. And these days, most such entries are labeled as referring to fictional universes.

So what is Truth? According to Wikipedia's entry on the subject, "the term has no single definition about which the majority of professional philosophers and scholars agree." But in practice, Wikipedia's standard for inclusion has become its de facto standard for truth, and since Wikipedia is the most widely read online reference on the planet, it's the standard of truth that most people are implicitly using when they type a search term into Google or Yahoo. On Wikipedia, truth is received truth: the consensus view of a subject.

That standard is simple: something is true if it was published in a newspaper article, a magazine or journal, or a book published by a university press--or if it appeared on Dr. Who.

Simson L. Garfinkel is a contributing editor to Technology Review and a professor of computer science at the Naval Postgraduate School in Monterey, CA.

Selectively Deleting Memories

Wednesday, October 22, 2008


Research in mice suggests that it might be possible to delete specific painful memories.

By Lauren Gravitz

Amping up a chemical in the mouse brain and then triggering the animal's recall can cause erasure of those, and only those, specific memories, according to research in the most recent issue of the journal Neuron. While the study was done in mice that were genetically modified to react to the chemical, the results suggest that it might one day be possible to develop a drug for eliminating specific, long-term memories, something that could be a boon for those suffering from debilitating phobias or post-traumatic stress disorder.

Credit: Technology Review

For more than two decades, researchers have been studying the chemical--a protein called alpha-CaM kinase II--for its role in learning and memory consolidation. To better understand the protein, a few years ago, Joe Tsien, a neurobiologist at the Medical College of Georgia, in Augusta, created a mouse in which he could activate or inhibit sensitivity to alpha-CaM kinase II.

In the most recent results, Tsien found that when the mice recalled long-term memories while the protein was overexpressed in their brains, the combination appeared to selectively delete those memories. He and his collaborators first put the mice in a chamber where the animals heard a tone, then followed up the tone with a mild shock. The resulting associations: the chamber is a very bad place, and the tone foretells miserable things.

Then, a month later--enough time to ensure that the mice's long-term memory had been consolidated--the researchers placed the animals in a totally different chamber, overexpressed the protein, and played the tone. The mice showed no fear of the shock-associated sound. But these same mice, when placed in the original shock chamber, showed a classic fear response. Tsien had, in effect, erased one part of the memory (the one associated with the tone recall) while leaving the other intact.

"One thing that we're really intrigued by is that this is a selective erasure," Tsien says. "We know that erasure occurred very quickly, and was initiated by the recall itself."

Tsien notes that while the current methods can't be translated into the clinical setting, the work does identify a potential therapeutic approach. "Our work demonstrates that it's feasible to inducibly, selectively erase a memory," he says.

"The study is quite interesting from a number of points of view," says Mark Mayford, who studies the molecular basis of memory at the Scripps Research Institute, in La Jolla, CA. He notes that current treatments for memory "extinction" consist of very long-term therapy, in which patients are asked to recall fearful memories in safe situations, with the hope that the connection between the fear and the memory will gradually weaken.

"But people are very interested in devising a way where you could come up with a drug to expedite a way to do that," he says. That kind of treatment could change a memory by scrambling things up just in the neurons that are active during the specific act of the specific recollection. "That would be a very powerful thing," Mayford says.

But the puzzle is an incredibly complex one, and getting to that point will take a vast amount of additional research. "Human memory is so complicated, and we are just barely at the foot of the mountain," Tsien says.

Forgery-Proof RFID Tag

November/December 2008


By TR Editors

A California company is selling RFID tags that would take a counterfeiter years to duplicate. Microscopic differences between the tags--a side effect of normal manufacturing techniques--mean that each will yield different answers in a set of test calculations. A forger would have to correctly predict the results of 16 billion billion such calculations to be sure of accurately simulating a single tag. The tags are being marketed as a way for manufacturers to authenticate brand-name luxury goods.
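Tags of this kind are generally built on what researchers call physical unclonable functions: tiny manufacturing differences make each chip answer a "challenge" in its own unpredictable way, and a reader compares answers against pairs recorded when the tag was enrolled. The sketch below is a toy model of that challenge-response idea, not Verayo's actual circuitry or protocol; the keyed hash merely stands in for the physical variation, and all names are illustrative.

```python
import hashlib
import secrets

def tag_response(chip_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the chip's physical circuit: deterministic for one chip,
    unpredictable without it (modeled here with a keyed hash)."""
    return hashlib.sha256(chip_secret + challenge).digest()[:8]

# Enrollment: the manufacturer records challenge-response pairs for each tag.
chip_secret = secrets.token_bytes(16)   # models the manufacturing variation
enrolled = {}
for _ in range(1000):
    c = secrets.token_bytes(8)          # 8-byte challenges: 2**64 possibilities
    enrolled[c] = tag_response(chip_secret, c)

# Verification: the reader replays a previously enrolled challenge.
challenge = next(iter(enrolled))
genuine = tag_response(chip_secret, challenge) == enrolled[challenge]

# A forger without the chip must guess an 8-byte answer: one chance in 2**64,
# which is the "16 billion billion" figure in rough terms.
forged_guess = secrets.token_bytes(8)
print(genuine, forged_guess == enrolled[challenge])
```

A real reader would use each challenge only once, so a recorded response cannot simply be replayed by a counterfeit tag.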

Product: Vera X512H

Cost: Around 12 cents, depending on volume


Company: Verayo

Color E-Paper Debuts

November/December 2008


By TR Editors

A waterproof MP3 player built for bright beach days is the first device with a color "e-paper" display, meaning it has no backlighting and thus can be read in direct sunlight. The display, from Qualcomm, consists of two layers of a reflective material. Some wavelengths of light bounce off the first layer; some pass through and bounce off the second. Interference between the two beams creates the color, and electrostatic forces control the distance between the layers.
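The color selection can be sketched with the standard thin-film condition: light reflected from the two layers reinforces when twice the gap equals a whole number of wavelengths. The snippet below is an idealized model only (it ignores phase shifts, refractive index, and viewing angle, and the gap values are illustrative rather than Qualcomm's actual dimensions).

```python
def reflected_wavelengths_nm(gap_nm: float, visible=(380, 750)):
    """Visible wavelengths reinforced by a gap of `gap_nm` between the two
    reflective layers: constructive interference when 2 * gap = m * wavelength,
    for integer order m = 1, 2, ..."""
    out = []
    m = 1
    while True:
        wl = 2 * gap_nm / m
        if wl < visible[0]:          # shorter than violet: no more visible orders
            break
        if wl <= visible[1]:         # within the visible band
            out.append(round(wl))
        m += 1
    return out

# A ~225 nm gap reinforces only ~450 nm light, so that pixel reflects blue;
# a wider gap can reinforce more than one visible order.
print(reflected_wavelengths_nm(225))   # [450]
print(reflected_wavelengths_nm(650))   # [650, 433]
```

Since the gap is set electrostatically, changing the voltage changes the spacing and hence the reflected color, which is the mechanism the article describes.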

Product: Freestyle Audio player

Cost: Around that of Freestyle's previous players, which range from $80 to $100


Company: Freestyle Audio, Qualcomm

Mind your businesses


Oct 22nd 2008

Corporate tax rates are falling

AS THE effects of the financial crisis ripple out into the wider economy, businesses are struggling. With access to credit all but choked off and global demand falling, firms are keen for any help they can get. America's big companies have a friend in John McCain, who says he will cut the top federal corporate tax rate from 35% to 25%. Once state and local taxes are added, the combined rate amounts to an average 40% of profits, the second-highest among rich countries. Over the past decade, corporate-tax rates have fallen considerably, especially in the countries of the European Union.

Spreading the wealth

Income distribution


Oct 21st 2008

Where the gap between rich and poor is the greatest

ANY mention of redistribution of wealth in America would normally scupper a politician's ambitions, but Barack Obama has managed to preserve his lead in the polls while also saying that he wants to “spread the wealth around”. And there is a lot of spreading potential: income distribution in America is the widest of the 30 countries of the OECD. The top 10% (or decile) of earners have an average $87,257 of disposable income, while those in the bottom decile have $5,819, among the very lowest of any country. Britain, Canada and Luxembourg also see big differences between the richest and poorest.


Wednesday, October 22, 2008

The Flaw at the Heart of the Internet

November/December 2008


Dan Kaminsky discovered a fundamental problem and got people to care in time. We were lucky this time.

By Erica Naone

Dan Kaminsky, uncharacteristically, was not looking for bugs earlier this year when he happened upon a flaw at the core of the Internet. The security researcher was using his knowledge of Internet infrastructure to come up with a better way to stream videos to users. Kaminsky's expertise is in the Internet's domain name system (DNS), the protocol responsible for matching websites' URLs with the numeric addresses of the servers that host them. The same content can be hosted by multiple servers with several addresses, and Kaminsky thought he had a great trick for directing users to the servers best able to handle their requests at any given moment.

Normally, DNS is reliable but not nimble. When a computer--say, a server that helps direct traffic across Comcast's network--requests the numerical address associated with a given URL, it stores the answer for a period of time known as "time to live," which can be anywhere from seconds to days. This helps to reduce the number of requests the server makes. Kaminsky's idea was to bypass the time to live, allowing the server to get a fresh answer every time it wanted to know a site's address. Consequently, traffic on Comcast's network would be sent to the optimal address at every moment, rather than to whatever address had already been stored. Kaminsky was sure that the strategy could significantly speed up content distribution.
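The caching behavior described above can be sketched in a few lines. This is a minimal toy cache, not the software any ISP actually runs; the lookup function and the address it returns are illustrative assumptions.

```python
import time

class DnsCache:
    """Toy resolver cache: an answer is reused until its time to live expires."""
    def __init__(self, lookup):
        self._lookup = lookup        # function: name -> (address, ttl_seconds)
        self._cache = {}             # name -> (address, expiry_timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self._cache.get(name)
        if hit and now < hit[1]:     # still within its time to live
            return hit[0]
        address, ttl = self._lookup(name)
        self._cache[name] = (address, now + ttl)
        return address

queries = []
def upstream(name):
    """Stand-in for a trip out to the authoritative name servers."""
    queries.append(name)
    return "93.184.216.34", 300      # illustrative address, five-minute TTL

cache = DnsCache(upstream)
cache.resolve("example.com", now=0)
cache.resolve("example.com", now=100)   # answered from the cache
cache.resolve("example.com", now=400)   # TTL expired: asks upstream again
print(len(queries))  # 2
```

Kaminsky's streaming idea amounted to forcing the third case on every request, so the answer could change moment to moment; the security problem, as the article goes on to explain, is that the same bypass lets an attacker supply the fresh answer.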

It was only later, after talking casually about the idea with a friend, that Kaminsky realized his "trick" could completely break the security of the domain name system and, therefore, of the Internet itself. The time to live, it turns out, was at the core of DNS security; being able to bypass it allowed for a wide variety of attacks. Kaminsky wrote a little code to make sure the situation was as bad as he thought it was. "Once I saw it work, my stomach dropped," he says. "I thought, 'What the heck am I going to do about this? This affects everything.'"

Kaminsky's technique could be used to direct Web surfers to any Web page an attacker chose. The most obvious use is to send people to phishing sites (websites designed to trick people into entering banking passwords and other personal information, allowing an attacker to steal their identities) or other fake versions of Web pages. But the danger is even worse: protocols such as those used to deliver e-mail or for secure communications over the Internet ultimately rely on DNS. A creative attacker could use Kaminsky's technique to intercept sensitive e-mail, or to create forged versions of the certificates that ensure secure transactions between users and banking websites. "Every day I find another domino," Kaminsky says. "Another thing falls over if DNS is bad. ... I mean, literally, you look around and see anything that's using a network--anything that's using a network--and it's probably using DNS."

Kaminsky called Paul Vixie, president of the Internet Systems Consortium, a nonprofit corporation that supports several aspects of Internet infrastructure, including the software most commonly used in the domain name system. "Usually, if somebody wants to report a problem, you expect that it's going to take a fair amount of time for them to explain it--maybe a whiteboard, maybe a Word document or two," Vixie says. "In this case, it took 20 seconds for him to explain the problem, and another 20 seconds for him to answer my objections. After that, I said, 'Dan, I am speaking to you over an unsecure cell phone. Please do not ever say to anyone what you just said to me over an unsecure cell phone again.'"


Perhaps most frightening was that because the vulnerability was not located in any particular hardware or software but in the design of the DNS protocol itself, it wasn't clear how to fix it. In secret, Kaminsky and Vixie gathered together some of the top DNS experts in the world: people from the U.S. government and high-level engineers from the major manufacturers of DNS software and hardware--companies that include Cisco and Microsoft. They arranged a meeting in March at Microsoft's campus in Redmond, WA. The arrangements were so secretive and rushed, Kaminsky says, that "there were people on jets to Microsoft who didn't even know what the bug was."

Once in Redmond, the group tried to determine the extent of the flaw and sort out a possible fix. They settled on a stopgap measure that fixed most problems, would be relatively easy to deploy, and would mask the exact nature of the flaw. Because attackers commonly identify security holes by reverse-engineering patches intended to fix them, the group decided that all its members had to release the patch simultaneously (the release date would turn out to be July 8). Kaminsky also asked security researchers not to publicly speculate on the details of the flaw for 30 days after the release of the patch, in an attempt to give companies enough time to secure their servers.

On August 6, at the Black Hat conference, the annual gathering of the world's Internet security experts, Kaminsky would publicly reveal what the flaw was and how it could be exploited.

Asking for Trouble
Kaminsky has not really discovered a new attack. Instead, he has found an ingenious way to breathe life into a very old one. Indeed, the basic flaw targeted by his attack predates the Internet itself.

The foundation of DNS was laid in 1983 by Paul Mockapetris, then at the University of Southern California, in the days of ARPAnet, the U.S. Defense Department research project that linked computers at a small number of universities and research institutions and ultimately led to the Internet. The system is designed to work like a telephone company's 411 service: given a name, it looks up the numbers that will lead to the bearer of that name. DNS became necessary as ARPAnet grew beyond an individual's ability to keep track of the numerical addresses in the network.

Mockapetris, who is now chairman and chief scientist of Nominum, a provider of infrastructure software based in Redwood City, CA, designed DNS as a hierarchy. When someone types the URL for a Web page into a browser or clicks on a hyperlink, a request goes to a name server maintained by the user's Internet service provider (ISP). The ISP's server stores the numerical addresses of URLs it handles frequently--at least, until their time to live expires. But if it can't find an address, it queries one of the 13 DNS root servers, which directs the request to a name server responsible for one of the top-level domains, such as .com or .edu. That server forwards the request to a server specific to a single domain name, such as Amazon's. The forwarding continues through ever more specific name servers until the request reaches one that can either give the numerical address requested or respond that no such address exists.

As the Internet matured, it became clear that DNS was not secure enough. The process of passing a request from one server to the next gives attackers many opportunities to intervene with false responses, and the system had no safeguards to ensure that the name server answering a request was trustworthy. As early as 1989, Mockapetris says, there were instances of "cache poisoning," in which a name server was tricked into storing false information about the numerical address associated with a website.
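The caching behavior at the heart of this story can be sketched in a few lines of Python. This is a toy model of an ISP's name server, not real resolver code; the hostname, address, and one-hour TTL are made-up illustrative values.

```python
import time

class CachingResolver:
    """Toy model of an ISP name server: answers from cache until the TTL expires."""

    def __init__(self):
        self.cache = {}  # hostname -> (ip_address, expiry_time)

    def resolve(self, hostname):
        entry = self.cache.get(hostname)
        if entry and entry[1] > time.time():
            return entry[0]                       # cache hit: no upstream query
        ip, ttl = self.query_hierarchy(hostname)  # miss: walk root -> .com -> domain
        self.cache[hostname] = (ip, time.time() + ttl)
        return ip

    def query_hierarchy(self, hostname):
        # Stand-in for the real recursive lookup through the DNS hierarchy.
        return "93.184.216.34", 3600  # hypothetical answer with a one-hour TTL

r = CachingResolver()
first = r.resolve("example.com")   # triggers an upstream lookup
second = r.resolve("example.com")  # served from cache until the TTL expires
```

The point of the cache is efficiency: as long as the stored answer is fresh, no query leaves the ISP's server at all, which is exactly why a poisoned entry is so damaging.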
In the 1990s, the poisoner's job was relatively easy. The lower-level name servers are generally maintained by private entities: Amazon, for instance, controls the addresses supplied by the amazon.com name server. If a low-level name server can't find a requested address, it will either refer the requester to another name server or tell the requester the page doesn't exist. But in the '90s, the low-level server could also furnish the requester with the top-level server's address. To poison a cache, an attacker simply had to falsify that information. If an attacker tricked, say, an ISP's name server into storing the wrong address for the .com server, the attacker could hijack most of the traffic traveling over the ISP's network.

Mockapetris says several features were subsequently added to DNS to protect the system. Requesting servers stopped accepting higher-level numerical addresses from lower-level name servers. But attackers found a way around that restriction. As before, they would refer a requester back to, say, the .com server. But now the requester had to look up the .com server's address on its own. It would request the address, and the attacker would race to respond with a forged reply before the real reply arrived.

Ad hoc security measures were added to protect against this strategy, too. Now, each request to a DNS server carries a randomly generated transaction ID, one of 65,000 possible numbers, which the reply must contain as well. An attacker racing to beat a legitimate reply would also have to guess the correct transaction ID. Unfortunately, a computer can generate so many false replies so quickly that if it has enough chances, it's bound to find the correct ID. So the time to live, originally meant to keep name servers from being overburdened by too many requests, became yet another stopgap security feature. Because the requesting server will store an answer for some period of time, the attacker gets only a few chances to attempt a forgery.
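The arithmetic behind the transaction-ID defense is straightforward. A short sketch (using the exact figure of 65,536 possible 16-bit IDs, of which "65,000" is a rounding):

```python
# Probability that at least one forged reply carries the right transaction ID,
# when each forgery is an independent guess at a random 16-bit number.
IDS = 2 ** 16  # 65,536 possible transaction IDs

def success_probability(guesses):
    return 1 - (1 - 1 / IDS) ** guesses

# With the TTL limiting the attacker to a handful of races, the odds stay tiny...
print(round(success_probability(100), 4))      # ~0.0015
# ...but given enough chances, success becomes near-certain.
print(round(success_probability(500_000), 4))  # ~0.9995
```

This is why the TTL matters so much as a security stopgap: the defense is not that any single guess is hard, but that the cache denies the attacker enough guesses.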
Most of the time, when the server needs a .com address, it consults its cache rather than checking with the .com server. Kaminsky found a way to bypass these ad hoc security features--most important, the time to live. That made the system just as vulnerable as it was when cache poisoning was first discovered. Using Kaminsky's technique, an attacker gets a nearly infinite number of chances to supply a forgery.

Say an attacker wants to hijack all the e-mail that a social-networking site like Facebook or MySpace sends to Gmail accounts. He signs up for an account with the social network, and when he's prompted for an e-mail address, he supplies one that points to a domain he controls. He begins to log on to the social network but claims to have forgotten his password. When the system tries to send a new password, it does a DNS lookup that leads to the attacker's domain. But the attacker's server claims that the requested address is invalid. At this point, the attacker could refer the requester to Google's name servers and race to supply a forged response. But then he would get only one shot at cracking the transaction ID. So instead, he refers the requester to one nonexistent subdomain of google.com after another, sending a flood of phony responses for each. Each time, the requesting server will consult Google's name servers rather than its cache, since it won't have stored addresses for any of the phony URLs. The attack completely bypasses the limits set by the time to live. One of the attacker's forgeries is bound to get through. Then it's a simple matter to direct anything the requesting server intends for Google to the attacker's own servers, since the attacker appears to have authority for URLs ending in google.com. Kaminsky says he was able to pull off test attacks in as little as 10 seconds.
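The key insight, that a made-up name is never in the cache and so always forces a fresh upstream query, can be modeled with a toy resolver. The hostnames here are hypothetical, and real attack traffic is obviously far more involved; this only counts how many races the attacker gets to join.

```python
import random

cache = {}  # hostname -> ip; entries persist until their TTL expires

def lookup(hostname):
    """Return True if the resolver had to query upstream (a race the attacker can join)."""
    if hostname in cache:
        return False            # answered from cache: no forgery opportunity
    cache[hostname] = "1.2.3.4" # upstream query happens; attacker races the real reply
    return True

# Classic attack: repeated queries for the same name hit the cache after the first try.
classic_chances = sum(lookup("www.example.com") for _ in range(1000))

# Kaminsky-style attack: a new nonexistent subdomain each time is never cached,
# so every single query is a fresh chance to supply a forged reply.
kaminsky_chances = sum(
    lookup(f"{random.randrange(10**9)}.example.com") for _ in range(1000)
)
print(classic_chances, kaminsky_chances)  # classic: 1 chance; Kaminsky-style: ~1000
```

With unlimited races against a 16-bit transaction ID, one forgery is statistically bound to be accepted, which is what makes the 10-second attack time plausible.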

A Cache-Poisoning Attack
Cache poisoning causes a requesting server to store false information about the numerical address associated with a website. A basic version of the attack--without some of the more sophisticated techniques Kaminsky employs--is outlined below.
1. To begin, the attacker lures the victim's server into contacting a domain the attacker controls. The attacker could, say, claim to have forgotten a password, prompting the victim to respond by e-mail.

2. The victim performs a DNS lookup to find out where to send the e-mail. But the attacker's name server refers the victim to another server, such as Google's. Since the attacker knows that the victim will now start a DNS lookup for that server, he or she has an opportunity to attempt to poison its cache.
3. The attacker tries to supply a false response before the legitimate server can supply the real one. If the attacker guesses the right ID number, the victim accepts the forged reply, which poisons the cache.

In the Dark
On July 8, Kaminsky held the promised press conference, announcing the release of the patch and asking other researchers not to speculate on the flaw. The hardware and software vendors had settled on a patch that forces an attacker to guess a longer transaction ID. Kaminsky says that before the patch, the attacker had to make tens of thousands of attempts to successfully poison a cache. After the patch, the attacker would have to make billions. News of the flaw appeared in the New York Times, on the BBC's website, and in nearly every technical publication. Systems administrators scrambled to get the patch worked into their systems before they could be attacked.

But because Kaminsky declined to provide details of the flaw, some members of the security community were skeptical. Thomas Ptacek, a researcher at Matasano Security, posted on Twitter: "Saying it here first: doubting there's really any meat to this DNS security announcement." Dino Dai Zovi, a security researcher best known for finding ways to deliver malware to a fully patched MacBook Pro, says, "I was definitely skeptical of the nature of the vulnerability, especially because of the amount of hype and attention versus the low amount of details. Whenever I see something like that, I instantly put on my skeptic hat, because it looks a lot like someone with a vested interest rather than someone trying to get something fixed." Dai Zovi and others noted that the timing was perfect to promote Kaminsky's Black Hat appearance, and they bristled at the request to refrain from speculation.

The lack of information was particularly controversial because system administrators are often responsible for evaluating patches and deciding whether to apply them, weighing the danger of the security flaw against the disruption that the patch will cause. Because DNS is central to the operation of any Internet-dependent organization, altering it isn't something that's done lightly.
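The jump from "tens of thousands" of attempts to "billions" corresponds to adding roughly 16 more bits for the attacker to guess; the widely deployed form of the patch randomized the query's UDP source port alongside the 16-bit transaction ID. The exact bit counts below are an assumption for illustration, but the back-of-the-envelope arithmetic matches the article's figures:

```python
# Before the patch: a forged reply only has to match a 16-bit transaction ID.
before = 2 ** 16            # 65,536 possibilities: "tens of thousands" of tries

# After the patch: the reply must also arrive at a randomized source port,
# adding roughly 16 more bits of entropy to guess (an assumed figure).
after = 2 ** 16 * 2 ** 16   # ~4.3 billion possibilities: "billions" of tries

print(f"{before:,} -> {after:,}")  # 65,536 -> 4,294,967,296
```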
To make matters worse, this patch didn't work properly with certain types of corporate firewalls. Many IT professionals expressed frustration at the lack of detail, saying that they were unable to properly evaluate the patch when so much remained hidden.

Concerned by the skepticism about his claims, Kaminsky held a conference call with Ptacek and Dai Zovi, hoping to make them see how dangerous the bug was. Both came out of the call converted. But although Dai Zovi notes that much has changed since the time when hardware and software manufacturers dealt with flaws by simply denying that security researchers had identified real problems, he also says, "We don't know what to do when the vulnerabilities are in really big systems like DNS." Researchers face a dilemma, he says: they need to explain flaws in order to convince others of their severity, but a vulnerability like the one Kaminsky found is so serious that revealing its details might endanger the public.

Halvar Flake, a German security researcher, was one observer who thought that keeping quiet was the more harmful alternative. Public speculation is just what's needed, he says, to help people understand what could hit them. Flake read a few basic materials, including the German Wikipedia entry on DNS, and wrote a blog entry about what he thought Kaminsky might have found. Declaring that his guess was probably wrong, he invited other researchers to correct him. Somehow, amid the commotion his post caused in the security community, a detailed explanation of the flaw appeared on a site hosted by Ptacek's employer, Matasano Security. The explanation was quickly taken down, but not before it had proliferated across the Internet.

Chaos ensued. Kaminsky posted on Twitter, "DNS bug is public. You need to patch, or switch to [Web-based] OpenDNS, RIGHT NOW." Within days, Metasploit, a computer security project that designs sample attacks to aid in testing, released two modules exploiting Kaminsky's flaw.
Shortly after, one of the first attacks based on the DNS flaw was seen in the wild. It took over some of AT&T's servers in order to present a false Google home page, loaded with the attacker's own ads.

Out of Cookies
Thirty minutes before Kaminsky took the stage at Black Hat to reveal the details of the flaw at last, people started to flood the ballroom at Caesar's Palace in Las Vegas. The speaker preceding Kaminsky hastened to wrap things up. Seats ran out, and people sat cross-legged on every square inch of carpet. Kaminsky's grandmother, who was sitting in the front row, had baked 250 cookies for the event. There were nowhere near enough. Kaminsky walked up to the podium. "There's a lot of people out there," he said. "Holy crap."

Kaminsky is tall, and his gestures are a little awkward. As of early August, he said, more than 120 million broadband customers had been protected, as Internet service providers applied patches. Seventy percent of Fortune 500 companies had patched their systems, and an additional 15 percent were working on it. However, he added, 30 to 40 percent of name servers on the Internet were still unpatched and vulnerable to his 10-second cache-poisoning attack. Onstage, he flipped between gleeful description of his discovery's dark possibilities and attempts to muster the seriousness appropriate to their gravity. He spoke for 75 minutes, growing visibly lighter as he unburdened himself of seven months' worth of secrets. As he ended his talk, the crowd swept close to him, and he was whisked off by reporter after reporter.

Even those security experts who agreed that the vulnerability was serious were taken aback by Kaminsky's eager embrace of the media attention and his relentless effort to publicize the flaw. Later that day, Kaminsky received the Pwnie award for "most overhyped bug" from a group of security researchers. (The word "pwn," which rhymes with "own," is Internet slang for "dominate completely." Kaminsky's award is subtitled "The Pwnie for pwning the media.") Dai Zovi, presenting the award, tried to list the publications that had carried Kaminsky's story. He gave up, saying, "What weren't you in?" "GQ!" someone shouted from the audience.
Kaminsky took the stage and spat out two sentences: "Some people find bugs; some people get bugs fixed. I'm happy to be in the second category." Swinging the award--a golden toy pony--by its bright pink hair, he stalked down the long aisle of the ballroom and out the door.

Who's in Charge?
Depending on your perspective, the way Kaminsky handled the DNS flaw and its patch was either dangerous grandstanding that needlessly called public attention to the Internet vulnerability or--as Kaminsky sees it--a "media hack" necessary to train a spotlight on the bug's dangers. Either way, the story points to the troubling absence of any process for identifying and fixing critical flaws in the Internet. Because the Internet is so decentralized, there simply isn't a specific person or organization in charge of solving its problems. And though Kaminsky's flaw is especially serious, experts say it's probably not the only one in the Internet's infrastructure. Many Internet protocols weren't designed for the uses they're put to today; many of its security features were tacked on and don't address underlying vulnerabilities. "Long-term, architecturally, we have to stop assuming the network is as friendly as it is," Kaminsky says. "We're just addicted to moving sensitive information across the Internet insecurely. We can do better."

Indeed, at another security conference just days after Kaminsky's presentation at Black Hat, a team of researchers gave a talk illustrating serious flaws in the border gateway protocol, which governs routing on the Internet. Like Kaminsky, the researchers had found problems with the fundamental design of an Internet protocol. Like the DNS flaw, the problem could allow an attacker to get broad access to sensitive traffic sent over the Internet.
Many experts say that what happened with the DNS flaw represents the best-case scenario. Mischel Kwon, director of US-CERT, a division of the Department of Homeland Security that helped get out the word about the DNS bug, hopes the network of organizations that worked together in this case will do the same if other flaws emerge. Though there's no hierarchy of authority in the private sector, Kwon says, there are strong connections between companies and organizations with the power to deploy patches. She says she is confident that, considering the money and effort being poured into improving security on the Internet, outdated protocols will be brought up to date.

But that confidence isn't grounded in a well-considered strategy. What if Kaminsky hadn't had extensive connections within the security community or, worse, hadn't been committed to fixing the flaw in the first place? What if he had been a true "black hat" bent on exploiting the vulnerability he'd discovered? What if his seemingly skillful manipulation of the media had backfired, and the details of the flaw had become known before the patch was in place?

What's more, even given the good intentions of researchers like Kaminsky, fixing basic flaws in the Internet isn't easy. Experts agree that the DNS problem is no exception. Several proposals are on the table for solving it by means more reliable than a patch, mostly by reducing the trust a requesting server accords a name server. Proposals range from relatively simple fixes, such as including even more random information in the requests made to name servers, to moving the entire system over to a set of protocols that would let name servers sign their responses cryptographically. In the meantime, both Kaminsky and Vixie say attackers have started to make use of the DNS flaw, and they expect more trouble to come. Kaminsky notes that the flaw becomes particularly dangerous when exploited along with other vulnerabilities.
One such combination, he says, would allow an attacker to take over the automatic updates that a software vendor sends its customers, replacing them with malware. Kaminsky says he's spent the last several months on the phone to companies that would be attractive targets for that kind of attack, such as certificate authorities, social networks, and Internet service providers, trying to convince them to patch as soon as possible. "The scary thing," Dai Zovi says, "is how fragile [the Internet] is. ... And what are we going to do about it?"

Erica Naone is an Assistant Editor at Technology Review.

Superlenses for watching cells

Nicholas Fang, 33

University of Illinois at Urbana-Champaign


Credit: Thomas Chadwick


The resolution of the best conventional light microscopes--which, unlike higher-resolution electron microscopes, can magnify living cells--is about 400 nanometers. That's good enough to let biologists tell cells apart, but it's not good enough to let them observe the workings of organelles within the cell, such as metabolizing mitochondria, which are about 200 nanometers across. Nicholas Fang hopes that within the next few years, his technology will enable biologists to watch living cells at a resolution as fine as 15 nanometers (about the size of a protein molecule), revealing not only cell organelles but their molecular workings (see "Life Left in Light," September/October 2008).

Objects smaller than the wavelength of the light being shined onto them--several hundred nanometers, in the case of visible light--scatter the light as so-called evanescent waves. These waves move in such a way that they can't be collected and redirected by conventional lenses. But in 2005 Fang developed the first optical superlens--a device that can collect evanescent waves to soup up the performance of a light microscope.

At a small workbench in his lab, Fang stamps out nanoscale silver gratings that make it possible to convert conventional light-microscope parts into superlenses. To pattern his metal structures onto fragile glass slides and other microscope parts, he starts by coating a coverslip with a thin film of silver. Separately, he carves a pattern--the inverse of the final, desired one--into a reusable stamp. He places the stamp over the coverslip and applies an electrical voltage, causing a reaction in which the silver dissolves and is pulled into the crevices of the stamp. Once the stamp is removed, the silver coating of the coverslip is left with the grating pattern.

Using this method, Fang creates intricate nanoscale patterns in about five minutes. The stamping doesn't break the delicate devices and doesn't need to be done in a clean room. And Fang says the process should be amenable to mass production of superlenses that could turn every biologist's microscope into a nanoscope. --Katherine Bourzac

Tuesday, October 21, 2008

Making the electric grid smart

Peter L. Corsell, 30



Credit: Abby Sternberg


In today's power grid, a steady but essentially passive flow of electricity links power plants, distribution systems, and consumers. It is a "dumb, inefficient system," says Peter L. Corsell, founder and CEO of GridPoint; in order to meet peak demand, power plants must be able to generate twice as much electricity as is typically needed. So Corsell has created energy management software that, combined with hardware from GridPoint and others, allows utilities to better balance power generation and electricity demands, increasing both efficiency and reliability.

GridPoint's software allows consumers to use a personalized Web portal to set limits on electricity consumption. Using a small computer attached to a home's circuit box, utilities then measure energy consumption and control appliances such as water heaters and thermostats. "Consumers should be able to buy 74° and the utility company then sells them 74°," says Corsell. In addition to helping people conserve energy and reduce their bills, the system makes it simpler to integrate renewable energy sources such as solar cells and wind turbines into the grid.
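The idea of consumers buying an outcome (a temperature) rather than raw kilowatt-hours can be illustrated with a toy control rule. This sketch is purely illustrative, not GridPoint's actual logic; the function, thresholds, and load names are all invented for the example.

```python
def plan_loads(setpoint_f, indoor_f, grid_stress):
    """Toy demand-response rule: always honor the consumer's comfort setpoint,
    but defer flexible loads (like water heating) when the grid is stressed.
    grid_stress is a hypothetical 0.0-1.0 measure of demand on the utility."""
    actions = []
    if indoor_f > setpoint_f:
        actions.append("run air conditioning")  # comfort limit always honored
    if grid_stress < 0.8:
        actions.append("heat water now")        # cheap, off-peak energy
    else:
        actions.append("defer water heater")    # shift load away from the peak
    return actions

print(plan_loads(setpoint_f=74, indoor_f=78, grid_stress=0.9))
# ['run air conditioning', 'defer water heater']
```

The design point is that comfort-critical loads are never sacrificed; only loads with slack, such as a tank of hot water, are shifted in time to flatten the peak.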

Corsell has raised $102 million, and utilities will begin deploying the technology within the next year. For instance, Xcel Energy, a Minneapolis-based utility, has selected GridPoint's platform for its power grid project in Boulder, CO. --Brittany Sauser

Sun + Water = Fuel

November/December 2008


With catalysts created by an MIT chemist, sunlight can turn water into hydrogen. If the process can scale up, it could make solar power a dominant source of energy.

By Kevin Bullis

"I'm going to show you something I haven't showed anybody yet," said Daniel Nocera, a professor of chemistry at MIT, speaking this May to an auditorium filled with scientists and U.S. government energy officials. He asked the house manager to lower the lights. Then he started a video. "Can you see that?" he asked excitedly, pointing to the bubbles rising from a strip of material immersed in water. "Oxygen is pouring off of this electrode." Then he added, somewhat cryptically, "This is the future. We've got the leaf."

What Nocera was demonstrating was a reaction that generates oxygen from water much as green plants do during photosynthesis--an achievement that could have profound implications for the energy debate. Carried out with the help of a catalyst he developed, the reaction is the first and most difficult step in splitting water to make hydrogen gas. And efficiently generating hydrogen from water, Nocera believes, will help surmount one of the main obstacles preventing solar power from becoming a dominant source of electricity: there's no cost-effective way to store the energy collected by solar panels so that it can be used at night or during cloudy days.

Solar power has a unique potential to generate vast amounts of clean energy that doesn't contribute to global warming. But without a cheap means to store this energy, solar power can't replace fossil fuels on a large scale. In Nocera's scenario, sunlight would split water to produce versatile, easy-to-store hydrogen fuel that could later be burned in an internal-combustion generator or recombined with oxygen in a fuel cell. Even more ambitious, the reaction could be used to split seawater; in that case, running the hydrogen through a fuel cell would yield fresh water as well as electricity.

Storing energy from the sun by mimicking photosynthesis is something scientists have been trying to do since the early 1970s. In particular, they have tried to replicate the way green plants break down water. Chemists, of course, can already split water. But the process has required high temperatures, harsh alkaline solutions, or rare and expensive catalysts such as platinum. What Nocera has devised is an inexpensive catalyst that produces oxygen from water at room temperature and without caustic chemicals--the same benign conditions found in plants. Several other promising catalysts, including another that Nocera developed, could be used to complete the process and produce hydrogen gas.

Nocera sees two ways to take advantage of his breakthrough. In the first, a conventional solar panel would capture sunlight to produce electricity; in turn, that electricity would power a device called an electrolyzer, which would use his catalysts to split water. The second approach would employ a system that more closely mimics the structure of a leaf. The catalysts would be deployed side by side with special dye molecules designed to absorb sunlight; the energy captured by the dyes would drive the water-splitting reaction. Either way, solar energy would be converted into hydrogen fuel that could be easily stored and used at night--or whenever it's needed.

Nocera's audacious claims for the importance of his advance are the kind that academic chemists are usually loath to make in front of their peers. Indeed, a number of experts have questioned how well his system can be scaled up and how economical it will be. But Nocera shows no signs of backing down. "With this discovery, I totally change the dialogue," he told the audience in May. "All of the old arguments go out the window."

Leaf envy: MIT chemist Daniel Nocera has mimicked the step in photosynthesis in which green plants split water.
Credit: Christopher Harting

The Dark Side of Solar
Sunlight is the world's largest potential source of renewable energy, but that potential could easily go unrealized. Not only do solar panels not work at night, but daytime production waxes and wanes as clouds pass overhead. That's why today most solar panels--both those in solar farms built by utilities and those mounted on the roofs of houses and businesses--are connected to the electrical grid. During sunny days, when solar panels are operating at peak capacity, homeowners and companies can sell their excess power to utilities. But they generally have to rely on the grid at night, or when clouds shade the panels.

This system works only because solar power makes such a tiny contribution to overall electricity production: it meets a small fraction of 1 percent of total demand in the United States. As the contribution of solar power grows, its unreliability will become an increasingly serious problem.

If solar power grows enough to provide as little as 10 percent of total electricity, utilities will need to decide what to do when clouds move in during times of peak demand, says Ryan Wiser, a research scientist who studies electricity markets at Lawrence Berkeley National Laboratory in Berkeley, CA. Either utilities will need to operate extra natural-gas plants that can quickly ramp up to compensate for the lost power, or they'll need to invest in energy storage. The first option is currently cheaper, Wiser says: "Electrical storage is just too expensive."

But if we count on solar energy for more than about 20 percent of total electricity, he says, it will start to contribute to what's called base load power, the amount of power necessary to meet minimum demand. And base load power (which is now supplied mostly by coal-fired plants) must be provided at a relatively constant rate. Solar energy can't be harnessed for this purpose unless it can be stored on a large scale for use 24 hours a day, in good weather and bad.

In short, for solar to become a primary source of electricity, vast amounts of affordable storage will be needed. And today's options for storing electricity just aren't practical on a large enough scale, says Nathan Lewis, a professor of chemistry at Caltech. Take one of the least expensive methods: using electricity to pump water uphill and then running the water through a turbine to generate electricity later on. One kilogram of water pumped up 100 meters stores about a kilojoule of energy. In comparison, a kilogram of gasoline stores about 45,000 kilojoules. Storing enough energy this way would require massive dams and huge reservoirs that would be emptied and filled every day. And try finding enough water for that in places such as Arizona and Nevada, where sunlight is particularly abundant.
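Lewis's comparison is easy to verify: the gravitational potential energy of a mass raised through a height is m·g·h, which for 1 kg of water lifted 100 m comes to about a kilojoule, against the roughly 45,000 kJ of chemical energy quoted for 1 kg of gasoline.

```python
# Gravitational potential energy stored by pumping water uphill: E = m * g * h
mass_kg = 1.0
height_m = 100.0
g = 9.81  # gravitational acceleration, m/s^2

water_kj = mass_kg * g * height_m / 1000  # ~0.98 kJ
gasoline_kj = 45_000                      # per kilogram, the figure the article quotes

print(f"water: {water_kj:.2f} kJ, gasoline/water ratio: {gasoline_kj / water_kj:,.0f}x")
```

A factor of tens of thousands is why pumped storage demands enormous reservoirs while a fuel tank does not.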

Batteries, meanwhile, are expensive: they could add $10,000 to the cost of a typical home solar system. And although they're improving, they still store far less energy than fuels such as gasoline and hydrogen store in the form of chemical bonds. The best batteries store about 300 watt-hours of energy per kilogram, Lewis says, while gasoline stores 13,000 watt-hours per kilogram. "The numbers make it obvious that chemical fuels are the only energy-dense way to obtain massive energy storage," Lewis says. Of those fuels, not only is hydrogen potentially cleaner than gasoline, but by weight it stores much more energy--about three times as much, though it takes up more space because it's a gas.
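The figures Lewis cites translate into stark ratios. All numbers here are as quoted in the article, including the claim that hydrogen stores about three times as much energy as gasoline by weight:

```python
# Specific energy figures quoted in the article, in watt-hours per kilogram.
battery_wh_per_kg = 300
gasoline_wh_per_kg = 13_000
hydrogen_wh_per_kg = 3 * gasoline_wh_per_kg  # "about three times as much" by weight

print(f"gasoline stores {gasoline_wh_per_kg / battery_wh_per_kg:.0f}x "
      f"more energy per kg than the best batteries")  # ~43x
```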

The challenge lies in using energy from the sun to make such fuels cheaply and efficiently. This is where Nocera's efforts to mimic photosynthesis come in.

Photosynthesis in a beaker: In an experimental setup that duplicates the benign conditions found in photosynthetic plants, Daniel Nocera has demonstrated an easy and potentially cheap way to produce hydrogen gas. When a voltage is applied, cobalt and phosphate in solution (left) accumulate on an electrode to form a catalyst, which releases oxygen gas from the water as electrons flow out through the electrode. Hydrogen ions flow through a membrane; on the other side, hydrogen gas is produced by a nickel metal catalyst (Nocera has also used a platinum catalyst).
Credit: Bryan Christie

Imitating Plants
In real photosynthesis, green plants use chlorophyll to capture energy from sunlight and then use that energy to drive a series of complex chemical reactions that turn water and carbon dioxide into energy-rich carbohydrates such as starch and sugar. But what primarily interests many researchers is an early step in the process, in which a combination of proteins and inorganic catalysts helps break water efficiently into oxygen and hydrogen ions.

The field of artificial photosynthesis got off to a quick start. In the early 1970s, a graduate student at the University of Tokyo, Akira Fujishima, and his thesis advisor, Kenichi Honda, showed that electrodes made from titanium dioxide--a component of white paint--would slowly split water when exposed to light from a bright, 500-watt xenon lamp. The finding established that light could be used to split water outside of plants. In 1974, Thomas Meyer, a professor of chemistry at the University of North Carolina, Chapel Hill, showed that a ruthenium-based dye, when exposed to light, underwent chemical changes that gave it the potential to oxidize water, or pull electrons from it--the key first step in water splitting.

Ultimately, neither technique proved practical. The titanium dioxide couldn't absorb enough sunlight, and the light-induced chemical state in Meyer's dye was too transient to be useful. But the advances stimulated the imaginations of scientists. "You could look ahead and see where to go and, at least in principle, put the pieces together," Meyer says.

Over the next few decades, scientists studied the structures and materials in plants that absorb sunlight and store its energy. They found that plants carefully choreograph the movement of water molecules, electrons, and hydrogen ions--that is, protons. But much about the precise mechanisms involved remained unknown. Then, in 2004, researchers at Imperial College London identified the structure of a group of proteins and metals that is crucial for freeing oxygen from water in plants. They showed that the heart of this catalytic complex was a collection of proteins, oxygen atoms, and manganese and calcium ions that interact in specific ways.

"As soon as we saw this, we could start designing systems," says Nocera, who had been trying to fully understand the chemistry behind photosynthesis since 1984. Reading this "road map," he says, his group set out to manage protons and electrons somewhat the way plants do--but using only inorganic materials, which are more robust and stable than proteins.

Initially, Nocera didn't tackle the biggest challenge, pulling oxygen out from water. Rather, "to get our training wheels," he began with the reverse reaction: combining oxygen with protons and electrons to form water. He found that certain complex compounds based on cobalt were good catalysts for this reaction. So when it came time to try splitting water, he decided to use similar cobalt compounds.

Nocera knew that working with these compounds in water could be a problem, since cobalt can dissolve. Not surprisingly, he says, "within days we realized that cobalt was falling out of this elaborate compound that we made." With his initial attempts foiled, he decided to take a different approach. Instead of using a complex compound, he tested the catalytic activity of dissolved cobalt, with some phosphate added to the water to help the reaction. "We said, let's forget all the elaborate stuff and just use cobalt directly," he says.
Solar goes solo: Artificial photosynthesis could provide a practical way to store energy produced by solar power, freeing people’s homes from the electrical grid. In this scheme, electricity from solar panels powers an electrolyzer, which breaks water into hydrogen and oxygen. The hydrogen is stored; at night or on cloudy days, it is fed into a fuel cell to produce electricity for lights, appliances, and even electric cars. On sunny days, some of the solar power is used directly, bypassing the hydrogen production step.
Credit: Bryan Christie

The experiment worked better than Nocera and his colleagues had expected. When a current was applied to an electrode immersed in the solution, cobalt and phosphate accumulated on it in a thin film, and a dense layer of bubbles started forming in just a few minutes. Further tests confirmed that the bubbles were oxygen released by splitting the water. "Here's the luck," Nocera says. "There was no reason for us to expect that just plain cobalt with phosphate, versus cobalt being tied up in one of our complexes, would work this well. I couldn't have predicted it. The stuff that was falling out of the compounds turned out to be what we needed.

"Now we want to understand it," he continues. "I want to know why the hell cobalt in this thin film is so active. I may be able to improve it or use a different metal that's better." At the same time, he wants to start working with engineers to optimize the process and make an efficient water-splitting cell, one that incorporates catalysts for generating both oxygen and hydrogen. "We were really interested in the basic science. Can we make a catalyst that works efficiently under the conditions of photosynthesis?" he says. "The answer now is yes, we can do that. Now we've really got to get to the technology of designing a cell."

Catalyzing a Debate
Nocera's discovery has garnered a lot of attention, and not all of it has been flattering. Many chemists find his claims overstated; they don't dispute his findings, but they doubt the findings will have the consequences he imagines. "The claim that this is the answer for artificial photosynthesis is crazy," says Thomas Meyer, who has been a mentor to Nocera. He says that while Nocera's catalysts "could prove technologically important," the advance is "a research finding," and there's "no guarantee that it can be scaled up or even made practical."

Many critics' objections revolve around the inability of Nocera's lab setup to split water nearly as rapidly as commercial electrolyzers do. The faster a system splits water, the smaller the commercial unit needed to produce a given amount of hydrogen and oxygen--and smaller systems are generally cheaper.

The way to compare different catalysts is to look at their "current density"--that is, electrical current per square centimeter--when they're at their most efficient. The higher the current, the faster the catalyst can produce oxygen. Nocera reported results of 1 milliamp per square centimeter, although he says he's achieved 10 milliamps since then. Commercial electrolyzers typically run at about 1,000 milliamps per square centimeter. "At least what he's published so far would never work for a commercial electrolyzer, where the current density is 800 times to 2,000 times greater," says John Turner, a research fellow at the National Renewable Energy Laboratory in Golden, CO.
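Current density translates directly into electrode size via Faraday's law: evolving each H2 molecule takes two electrons, so a target production rate fixes the required current, and the current density fixes the area. A rough sizing sketch; the 1 kg/day target is our illustrative assumption, not a figure from the article:

```python
F = 96485.0    # Faraday constant, coulombs per mole of electrons
M_H2 = 2.016   # molar mass of H2, g/mol

def electrode_area_m2(kg_h2_per_day, current_density_ma_cm2):
    """Electrode area needed to evolve a given mass of H2 per day.

    Faraday's law: required current I = (mol H2 per second) * 2 * F,
    since each H2 molecule takes two electrons.
    """
    mol_per_s = kg_h2_per_day * 1000.0 / M_H2 / 86400.0
    amps = mol_per_s * 2 * F
    area_cm2 = amps / (current_density_ma_cm2 / 1000.0)
    return area_cm2 / 1e4  # cm^2 -> m^2

# 1 kg of hydrogen per day (illustrative target):
area_commercial = electrode_area_m2(1.0, 1000)  # ~0.11 m^2 at 1,000 mA/cm^2
area_published  = electrode_area_m2(1.0, 1)     # ~110 m^2 at the published 1 mA/cm^2
```

The thousandfold gap in current density becomes a thousandfold gap in electrode area, which is the substance of Turner's objection.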

Other experts question the whole principle of converting sunlight into electricity, then into a chemical fuel, and then back into electricity again. They suggest that while batteries store far less energy than chemical fuels, they are nevertheless far more efficient, because using electricity to make fuels and then using the fuels to generate electricity wastes energy at every step. It would be better, they say, to focus on improving battery technology or other similar forms of electrical storage, rather than on developing water splitters and fuel cells. As Ryan Wiser puts it, "Electrolysis is [currently] inefficient, so why would you do it?"
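The critics' efficiency argument can be made concrete by chaining per-step losses. The round-trip figures below are illustrative assumptions of ours, not numbers from the article:

```python
# Hypothetical step efficiencies (illustrative assumptions, not article figures)
ELECTROLYZER_EFF = 0.70   # electricity -> hydrogen
FUEL_CELL_EFF    = 0.50   # hydrogen -> electricity
BATTERY_EFF      = 0.90   # battery charge/discharge round trip

# Losses multiply along the electricity -> fuel -> electricity path:
hydrogen_round_trip = ELECTROLYZER_EFF * FUEL_CELL_EFF  # ~0.35
```

Under these assumptions only about 35 percent of the stored energy comes back as electricity, versus roughly 90 percent for a battery, even though the hydrogen path stores far more energy per kilogram.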

The Artificial Leaf
Michael Grätzel, however, may have a clever way to turn Nocera's discovery to practical use. A professor of chemistry and chemical engineering at the École Polytechnique Fédérale in Lausanne, Switzerland, he was one of the first people Nocera told about his new catalyst. "He was so excited," Grätzel says. "He took me to a restaurant and bought a tremendously expensive bottle of wine."

In 1991, Grätzel invented a promising new type of solar cell. It uses a dye containing ruthenium, which acts much like the chlorophyll in a plant, absorbing light and releasing electrons. In Grätzel's solar cell, however, the electrons don't set off a water-splitting reaction. Instead, they're collected by a film of titanium dioxide and directed through an external circuit, generating electricity. Grätzel now thinks that he can integrate his solar cell and Nocera's catalyst into a single device that captures the energy from sunlight and uses it to split water.

If he's right, it would be a significant step toward making a device that, in many ways, truly resembles a leaf. The idea is that Grätzel's dye would take the place of the electrode on which the catalyst forms in Nocera's system. The dye itself, when exposed to light, can generate the voltage needed to assemble the catalyst. "The dye acts like a molecular wire that conducts charges away," Grätzel says. The catalyst then assembles where it's needed, right on the dye. Once the catalyst is formed, the sunlight absorbed by the dye drives the reactions that split water. Grätzel says that the device could be more efficient and cheaper than using a separate solar panel and electrolyzer.

Another possibility that Nocera is investigating is whether his catalyst can be used to split seawater. In initial tests, it performs well in the presence of salt, and he is now testing it to see how it handles other compounds found in the sea. If it works, Nocera's system could address more than just the energy crisis; it could help solve the world's growing shortage of fresh water as well.

Artificial leaves and fuel-producing desalination systems might sound like grandiose promises. But to many scientists, such possibilities seem maddeningly close; chemists seeking new energy technologies have been taunted for decades by the fact that plants easily use sunlight to turn abundant materials into energy-rich molecules. "We see it going on all around us, but it's something we can't really do," says Paul Alivisatos, a professor of chemistry and materials science at the University of California, Berkeley, who is leading an effort at Lawrence Berkeley National Laboratory to imitate photosynthesis by chemical means.

But soon, using nature's own blueprint, human beings could be using the sun "to make fuels from a glass of water," as Nocera puts it. That idea has an elegance that any chemist can appreciate--and possibilities that everyone should find hopeful.

Kevin Bullis is Technology Review's Energy Editor.

How Smart Is a Smart Card?

November/December 2008

By extracting the RFID chip from a smart card, it's possible to learn much about the algorithms that control it.

By Erica Naone

Waving a smart card in front of a radio frequency identification (RFID) reader can provide access to buildings, pay for subway rides, and even initiate credit-card transactions. With more than a billion units sold, the NXP Mifare Classic RFID tag is the most commonly used smart-card chip; it can be found in the London subway system's Oyster card, Australia's SmartRider, and the Boston subway's Charlie Card. Security researcher Karsten Nohl, who recently got his PhD in computer science from the University of Virginia, and "Starbug," a member of a Berlin hacker group called the Chaos Computer Club, hacked into a Mifare Classic's hardware to gain insight into its cryptographic algorithms. After analyzing the chip, Nohl questioned its security in a series of presentations at recent conferences, including Black Hat in Las Vegas.

An Acetone Bath
Melting a smart card with acetone reveals an RFID chip within (visible in the lower right at the end of the video). The process takes about a half hour. After extracting the chip, a hacker can process it further to analyze its construction and programming.

Nohl and Starbug used acetone to peel the plastic off the card's millimeter-square chip. Once they isolated the chip, they embedded it in a block of plastic and sanded it down layer by layer to examine its construction. Nohl compares this to looking at the structure of a building floor by floor.

The chip has multiple layers that perform different functions, which the researchers had to tease apart in order to identify and understand its algorithms. Since the sanding technique didn't work perfectly, it produced a series of partial images. Nohl and Starbug borrowed techniques from panoramic photography to create a clear composite image of each layer. They identified six in all: a cover layer, three interconnection layers, a logic layer, and a transistor layer.

Images Credit: Christopher Harting; Interactive Credit: Alastair Halliday

Logic and Transistor Layers
A close look at these layers reveals about 10,000 groups of transistors, which execute the algorithms that run the chip. Nohl and Starbug's analysis revealed that each group performs one of 70 logic functions, and that the groups are repeated in different patterns.

Interconnection Layers
Several layers of metal between the protective cover layer and the logic layer provide the connections between the different logic functions. Wires running through them control the flow of current through the groups of transistors.

Logic Gates
The groups of transistors that perform the chip's logical operations are known as logic gates. By analyzing the pattern of logic gates on the chip, the researchers determined which circuits performed which functions; for example, a string of one-bit memory cells known as flip-flops pointed to the part of the chip responsible for cryptography. The researchers made a map of the logic gates and the connections between them, which allowed them to uncover the chip's cryptographic algorithm and determine that it was weak. Nohl says that RFID security could be improved by the use of stronger, peer-reviewed algorithms, along with measures to obscure or tamper-proof the circuit itself.
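The "string of one-bit memory cells" that tipped the researchers off is characteristic of a linear-feedback shift register (LFSR), the building block of the stream cipher the Mifare Classic uses. A minimal generic sketch of how a flip-flop chain generates keystream bits; the register width and taps here are illustrative, not the chip's actual feedback polynomial:

```python
def lfsr_stream(state, taps, nbits, n):
    """Generate n keystream bits from a linear-feedback shift register.

    Each clock, the tapped bits are XORed into a feedback bit that is
    shifted in at the top -- in hardware, a chain of flip-flops like the
    one visible on the die.  Taps and width below are illustrative only.
    """
    out = []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1      # XOR the tapped flip-flop outputs
        out.append(state & 1)           # emit the low bit as keystream
        state = (state >> 1) | (fb << (nbits - 1))
    return out

bits = lfsr_stream(state=0xACE1, taps=(0, 2, 3, 5), nbits=16, n=8)
```

Because an LFSR's output is a linear function of its internal state, a short stretch of observed keystream lets an attacker solve for that state--the class of weakness that makes such ciphers poor choices compared with stronger, peer-reviewed algorithms.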