Saturday, June 07, 2008

Till the End Of Time

Soundtrack: Little Miss Sunshine
Til The End Of Time (Devotchka)

They’re just words, they ain’t worth nothing
Cloud your head and push your buttons
And watch how they just disappear
When we’re far away from here

And everybody knows where this is heading
Forgive me for forgetting
Our hearts irrevocably combined
Star-crossed souls slow dancing
Retreating and advancing
Across the sky until the end of time

Oh who put all those cares inside your head
You can’t live your life on your deathbed
And it’s been such a lovely day
Let’s not let it end this way

And everybody knows where this is heading
Forgive me for forgetting
Our hearts irrevocably combined
Star-crossed souls slow dancing
Retreating and advancing
Across the sky until the end of time

Like sisters and brothers we lean on each other
Like sweethearts carved on a headstone
Oh why even bother, it’ll be here tomorrow
It’s not worth it sleeping alone

And look at you and me still here together
There is no one knows you better
And we’ve come such a long long way
Let’s put it off for one more day

And everybody knows where this is heading
Forgive me for forgetting
Our hearts irrevocably combined
Star-crossed souls slow dancing
Retreating and advancing
Across the sky until the end of time

The Winner Is

Friday, June 06, 2008

Dow Slides Nearly 400 Points; Oil Surges

Published: June 7, 2008

The markets opened lower on Friday and then just kept falling, hit by a remarkable rise in the price of crude oil and a spike in the unemployment rate.

Spencer Platt/Getty Images

Traders working in the energy options pit on the floor of the New York Mercantile Exchange on Friday in New York City.

Wall Street suffered its worst losses in more than two months.

The Dow Jones industrials plunged nearly 400 points on fears about high energy prices and a continued economic slowdown, raw nerves that have nagged investors for months.

“The market is meeting its worst fears right now,” said Quincy Krosby, chief investment strategist at The Hartford, a financial services firm.

At the close, the Dow was off 3.13 percent, or 394.64 points, to 12,209.81. The broad-based Standard & Poor’s 500-stock index fell 43.37 points, or 3 percent, to 1,360.68, and the technology-laden Nasdaq composite index declined 75.38 points, or 2.96 percent, to 2,474.56. Shares opened lower after the government reported that the unemployment rate in May had its highest monthly increase in 22 years. But the decline accelerated as investors confronted a $10.75 jump in the price of crude oil, the biggest one-day climb ever.

“Oil prices have reached the tipping point,” said Richard Sparks, an analyst at Schaeffer’s Investment Research. “Prices have rallied for a good two months but now it’s really weighing on the market.”

Wall Street has run into choppy waters over the last two weeks after a period of relative calm. Friday’s decline marked a return to the triple-digit collapses of February and March, when the market was rocked by the Bear Stearns bailout and significant interest rate cuts from the Federal Reserve.

The last time the Dow fell this far was March 19, a day after the Fed slashed rates by three-quarters of a point.

On Friday, the blue-chip index was dragged down by shares of American International Group, the big insurer, which stumbled after accusations that the company may have overstated the value of contracts tied to subprime mortgages.

Shares of A.I.G. closed down $2.48, or nearly 6 percent, to an 11-year low of $33.93.

Shares of financial firms and companies that depend on discretionary spending were the hardest hit, as investors worried that the weak labor market would raise anxiety among Americans and cast a pall over spending. Friday’s report from the Labor Department said that the economy lost jobs for the fifth straight month and that the unemployment rate surged to 5.5 percent in May from 5 percent.

Investors are also worried that high energy prices will further slow the economy.

“If oil prices stay this high, you’re going to have to reexamine your estimates for G.D.P., inflation and consumers’ ability to spend outside of non-discretionary items,” Ms. Krosby said. “This has all of the elements of an investor’s worst-case scenario.”

Oil prices surged almost 8 percent, to $138.54 a barrel after a senior Israeli politician raised the specter of an attack on Iran and the dollar fell against the euro.

“As soon as that news hit the tape, oil spiked about $6,” said David Kovacs, an investment strategist at Turner Investment Partners.

Prices were buoyed further by a report from Morgan Stanley that predicted oil would reach $150 a barrel by July 4 because of higher demand in Asia. Shares of General Motors, whose fortunes can depend on oil prices, fell more than 4 percent, to a record low.

Mr. Sparks added that the market is also taking a hit from a string of bad news earlier this week: Standard & Poor’s downgrade of Lehman Brothers, Merrill Lynch and Morgan Stanley, and the ousting of Wachovia’s chairman.

“All of this has culminated and it’s bringing the boogeyman back out of the closet,” he said.

Bond prices jumped on Friday as investors sought the safety of Treasuries in the volatile market. The dollar declined against other currencies.

Weak Dollar Drives Oil Price to a Record

In reaction to the dollar’s slide, the price of oil has reached a record high of more than $139 a barrel. An Israeli threat of an attack on Iran is also unsettling the markets.

In trading on the New York commodities exchange on Friday, the price of U.S. light crude rose at times by more than ten dollars, to $139.01. That surpassed the previous high of $135.09, reached on May 22.

Experts attributed part of the price surge to remarks by European Central Bank President Jean-Claude Trichet, who indicated on Thursday that an interest-rate increase in the euro zone could not be ruled out. That put the dollar under pressure. Experts also pointed to continued strong demand, above all from emerging economies such as China and India. According to recent forecasts, the price of oil could therefore climb to as much as $200 a barrel within the next two years. An analyst at Morgan Stanley expected a price of $150 by the U.S. Independence Day holiday on July 4.

Brent Crude at $133 a Barrel

North Sea Brent crude, the benchmark for European supply, also rose sharply on Friday, trading in London at as much as $133.19 a barrel.

“We have seen an extremely strong upward push within a very short time,” said Barbara Meyer-Bukow, spokeswoman for the Hamburg-based mineral-oil industry association; such a move, she said, is not an everyday occurrence. “Despite brief breathers, we simply cannot get away from this high price level at the moment,” added Rainer Wiek, editor-in-chief of the Energie-Informationsdienst energy newsletter. Oil, he said, has become an object of speculation.

A barrel of U.S. light crude of the WTI grade rose at times by more than $11 on Friday, to as much as $139.12. The price of oil has thus climbed by more than $16 within two days. Brent, Europe’s leading North Sea grade, cost $137.58 at the end of the week, almost eight percent more than the day before. Analysts at the U.S. bank Morgan Stanley consider a further rise to $150 a barrel possible in July, when the U.S. vacation season begins.

German motorists will therefore have to keep living with expensive fuel. Diesel and gasoline prices are only just below their records. According to industry sources, a liter of premium gasoline cost an average of 1.50 euros on Friday, while diesel ran about 1.47 euros per liter. Heating oil is also expensive, averaging 90.75 euros per 100 liters.

Mapping the Planets

Wednesday, June 04, 2008

A new LIDAR system could provide more data on distant planets.

By Tim Barribeau

Researchers at the Rochester Institute of Technology (RIT) and MIT are developing a new generation of LIDAR (light detection and ranging) technology to map planetary bodies in more detail than ever before. These maps could aid space exploration by providing more data about a planet’s geography and topography so that landing sites can be selected for future missions. The advanced LIDAR system could also be used to analyze the atmospheres of other planets for critical information about biohazards, wind speed, and temperature.

LIDAR works on a similar principle to radar, but through the use of lasers rather than radio waves. The laser is shot at an object, and the time delay between the pulse and the reflection is measured in order to accurately gauge the distance. The advantages of LIDAR over radar are twofold: LIDAR can be used to measure smaller objects, and it works on a greater variety of materials.
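The ranging principle just described reduces to a one-line calculation: halve the round-trip travel time of the pulse and multiply by the speed of light. A minimal sketch (the function name is illustrative, not NASA’s code, and real systems correct for detector latency and pulse shape):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(delay_s: float) -> float:
    """Distance to the target given the pulse's round-trip delay in seconds."""
    # The pulse travels to the target and back, so halve the path length.
    return C * delay_s / 2.0

# A reflection arriving 2 microseconds after the pulse left implies a
# target roughly 300 meters away.
print(distance_from_round_trip(2e-6))
```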

Professor Donald Figer and his team at the Rochester Imaging Detector Laboratory (RIDL), along with researchers at MIT's Lincoln Laboratory, have been awarded $547,000 in funding from NASA toward developing new light sensors. If their work is successful, the researchers could be awarded an additional $589,000 for fabrication and testing.

Testing in a vacuum: This helium-cooled vacuum device, known as a Dewar test system, will be used to determine the effectiveness of a new LIDAR system for mapping planets. LIDAR’s sensors are placed in the Dewar test system to create a carefully controlled environment in which to determine their efficacy and accuracy.
Credit: Rochester Institute of Technology, Rochester Imaging Detector Laboratory

The current LIDAR technology used by NASA has trouble distinguishing between objects with a height difference of less than one meter. With the new sensors, objects with differences down to one centimeter should be distinguishable.

The project focuses on the development of a low-power, continuous two-dimensional sensor array. Once the array is completed, the researchers hope that it will be able to capture data from a wide laser scan, in contrast to the current array, which gathers measurements using point-by-point readings. The pixel resolution of the scans is also greatly increased, from footprints kilometers across to a few feet across. A prototype currently exists at Lincoln Laboratory. RIDL will soon begin evaluating the device while concurrently improving the design.

Right now, NASA is working on a different method of improving LIDAR for the Lunar Orbiter Laser Altimeter. LOLA is designed for the Lunar Reconnaissance Orbiter, which is scheduled for launch no earlier than November 2008. LOLA will provide a detailed topographic map of the moon's surface to aid surface mobility and exploration on future lunar missions. Unlike the system Figer's group is building, LOLA improves resolution by having five lasers and five receivers working simultaneously. Figer's system uses one laser, but a beam expander will separate the beam, sending it off at a number of angles. Once the constituent beams are reflected off the objects being measured, the beams are recombined and then analyzed with the new sensors.

The reason for the increase in resolution is not an improvement in the laser itself, but a function of the increased scanning speed. Previously, LIDAR would only be able to scan point by point, so the amount of time required to generate a higher-resolution map was often prohibitive. With the new LIDAR's ability to split the laser beam and scan large areas of landscape at once, this time period is significantly reduced. "It would be impossible to take the single pixel maps to one foot and cover the planet," says Figer. "But if you have an imager, now things become more possible."

The improvement in measuring depth is attributable to a new generation of high-speed circuitry that is able to differentiate two signals arriving only 100 picoseconds apart, which equates to a centimeter in height.
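The arithmetic behind that figure is easy to check: in 100 picoseconds light travels about 3 centimeters, and halving that for the round trip gives roughly 1.5 centimeters, i.e. centimeter-scale height resolution. A quick sketch (names are illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def height_resolution(timing_resolution_s):
    # The pulse covers any height difference twice (out and back), so the
    # resolvable height is half the path length the timing gap implies.
    return C * timing_resolution_s / 2.0

res_m = height_resolution(100e-12)
print(f"{res_m * 100:.2f} cm")  # centimeter-scale, as the article states
```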

Figer's faster system might also be better at mapping objects in motion. Due to the slower speed of the current technology, moving objects can appear multiple times in multiple scans, which makes it difficult to accurately reproduce a single point in time.

While the system is primarily designed for extraplanetary missions, Figer believes that it could be used in other ways. "Imagine," he says, "that you have this 3-D, 180-degree fish-eye system . . . in every city scanning continuously for biohazards."

A New Superconductor

Friday, June 06, 2008

Researchers investigate why iron arsenide materials become superconducting at relatively high temperatures.

By Prachi Patel-Predd

A new class of high-temperature superconductors, discovered earlier this year, behaves very differently from previously known copper-oxygen superconductors. The new materials seem instead to follow a superconductivity mechanism previously found only in materials that superconduct at very low temperatures, Chia-Ling Chien and his colleagues at Johns Hopkins University report in an online Nature paper.

The insight is an important step toward understanding how superconductors work, and it could help researchers design even better materials. High-temperature superconductors could lead to cheaper MRI machines; smaller, lighter power cables; and far more energy-efficient and secure power grids. Utilities, for example, could use superconducting magnets to store energy at night, and then use it at peak demand hours in the mornings and evenings.

Superconducting materials conduct electric current without any losses when they are chilled below a certain temperature, called the critical temperature. Niobium alloys, used to make superconducting magnets for MRI machines, are superconducting only below 10 K. Copper-oxygen compounds, or cuprates, which were discovered in the late 1980s, are superconducting at much higher temperatures of 90 to 138 K. At these temperatures, cheap, easy-to-use liquid nitrogen can be employed as a refrigerant. (Cuprates are not used for MRI magnets because it is difficult and expensive to make wires from them.) And some manufacturers are making nitrogen-cooled superconducting cables for transmission lines.

No resistance: New superconductors contain alternating layers of iron arsenide (orange and red) and rare earth metal oxides (blue and gray) doped with fluorine (green). Iron arsenide compounds become superconducting at relatively high temperatures of 55 K, and researchers are now beginning to decipher their superconducting mechanism.
Credit: Hideo Hosono, Tokyo Institute of Technology

But researchers have long tried to find materials with even higher critical temperatures. "The holy grail is operating [superconductors] at room temperature," says physicist Jeffrey Lynn, who studies superconductors at the National Institute of Standards and Technology. Superconducting power cables, MRI machines, and energy storage devices would be cheaper and smaller if they did not need cooling.

The new iron arsenide superconductors have shown potential for achieving high critical temperatures. Scientists at the Tokyo Institute of Technology first reported in a February paper in Journal of the American Chemical Society that a lanthanum iron arsenide material becomes superconducting at 26 K. Since then, Chinese researchers have pushed the critical temperature up to 55 K. That is not nearly as high as the superconducting temperatures for cuprates, but Johns Hopkins's Chien says that "this is a new material to explore, and one hopes we will get even higher temperatures."

The new material's chemical structure makes it particularly exciting. It contains oxides of rare earth metals sandwiched between layers of iron arsenide. The structure allows for a lot of tinkering that tweaks the material's properties, Lynn says. Researchers can, for instance, replace the iron, arsenic, or rare earth metals with other elements. In fact, Chinese researchers replaced the lanthanum in the original Japanese material with other rare earth metals, such as samarium, to raise the critical temperature above 50 K. "There are a lot of different types of chemical substitutions that you can try," Lynn says. "They're actually more flexible than cuprates."

The new superconductors could also have another crucial advantage, says David Christen, who leads superconductor research at Oak Ridge National Laboratory. While cuprate power cables have to be fabricated as specially designed flat tapes, it might be easier to make wires from iron arsenide superconductors. "These materials could be more practical than cuprates if it turns out that they're easier and less expensive to make," Christen says.

Researchers are also hoping that iron arsenides will help unlock the mystery of how high-temperature superconductors work. That will be key for designing materials with even higher critical temperatures. In superconductors that work at very low temperatures, such as niobium and lead, electrons form pairs below the critical temperature. Atoms or defects in the crystal do not have the energy needed to break the pair and deflect the electrons. So the electron pair zips around the material unimpeded, giving rise to superconductivity. But this pairing theory does not hold for high-temperature copper-oxygen materials.

In their Nature paper, Chien and his colleagues show evidence suggesting that the pairing theory might hold for the iron arsenide superconductors. "The pairing of electrons is the soul of the superconductor," Chien says. "If the new materials follow the [pairing] theory, then . . . we will be able to understand the materials a little bit easier."

More evidence from experiments done with many different iron arsenide compounds will be needed to confirm how the superconductors work, says Pengcheng Dai, a physics professor at the University of Tennessee, in Knoxville. The Johns Hopkins work is "just one piece of the puzzle," he says. Indeed, while the pairing mechanism of iron arsenides might be different than that of copper-oxygen compounds, the two materials also have similarities. In a recent online paper, also published in Nature, Dai and Lynn showed that the two materials share key magnetic properties. And both materials also have a similar layered structure.

It might be too early to say just how useful the iron arsenide superconductors will be. For now, Dai says that researchers are excited about having broken the 22-year monopoly of cuprates and about having a new high-temperature superconductor to play with.

IBM developing miniature pipes of water for chip cooling

Thursday, June 05, 2008

By Associated Press

Since a computer microprocessor is veined with electric circuitry, it might seem like a bad place to put water. But IBM Corp. researchers believe that sloshing water through hair-thin pipes inside chips will solve a vexing problem facing next-generation computers.

That problem is heat.

As chips get smaller and smaller, cramming more processing power into ever-tinier spaces, the heat thrown off by the miniature circuits becomes harder to manage. Cooling measures used now to avoid chip meltdowns, including "heat sinks" made from heat-absorbing materials, might not work on tinier scales.

In fact, in a future microprocessor design IBM is exploring -- in which chips are stacked vertically to save space and enhance performance, rather than arrayed next to each other -- the heat-to-volume ratio exceeds that of a nuclear reactor.

To address that, IBM researchers say they could pipe water in between chips that are sandwiched together. The system, which IBM planned to explain Thursday at a technical conference, uses pipes that are just 50 microns wide -- 50 millionths of a meter. The tiny tubes are sealed to prevent leaks and electrical shorts.

Even these micro amounts of water can handle prodigious cooling chores, because water is much more efficient than air at absorbing heat. That is why some high-end computers long have used water cooling. The new trick here is that IBM expects to do it at the miniature scale, inside chips.

"It's never been applied this close to the heart of the matter," said analyst Richard Doherty of the Envisioneering Group.

Yogendra Joshi, an engineering professor at the Georgia Institute of Technology, said aspects of IBM's approach already have been shown by other researchers. But he said the company deserves credit for trying to push the idea toward commercialization.

"There has been a great aversion to piping liquids through electronics," Joshi said. "That's understandable."

However, IBM's tiny pipes aren't out of the lab yet. They're at least five years from becoming available.

Thursday, June 05, 2008

A Tiny Sensor Simply Made

A nanoscale biosensor that can detect minute amounts of pathogens could come to market this year.

Researchers at NASA Ames Research Center have developed a nanotechnology-based biosensor that can detect trace amounts of as many as 25 different microorganisms simultaneously and within minutes. The researchers make the biosensors by growing carbon nanofibers--a material with the same properties as carbon nanotubes but with a slightly larger diameter--using a process similar to the one employed to fabricate computer chips.

"By using the same reactor technology the semiconductor industry uses, we have created an innovative approach to manufacturing tiny sensors," says Meyya Meyyappan, the chief scientist for the biosensor project.

While NASA plans to eventually use the sensor to detect the presence of life on other planets, it has licensed the technology to Early Warning, a company based in Troy, NY, that develops systems to monitor biohazards. The company's president, Neil Gordon, says that the first application for the sensor will be for water-quality monitoring, and a prototype of the technology will be tested at a series of demonstration sites this summer. Early Warning plans to have a commercial product by the end of the year.

Detecting biohazards: A prototype of NASA’s nanotechnology-based biosensor has been licensed to Early Warning. The company is incorporating the technology into a device that will test for common and rare strains of microorganisms associated with waterborne illnesses in municipal water systems.
Credit: NASA

It has been known for almost a decade that carbon nanotubes and nanowires make good sensors, says Mark Reed, a professor of electrical engineering and applied physics at Yale University. But, says Reed, only in the past couple of years have research groups started to explore an integrated approach--electronics and biology--to build biosensing devices. Reed's group is among those using an integrated approach to build nanoscale sensors based on carbon nanowires. Harvard researchers led by chemist Charles Lieber were the first to show virus detection and the detection of early signs of cancer through semiconducting nanowires. Other academic groups, such as those at California Institute of Technology, the University of Southern California, and Boston College, are doing similar work.

Such biosensors, which are based on the detection of electrical signals, offer several advantages over more conventional optical technologies. For one thing, the electronic and electrochemical approaches do not require the use of fluorescent chemical tags, says Charlie Johnson, a professor in the department of physics and astronomy at the University of Pennsylvania. Electrical signals are also easier to measure than optical ones, he says.

NASA, however, is one of the few groups using carbon nanofibers to make biosensors. Carbon nanofibers are easier to work with than nanotubes are, and they can be grown on a silicon substrate in the exact structure that researchers desire, says Meyyappan.

Indeed, the real challenge to making electronic-based biosensors into products is not which material performs the best, but how they will be mass-produced, says Reed. "I am impressed by NASA's work, and they have very nice results," he says.

To make their sensors, NASA researchers start by coating a silicon wafer with a metal film like titanium or chromium. Next, the researchers deposit a catalyst of iron and nickel on top of the metal film, patterning the catalyst using conventional lithography. This allows the researchers to determine the location of the nanofibers, which will act as nanoelectrodes. A chemical vapor deposition process is used to grow the nanofibers on the catalyst.

"The proper construction and orientation of the nanoelectrode is critical for its electrochemical properties," says Meyyappan. "We want to grow the nanofibers in an array like telephone poles on the side of a highway--nicely aligned and vertical."

The researchers then place silicon dioxide in between the nanofibers so that they do not flap when they come in contact with fluids, like water and blood; this also isolates each nanoelectrode so that there is no cross talk. Excess silicon dioxide and part of the nanofibers are removed using a chemical mechanical polishing process so that only the tips of the carbon nanofibers are sticking out. The researchers can then attach a probe or molecule designed to bind the targeted biomolecule to the end of the nanofiber. The binding of the target to the probe generates an electrical signal.

The sensor is also equipped with conventional microfluidic technology--a series of pipes and valves--that will channel small drops of water over to specific probes on the biosensor side. This allows the researchers to do field testing and avoid the expense of taking the biosensor to the lab, says Meyyappan.

After the sensor is tested in its facilities this summer, Early Warning plans to place the device within an already existing wireless network to monitor the water quality of municipal systems. "The sensor gives us the advantage of having a lab-on-a-chip technology that can test for many different microorganisms in parallel," says Gordon. "And instead of waiting 48 hours for results, we get notified within 30 minutes if the water is contaminated," he says.

Such sensors could also be used in homeland security to detect pathogens such as anthrax, to detect viruses in air or food, and for medical diagnostics, says Meyyappan.

Wednesday, June 04, 2008

How to Prepare a Purchase Offer for a Business

By eHow Business Editor

If you intend to make a serious offer to buy an existing business, it is always best to lay out all the details in a letter to the seller.

Things You’ll Need:

  • Attorney Referral Services
  • Business Services
  • Bonded Paper
  • Envelopes
  1. State what you intend to buy: inventory, business name, equipment.
  2. Detail the amount you will pay and any payment terms.
  3. Include a noncompete clause, which forbids the seller from opening a competing business in your locale within a given time frame.
  4. Clearly spell out what information you still need in order to complete the transaction. Do you want to review the business's tax returns? Do you want proof that certain licenses and permits will be transferable? Do you need a copy of the property's lease agreement?
  5. Be sure to mention that this is a good-faith offer that will be consummated within a given time period should both buying and selling parties agree.
  6. Note at the end of the letter that this is not a binding agreement and awaits further review.

Tips & Warnings

  • Give the seller a modest deposit along with the purchase offer to show that you are serious about buying the business.
  • Have your lawyer go through your offer with a fine-tooth comb before you finalize the deal.

Monday, June 02, 2008

A Low-Cost Multitouch Screen

Thursday, May 29, 2008

A new system from Microsoft turns virtually any surface into a multitouch display.

By Kate Greene

The multitouch screen is certainly having its day in the sun. Apple's iPhone and iPod and Microsoft's touch-screen table, called Surface, all illustrate the concept in slick ways. And at a recent conference, Bill Gates and Steve Ballmer showed off Windows 7, a forthcoming operating system that supports multitouch. But the capabilities of today's multitouch software are still somewhat limited, and researchers and engineers aren't yet sure how best to exploit large displays. Recently, however, Microsoft introduced a new multitouch platform, called LaserTouch, which includes hardware that's cheap enough to retrofit any display into a touch screen. The software giant believes that inexpensive multitouch hardware will make researchers more inclined to experiment with different form factors and develop interesting software.

LaserTouch is a system built on the cheap: the hardware only costs a couple hundred dollars, excluding the display--which can be a plasma television or overhead projector, for instance--and the computer that runs the software. Unlike Surface, which uses a camera within the table to detect touch and a rear-projection system to create the images, LaserTouch uses a camera that's mounted on top of the display. Two infrared lasers, with beams spread wide, are affixed at the corners, essentially creating sheets of invisible light. When a person's finger touches the screen, it breaks the plane of light--an action that's detected by the camera above.
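Microsoft has not published LaserTouch's detection algorithm, but the idea the paragraph describes can be sketched: a fingertip breaking the infrared sheet scatters light, which appears to the overhead camera as a bright blob, and the blob's centroid approximates the touch position. A hypothetical minimal version over a 2-D brightness grid (all names are illustrative; a real system would also undistort the image and calibrate camera-to-screen coordinates):

```python
def touch_points(frame, threshold=0.8):
    """Return (row, col) centroids of bright blobs in a 2-D brightness grid."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    points = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob of bright pixels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # The blob's centroid approximates the fingertip position.
                ys, xs = zip(*blob)
                points.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return points

frame = [[0.0] * 8 for _ in range(8)]
for r in (2, 3):
    for c in (5, 6):
        frame[r][c] = 1.0          # one simulated fingertip
print(touch_points(frame))         # → [(2.5, 5.5)]
```

Multiple simultaneous touches simply produce multiple blobs, which is what makes the approach naturally multitouch.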

One of the main differences between Surface and LaserTouch, says Andy Wilson, one of Surface's developers, is that you can use LaserTouch on high-resolution displays. These displays lend themselves nicely to graphics applications, such as photo and video editing. And since LaserTouch can be fitted to any type of display, Wilson adds, it could be used for office applications such as presentations.

While multitouch interfaces have gotten a significant amount of attention recently, the technology is certainly not new. Researchers have been playing around with touch screens in labs for decades. But only when the iPhone illustrated a practical use for the technology did excitement build, says Scott Hudson, a professor of computer science at Carnegie Mellon University. "The iPhone has given [multitouch] a whole lot of visibility at present," he says. "I think it's reached the level of general public interest so that a lot of manufacturers are thinking that it has potential."

Cheap trick: Andy Wilson, a Microsoft researcher responsible for developing Surface, shows off LaserTouch, a low-cost multitouch system that can transform any display into a touch screen.
Credit: Microsoft

To be sure, Microsoft isn't the only company building large touch displays. Mitsubishi Electric Research Laboratories has built DiamondTouch, a touch table for business collaborations. Perceptive Pixel, a startup based in New York that was founded by Jeff Han, a research scientist at New York University, is currently selling giant, wall-sized touch screens that support multiple inputs. And kits that allow a person to assemble her own open-source touch-screen tables are currently available to the general public.

At a Microsoft Research demonstration last week, Wilson showed off some of the software that the company is trying to develop. Some of the more whimsical applications included a chess game that could be played with a virtual partner, and an application that lets people virtually "pick up" objects on the screen. (When a person makes a scooping gesture with his hand, a virtual hand appears on the screen, holding the object that was scooped.)

Wilson also demonstrated new presentation software designed for the touch screen that allowed him to easily flick through slides, resize objects, and navigate through components of his presentation. Presentations call out for touch-screen interaction, he says, because swooping gestures on a large screen provide theatrics that can make a talk engaging.

Microsoft has no plans to commercialize LaserTouch but still hopes that the approach can help spread the development of multitouch applications within the research community.

"It definitely helps everyone to make the hardware as cheap as possible, especially for larger form factors," says Han. However, he notes that the fidelity of the system needs to be maintained in order to make these new applications practical, and LaserTouch has the potential for errors. "It's quite easy for fingers and hands to block the sensing mechanism," Han says. "There's a real danger of vendors out there rushing to land-grab part of this hot multitouch space with substandard solutions which fall short of the potential of these interfaces."

The Future of Mobile Social Networking

Monday, June 02, 2008

iPhone users will soon be able to enjoy Whrrl, software that combines activity recommendations with real-time location data.

By Kate Greene

When Steve Jobs strides onstage at Apple's annual developers conference on June 9, many will be expecting fireworks. Some industry analysts think Jobs will announce an iPhone upgrade, one that takes advantage of faster networks and includes new hardware, perhaps a GPS receiver. Jobs is also expected to demonstrate some third-party iPhone applications, available in June, which could include games that use the phone's accelerometer as a control, new mapping software, and quick ways to update profiles on social networks such as Facebook or MySpace.

One rising company that's hoping for a mention during the Steve Jobs Show is Pelago, a startup that recently garnered $15 million from funders, including Kleiner Perkins Caufield & Byers. Pelago will soon offer a version of its software, called Whrrl, for the iPhone. The software enables something Pelago's chief technology officer, Darren Erik Vengroff, calls social discovery: using the iPhone's map and self-location features, as well as information about the prior activities of the user's friends, Whrrl proposes new places to explore or activities to try.

"If you think about your day-to-day life and how you discover things around you and places to go, to a great extent the source of that information is your friends," Vengroff says. With Whrrl, a user can "look through the eyes of friends and see the places they find compelling." The software begins with the user's position on the iPhone's map and indicates a smattering of nearby establishments. If the user's friends have visited and rated these places, the software indicates that as well. The map also shows the positions of nearby friends who have enabled a feature that lets them be seen by others.

Whrrl may turn out to be the leading edge of a wave of new location-based applications. "I think we're going to see a lot of new players showing up in this space," says Kurt Partridge, a research scientist at the Palo Alto Research Center who works on a similar project called Magitti. "Part of the reason," he says, "is the universal availability of GPS or access to location, which hasn't been available to application writers before." The iPhone and Nokia's N95 phone are two examples of phones that provide location data to computer programmers. Google's forthcoming Android mobile operating system may also help push location-based applications onto the market.

The idea of community-generated reviews is, of course, not new. The popular recommendation service Yelp, for example, is already integrated into Google Maps. And the concept of locating friends using a mobile phone has also been around for years; Loopt, a service that runs on Sprint and Boost Mobile phones, is one of the best-known examples. Whrrl, which can also be downloaded onto BlackBerry Pearl, Curve, and Nokia N95 smart phones, is often compared to both types of service, but it differs from either in combining aspects of both. In addition, Vengroff explains, Whrrl has collected details on establishments in 17 cities, which allows the service to provide fine-tuned local search, letting the user narrow down the hunt for, say, a café to one that has outdoor seating and vegetarian options and is recommended by at least one friend.
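The kind of fine-tuned, friend-aware search Vengroff describes amounts to filtering venues by attributes plus social signals. The sketch below illustrates the idea only; Whrrl's actual data model is not public, and every field name and venue here is invented.

```python
# Hypothetical sketch of friend-aware local search, in the spirit of
# the filtering Whrrl is described as offering. All names and data
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    category: str
    outdoor_seating: bool = False
    vegetarian_options: bool = False
    friend_recommendations: int = 0  # friends who rated this place well

def find_cafes(venues):
    """Cafés with outdoor seating, vegetarian options, and at least
    one recommendation from a friend."""
    return [
        v.name for v in venues
        if v.category == "cafe"
        and v.outdoor_seating
        and v.vegetarian_options
        and v.friend_recommendations >= 1
    ]

venues = [
    Venue("Bean There", "cafe", outdoor_seating=True,
          vegetarian_options=True, friend_recommendations=2),
    Venue("Grind House", "cafe", outdoor_seating=False,
          vegetarian_options=True, friend_recommendations=3),
    Venue("Steak Shack", "restaurant", friend_recommendations=5),
]

print(find_cafes(venues))  # ['Bean There']
```

The filter is an AND over attributes; a real service would presumably rank rather than hard-filter, but the principle is the same.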

Places to go: In June, the startup Pelago is expected to release a version of Whrrl, its mobile social-mapping software, for the iPhone. Whrrl helps people locate friends or find things to do nearby, and it incorporates recommendations made by others in the user’s social network. The image above is an artist's interpretation of what Whrrl might look like on an iPhone.
Credit: Technology Review

While the possibilities presented by Whrrl are exciting to many, its mass appeal has yet to be established. First, the location data might not be fine-grained enough to be useful in all cases, so it could lead to false positives. The iPhone relies on data from Skyhook Wireless, a company that uses an enormous database of the locations of Wi-Fi base stations to locate a person within about 30 meters; GPS, however, could do much better. Also, Whrrl is most useful when members of the user's social network actively contribute reviews. This requires that the user's friends have smart phones--and the motivation to critique the places they go.
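Skyhook's actual positioning algorithm is proprietary, but the textbook version of Wi-Fi positioning is simple: average the known coordinates of the base stations a device can hear, weighting stronger signals more heavily. The sketch below shows only that generic idea, with made-up coordinates.

```python
# Illustrative Wi-Fi positioning by weighted centroid: a device's
# estimated position is the signal-strength-weighted average of the
# known locations of visible access points. This is the generic
# textbook approach, not Skyhook's proprietary algorithm; the
# coordinates and weights are invented.

def weighted_centroid(observations):
    """observations: list of ((lat, lon), weight) for visible APs,
    where weight grows with received signal strength."""
    total = sum(w for _, w in observations)
    lat = sum(p[0] * w for p, w in observations) / total
    lon = sum(p[1] * w for p, w in observations) / total
    return lat, lon

# Three access points with known positions from a survey database;
# the first is heard much more strongly than the other two.
aps = [
    ((47.6205, -122.3493), 3.0),
    ((47.6210, -122.3490), 1.0),
    ((47.6200, -122.3500), 1.0),
]
lat, lon = weighted_centroid(aps)
```

Accuracy depends on how densely the surrounding base stations have been surveyed, which is one reason such estimates land in the tens of meters rather than the few meters GPS can achieve.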

Still, the biggest obstacle faced by services like Whrrl may be privacy. Vengroff points out that users control whom the program lists as their friends, who can read their reviews, and who can see their physical locations. The software also offers a "cloaking" feature that lets a person become completely invisible to his or her entire Whrrl network.

"Generally, if you give people more control, they're more willing to participate," says Tanzeem Choudhury, a professor of computer science at Dartmouth College. However, some people are still concerned about how long the company will store information about its customers' locations. Choudhury says that these first-generation services will likely be used by small groups of early adopters who are more aware than most of potential privacy risks and will push companies to confront them.

Regardless, Choudhury and others are excited about the potential of services such as Whrrl. In the future, she suspects, location-based services will include more predictive features. For instance, instead of explicitly requiring you to write a review, the software might recognize how often you visit a restaurant and infer that it is a favorite. "Eventually, I think that a whole lot of exciting technology will emerge that figures out how to reduce the burden on the user," Choudhury says. "There will always be the case where user input will be important, but when we find the sweet spot, that's when I think it will take off."

Phoenix Landing

Mars spacecraft successfully reaches red planet's arctic.
Tuesday, May 27, 2008
By Kristina Grifantini

The Phoenix Mars Mission, which launched last August, completed its roughly 423-million-mile journey on Sunday night with a successful landing in Mars's northern polar region. After the smooth landing, the 18-foot-wide lander relayed back some 100 images of its surroundings, as well as of itself and its shadow.

Many spacecraft have been sent to Mars, but most have failed; the last attempt to land in a Martian polar region came in 1999, when the Mars Polar Lander lost communication as it descended to the planet.

Phoenix will look for signs of past habitability by trying to characterize the history of water on Mars and by looking for biological components in the soil. The spacecraft, powered by solar arrays, will use a nearly eight-foot-long arm to dig about a foot and a half beneath the surface, where scientists expect (based on data from the Mars Odyssey orbiter) vast sheets of ice to lie. An impressive array of cameras, sensors, and microscopes will monitor the environment and analyze the icy soil samples. Phoenix has roughly three months to complete its mission, before Martian winter sets in, freezing the spacecraft in solid carbon dioxide.

Even the Poorest Can Be a Thriving Market

by Jean-Louis Warnholz

The idea that global companies can do good and do well at the “bottom of the pyramid”—that is, among the poor populations of developing countries—has generated excitement among corporations, governments, and NGOs in recent years. But most of the resulting initiatives by multinationals have missed the very poor, the 2 billion people in places like Haiti and Bangladesh who live on less than two dollars a day and have been virtually ignored by the corporate world and cut off from the global marketplace. The multinationals seem not to have noticed the examples of Telenor and Digicel, innovative mobile phone companies that have found opportunities to earn profits and simultaneously improve local economic landscapes by serving the very poor.

Telenor was drawn to Bangladesh and Pakistan, and Digicel to Haiti, by low-wage workforces and the potential for creating local consumer markets, despite endemic poverty. Both companies refused to accept the low-purchasing-power status quo and have been systematically building up local consumer markets. They are now boosting economic growth by generating jobs, tax revenue, and investment.

Their success should come as no surprise. Indeed, the argument that companies can improve poor economies while making profits by selling consumer goods was put forth by C.K. Prahalad and Allen Hammond in their article “Serving the World’s Poor, Profitably” (HBR September 2002) and by Prahalad in The Fortune at the Bottom of the Pyramid (Wharton School Publishing, 2004). Since the publication of the article and the book, multinationals have begun “creating the capacity to consume” among the poor. In India, for example, Procter & Gamble sells single-use sachets of detergent and shampoo that are affordable for the poor. But the vast majority of such efforts have been aimed at urban consumers in India, China, and South America who are just below the middle class. Multinationals can learn a great deal from Telenor’s and Digicel’s creative ways of generating purchasing power among consumers with much lower income.

Telenor’s joint venture with Grameen Telecom in Bangladesh has several programs aimed at doing this, including one that allows people without bank accounts to pay utility and other bills via mobile phone. In Pakistan, Telenor offers would-be entrepreneurs in impoverished remote areas its “business in a box” solution: a subsidized phone plus training. And Digicel allows phone cards to be recharged from abroad, enabling people in the Haitian diaspora to help relatives pay for their phones, many of which are used in local businesses.

Both Norway-based Telenor and Jamaica-based Digicel have fared well: Telenor’s Grameenphone joint venture, which has been doing business in Bangladesh since 1997, became profitable in 2000 and is now the country’s largest telecom firm. Telenor Pakistan, a more recent initiative, increased its revenues 265% in 2007 and saw a nearly 200% jump in its customer base, to 15 million. Digicel doesn’t break down its profits by country, but Haiti represents the company’s largest market, and the corporation’s profits doubled to roughly $450 million for the year ending March 2008. The phenomenal growth in all three markets suggests significant improvements in local purchasing power. In Bangladesh, another indicator of increased purchasing power is a recent decline in the profits of the 280,000 “phone ladies,” who offer access to Telenor’s services, as more and more people in remote villages can now afford their own phones.

Perceived high risk and a difficult business environment have prevented capital from flowing into the world’s poorest countries, which also include many of the nations of sub-Saharan Africa. But several studies and my own experience with officials of poor countries indicate that once investment comes in, the business environment can improve quickly. Haiti, for example, has established a new Investment Facilitation Center, a one-stop window for traders and investors. And there are factors that offset the risks, such as low labor costs, abundant resources, and highly preferential trade agreements with developed countries. Opportunities are waiting for companies that—like Telenor and Digicel—strive to do good and do well in every country, no matter how poor.

The Global Food Crisis: Facts and Opportunities

There can be little doubt that we are faced with an unprecedented food crisis. The media have covered it extensively, and all major leaders have expressed an opinion. A common and perplexing theme running through this coverage is the premise that increased prosperity and consumption in countries like India are a major cause of the catastrophe. Rhetoric has replaced reality; style is scoring over substance. What are the facts?

· The Food and Agriculture Organization (FAO) has reported that cereal intake in India in 2007-08 was 197.8 million tons, compared with 193.1 million tons the previous year, an increase of just 2.4%. In contrast, the US accounted for 310.4 million tons of consumption in 2007-08 against 277.6 million tons the previous year, an increase of 11.8%. The world average itself increased by a modest 2.06%. Perhaps more importantly, while consumption in Asia and Africa rose slightly, production rose correspondingly. In the US, consumption increased by 11.8% while production in fact declined.

· Equally revealing are figures from the US Department of Agriculture. Per capita consumption of grain, milk, and vegetable oils in the US has been reported at 2,300, 172, and 90 pounds respectively; the figures for India are 392, 79, and 24 pounds. The statistics for meat products provide an even more striking contrast: the US accounts for per capita consumption of 94 pounds of beef, 100 pounds of poultry, and 65 pounds of pork, while the corresponding figures for India are 3.5 pounds, 4.2 pounds, and a negligible amount. Even after considering the differences in population, the figures are quite staggering.
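The year-over-year growth rates can be checked directly from the quoted FAO tonnages; a quick sketch:

```python
# Sanity-checking the growth figures implied by the FAO cereal
# tonnages quoted above (millions of tons, 2007-08 vs. 2006-07).

def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

india = pct_change(197.8, 193.1)  # roughly 2.4%
us = pct_change(310.4, 277.6)     # roughly 11.8%
```

The India figure works out to about 2.4% and the US figure to about 11.8%, consistent with the contrast drawn above.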

Given these facts (one set from a UN agency, the other from a department of the US government), it is rather difficult to accept the notion that increased prosperity and consumption in India have had a significant impact on global food prices.

Amid all the points and counterpoints, what is lost sight of is the plight of the billion people who live on $1 a day and the equal number who subsist on $2 a day. They are the most vulnerable, and yet no one seems to care about them; after all, they don't vote governments to power. Already the first signs of discontent, strife, and riots are visible. What next? Thanks to lopsided measures and knee-jerk reactions by several countries, the price of rice has spiked 120% in 2008 alone.

Any crisis also represents an opportunity, and the present food crisis is no exception. While there will inevitably be some pain in the short term, prudent measures taken now can prevent a repetition of this scenario in the future. A few suggestions are given below:

· We need to protect available farmland and invest in improving agricultural productivity. Productivity increases are close to zero in many cases, and in many countries the land holdings of small farmers have fallen below one hectare, making cooperative farming an urgent need. The green revolution needs to be turned into an evergreen revolution.

· While no one can dispute the need to develop alternatives to fossil fuels, the rapid conversion of farmland to biofuel crops needs to be approached with caution.

· Protecting the most vulnerable (nearly a third of the world’s population) must assume the highest priority. A safety net in the form of buffer stocks that can be distributed at affordable prices seems to be the only way out. The World Food Program (WFP) must be funded based on GDP or per capita income parameters.

· Governments would do well to stop meddling in food markets. Interventions in any form – subsidies or controls – tend to damage the entire food supply chain on a global scale.

· Finally, as humans inhabiting this fragile planet, we need to work together. Given the collective will of humanity, no problem is insurmountable. Cooperation is the key – not blaming each other.

New Microsoft operating system to have touch-screen feature

Wednesday, May 28, 2008

By Associated Press

Microsoft Corp. said Tuesday that its next operating system will be made for touch-screen applications, an alternative to the computer mouse.

Microsoft Chief Executive Officer Steve Ballmer unveiled the iPhone-like touch-screen feature at The Wall Street Journal's ''D: All Things Digital'' conference, calling it ''just the smallest snippet'' of the Windows 7 operating system slated for release in late 2009.

A Microsoft employee showed possible applications like enlarging and shrinking photos and navigating a map of San Diego by stroking the screen.

Microsoft Chairman Bill Gates framed the new feature as an evolution away from the mouse.

''Today almost all the interaction is keyboard-mouse,'' Gates said. ''Over years to come, the role of speech, vision, ink -- all of those -- will be huge.''

The software company's top two executives defended its last operating system, Vista, while acknowledging missteps. Gates said he has never been 100 percent satisfied with any Microsoft product, and that the company prides itself on fixing shortcomings in later versions.

''Vista has given more opportunity to exercise our culture than some products,'' he deadpanned.

The former Harvard University classmates fielded a range of questions for more than an hour, sharing the stage as Gates prepares to relinquish daily responsibilities at the company in July to focus more on philanthropic work.

Ballmer said Microsoft remained in discussions to team up with Yahoo Inc. after Microsoft's $47.5 billion (€30.14 billion) bid for the company was spurned earlier this month. He said Microsoft was not planning to buy Yahoo but offered only the barest details of what he has in mind.

''We are not rebidding for the company. We reserve the right to do so. That's not on the docket,'' he said.

Microsoft said May 18 that it revived talks with Yahoo, without providing specifics. Ballmer declined to say much more, even when pressed.

''All I'll say is we're in ongoing discussions with them around a partnership,'' he said.

Gates let Ballmer take the questions about Yahoo. When asked for his thoughts, Gates said, ''I've been supportive of everything Steve has done. ... Totally supportive.''

Ballmer, responding to an audience question, denied that the bid tarnished Microsoft's reputation.

''If anything, I think people know we're very serious about our online business,'' he replied.

Microsoft has divulged little about its Windows 7 operating system -- even after introducing the touch-screen feature Tuesday -- a contrast to the much-hyped release of Vista.

Chris Flores, a director on Microsoft's Windows client communications team, said in a posting on a company blog Tuesday that the more circumspect tack was deliberate and intended to avoid announcing plans that may change.

''With Windows 7, we're trying to more carefully plan how we share information with our customers and partners,'' he wrote.

The executives regaled the audience with tales of how they met and Microsoft's early days.

Ballmer, who was best man at Gates' wedding, remembered Gates at Harvard as quiet and shy but with ''a certain kind of spark, particularly later in the day.''

Gates remembered Ballmer for his energy, a reputation that persists today.

''Steve was signed up for more things than anybody else. He was very, very busy,'' Gates said.

Ballmer said he had to plead to grow Microsoft's payroll from 30 employees and that he had to assume the duties of the company bookkeeper, who left on Ballmer's first day. Gates was rightfully worried about bankruptcy.

When Ballmer began to question why he left business school at Stanford, Gates laid out his vision of a computer at every desk. Ballmer stayed put, leading to a 28-year partnership at the company helm.

''I was forced to be particularly articulate that night,'' Gates recalled.

Ballmer, known as a marketing guru, said he has been Gates' ''junior partner'' for the last eight years, when Gates left the CEO job. He said he has never been uncomfortable with Gates' much bigger fame, though he admitted struggling to adapt to his new relationship with Gates during his first year as CEO.

''I was not sure how much rope to give,'' he said.

Ballmer said he does not anticipate similar transition struggles when Gates steps down from daily responsibilities.