Thursday, September 18, 2008

Making Money from Social Ties

Advertisers are building complex applications to try to engage users on social networks.

By Erica Naone


Social networks might be popular, but the industry is struggling to find a way to turn all those users into a big payday. This conundrum is the focus of industry executives gathered at the Social Ad Summit in New York this week, and while no company has yet found the perfect solution, advertising campaigns that make far better use of social-network functionality are starting to offer hope of richer returns.

According to Michael Lazerow, founder of social-advertising company Buddy Media, 37 percent of adult Internet users (and 70 percent of teens) in the United States use social-networking sites regularly. Meanwhile, less than 1 percent of all digital-advertising budgets currently flows to social-media sites.

Share of advertising investment might be slim today, but many executives are optimistic about the future. "It seems a little like the search industry was in the mid to late '90s," says Martin Green, chief operating officer of instant-messaging company Meebo. During this period--before Google developed its lucrative search advertising model--many search engines found it hard to make money. But search engines are, of course, very different from social-networking sites. Someone using Google may very well be in the right mood to buy something, but people visit social sites to spend time with their friends. Mike Trigg, director of marketing for the social network hi5, says that advertisers have to ask whether an ad campaign resonates with the reasons why users come to social sites in the first place. "The campaigns we're seeing have the most success are very interactive," Trigg says.

To exploit this, advertisers are turning to social-network programming tools like Facebook Platform and OpenSocial. Buddy Media builds applications for advertisers that are a far cry from simple banner ads. One Facebook application, RUN-Dezvous, created for sports-shoe maker New Balance, is a running game that encourages users to challenge their friends to a virtual race. Points earned through the game can even be converted into credit toward a pair of the company's sneakers. Another application, Launch a Package (an advertisement for FedEx), lets friends share large files by "flinging them" to each other using a gamelike interface.

Attention grabbing: Advertisers are turning to complex applications to reach users on social networks. Above is a game developed by Buddy Media for New Balance that encourages Facebook users to compete with their friends and rack up points that they can use to buy shoes.
Credit: Buddy Media/New Balance

But to make social advertisements truly engaging, advertisers need to better understand how to grab, and hold on to, users' attention. Ian Swanson, CEO of Sometrics, a company that provides tools for measuring social-advertising success, says that user response to banner advertising tends to plummet dramatically after the first five page views. For this reason, Swanson says, efforts to integrate ads with the actions that users want to take on the site are particularly important. Buddy Media's Lazerow adds that in the future, advertising applications will need to be even more complex and scalable so that they can handle large numbers of users if necessary.

For the time being, however, advertising money is spread pretty thin across social networks. Many developers see advertising as a way to make cash from their programs. Advertising firms are also looking to either build applications or hire companies to do it for them. Don Steele, vice president of digital marketing at MTV Networks, says that the company has spent half a million dollars advertising on social networks this year--a relatively small number, considering the size of the company and its advertising efforts.

Although much of the focus has been on advertising, a few experts have raised the idea that some users might be willing to pay subscription fees to use applications. Clara Shih is the director of the AppExchange product line at Salesforce.com, which provides an online marketplace for business applications. Business users tend to be more willing to accept the idea of paying for third-party applications, and she believes that the same might be true of some social-network users. This could very well free social networks from the need to woo advertisers at all.

Monday, November 26, 2007

Searching Video Lectures

A tool from MIT finds keywords so that students can efficiently review lectures.

By Kate Greene


Researchers at MIT have released a video and audio search tool that solves one of the most challenging problems in the field: how to break up a lengthy academic lecture into manageable chunks, pinpoint the location of keywords, and direct the user to them. Announced last month, the MIT Lecture Browser website gives the general public detailed access to more than 200 lectures publicly available through the university's OpenCourseWare initiative. The search engine leverages decades' worth of speech-recognition research at MIT and other institutions to convert audio into text and make it searchable.

The Lecture Browser arrives at a time when more and more universities, including Carnegie Mellon University and the University of California, Berkeley, are posting videos and podcasts of lectures online. While this content is useful, locating specific information within lectures can be difficult, frustrating students who are accustomed to finding what they need in less than a second with Google.

"This is a growing issue for universities around the country as it becomes easier to record classroom lectures," says Jim Glass, research scientist at MIT. "It's a real challenge to know how to disseminate them and make it easier for students to get access to parts of the lecture they might be interested in. It's like finding a needle in a haystack."

The fundamental elements of the Lecture Browser have been kicking around research labs at MIT and places such as BBN Technologies in Boston, Carnegie Mellon, SRI International in Palo Alto, CA, and the University of Southern California for more than 30 years. Their efforts have produced software that's finally good enough to find its way to the average person, says Premkumar Natarajan, scientist at BBN. "There's about three decades of work where many fundamental problems were addressed," he says. "The technology is mature enough now that there's a growing sense in the community that it's time [to test applications in the real world]. We've done all we can in the lab."

Looking at lectures: MIT is offering a video search tool that can pinpoint keywords in audio and video lectures. Here, a search for “exoskeleton and gasoline” results in this video clip. The automated transcript of the lecture appears below the video.
Credit: MIT

A handful of companies, such as online audio and video search engines Blinkx and EveryZing (which has licensed technology from BBN), are making use of software that converts audio speech into searchable text. (See "Surfing TV on the Internet" and "More-Accurate Video Search".) But the MIT researchers faced particular challenges with academic lectures. For one, many lecturers are not native English speakers, which makes automatic transcription tricky for systems trained on American English accents. Second, the words favored in science lectures can be rather obscure. Finally, says Regina Barzilay, professor of computer science at MIT, lectures have very little discernible structure, making them difficult to break up and organize for easy searching. "Topical transitions are very subtle," she says. "Lectures aren't organized like normal text."

To tackle these problems, the researchers first configured the software that converts the audio to text. They trained the software to understand particular accents using accurate transcriptions of short snippets of recorded speech. To help the software identify uncommon words--anything from "drosophila" to "closed-loop integrals"--the researchers provided it with additional data, such as text from books and lecture notes, which assists the software in accurately transcribing as many as four out of five words. If the system is used with a nonnative English speaker whose accent and vocabulary it hasn't been trained to recognize, the accuracy can drop to 50 percent. (Such a low accuracy would not be useful for direct transcription but can still be useful for keyword searches.)
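The article doesn't describe exactly how that supplemental text is folded in, but the standard mechanism is to adapt the recognizer's language model so that domain terms become plausible word hypotheses rather than being replaced by common-sounding substitutes. The toy sketch below illustrates the idea with a smoothed unigram model; the corpus text, and the choice of so simple a model, are purely illustrative.

```python
# A minimal sketch of language-model adaptation (not MIT's actual system):
# fold supplemental domain text into the model so that rare lecture terms
# such as "drosophila" become plausible hypotheses for the recognizer.
from collections import Counter
import re

def word_counts(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

# General-purpose training text versus supplemental lecture notes (toy data).
general_text = "the cell divides and the gene is expressed in the cell"
lecture_notes = "drosophila embryos show how each gene is expressed in the fly"

counts = word_counts(general_text) + word_counts(lecture_notes)
total = sum(counts.values())

def unigram_prob(word: str) -> float:
    """Add-one smoothed unigram probability over the combined corpus."""
    return (counts[word] + 1) / (total + len(counts))

# After adaptation, "drosophila" is no longer effectively impossible.
print(unigram_prob("drosophila"), unigram_prob("the"))
```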

The next step, explains Barzilay, is to add structure to the transcribed words. Software was already available that could break up long strings of sentences into high-level concepts, but she found that it didn't do the trick with the lectures. So her group designed its own. "One of the key distinctions," she says, "is that, during a lecture, you speak freely; you ramble and mumble."

To organize the transcribed text, her group created software that breaks the text into chunks that often correspond with individual sentences. The software places these chunks in a network structure; chunks that have similar words or were spoken closely together in time are placed closer together in the network. The relative distance of the chunks in the network lets the software decide which sentences belong with each topic or subtopic in the lecture.
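Barzilay's group hasn't published its algorithm in this article, but the description maps naturally onto a small sketch: represent each chunk as a bag of words, weight the tie between neighbouring chunks by lexical overlap and closeness in time, and start a new topic wherever that tie is weak. The threshold, time scale, and sample transcript below are invented for illustration.

```python
# A rough illustration of similarity-based segmentation (not the published
# MIT method): chunks that share vocabulary and sit close together in time
# stay in one topic; a weak tie starts a new one.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def segment(chunks, times, threshold=0.15, time_scale=60.0):
    """chunks: list of transcribed strings; times: start time of each chunk in seconds."""
    vectors = [Counter(c.lower().split()) for c in chunks]
    topics, current = [], [0]
    for i in range(1, len(chunks)):
        lexical = cosine(vectors[i - 1], vectors[i])
        proximity = math.exp(-(times[i] - times[i - 1]) / time_scale)
        if lexical * proximity < threshold:   # weak tie: start a new topic
            topics.append(current)
            current = []
        current.append(i)
    topics.append(current)
    return topics

chunks = ["newton's second law relates force and mass",
          "so force equals mass times acceleration",
          "now for the exam logistics next week",
          "the exam covers chapters one through four"]
print(segment(chunks, times=[0, 20, 400, 430]))   # -> [[0, 1], [2, 3]]
```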

The result, she says, is a coherent transcription. When a person searches for a keyword, the browser offers results in the form of a video or audio timeline that is partitioned into sections. The section of the lecture that contains the keyword is highlighted; below it are snippets of text that surround each instance of the keyword. When a video is playing, the browser shows the transcribed text below it.
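The lookup step itself is simple once the transcript has been segmented and time-stamped. Below is a minimal sketch of that step, with made-up segments and field names; the real Lecture Browser interface is far richer.

```python
# Given transcribed segments with start times, return each segment that
# mentions a keyword, plus a short snippet of surrounding words.
def find_keyword(segments, keyword, context=4):
    """segments: list of (start_seconds, text) pairs."""
    keyword = keyword.lower()
    hits = []
    for start, text in segments:
        words = text.split()
        for i, w in enumerate(words):
            if keyword in w.lower():
                snippet = " ".join(words[max(0, i - context): i + context + 1])
                hits.append({"start": start, "snippet": snippet})
                break
    return hits

segments = [(0, "today we introduce the exoskeleton of insects"),
            (310, "gasoline engines convert chemical energy into motion"),
            (620, "back to the exoskeleton and how it constrains growth")]
print(find_keyword(segments, "exoskeleton"))   # two hits, at 0s and 620s
```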

Barzilay says that the browser currently receives an average of 21,000 hits a day, and while it's proving popular, there is still work to be done. Within the next few months, her team will add a feature that automatically attaches a text outline to lectures so users can jump to a desired section. Further ahead, the researchers will give users the ability to make corrections to the transcript in the same way that people contribute to Wikipedia. While such improvements seem straightforward, they pose technical challenges, Barzilay says. "It's not a trivial matter, because you want an interface that's not tedious, and you need to propagate the correction throughout the lecture and to other lectures." She says that bringing people into the transcription loop could improve the accuracy of the system by a couple of percentage points, making the user experience even better.

Wednesday, September 17, 2008

A Face-Finding Search Engine

A new approach to face recognition is better at handling low-resolution video.

By Kate Greene

Fuzzy faces: A new face-recognition system from researchers at Carnegie Mellon works even on low-resolution images.
Credit: Pablo Hennings-Yeomans

Today there are more low-quality video cameras--surveillance and traffic cameras, cell-phone cameras and webcams--than ever before. But modern search engines can't identify objects very reliably in clear, static pictures, much less in grainy YouTube clips. A new software approach from researchers at Carnegie Mellon University could make it easier to identify a person's face in a low-resolution video. The researchers say that the software could be used to identify criminals or missing persons, or it could be integrated into next-generation video search engines.

Today's face-recognition systems actually work quite well, says Pablo Hennings-Yeomans, a researcher at Carnegie Mellon who developed the system--when, that is, researchers can control the lighting, angle of the face, and type of camera used. "The new science of face recognition is dealing with unconstrained environments," he says. "Our work, in particular, focuses on the problem of resolution."

In order for a face-recognition system to identify a person, explains Hennings-Yeomans, it must first be trained on a database of faces. For each face, the system uses a so-called feature-extraction algorithm to discern patterns in the arrangement of image pixels; as it's trained, it learns to associate some of those patterns with physical traits: eyes that slant down, for instance, or a prominent chin.
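The article doesn't name the feature-extraction algorithm, so the sketch below uses a classic stand-in, principal-component analysis ("eigenfaces"), simply to show what the training step produces: a mean face plus a basis onto which any new face image is projected to yield its feature vector.

```python
# PCA ("eigenfaces") as a stand-in feature extractor; the CMU system's actual
# features are not specified in the article. Toy data only.
import numpy as np

def train_eigenfaces(faces: np.ndarray, k: int = 8):
    """faces: (num_images, num_pixels) array, one flattened face per row."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal directions ("eigenfaces") in pixel space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def extract_features(image: np.ndarray, mean: np.ndarray, basis: np.ndarray):
    """Project a flattened face onto the learned basis to get its feature vector."""
    return basis @ (image - mean)

# Toy "database": 20 random 32x32 faces. A real system trains on many photos.
rng = np.random.default_rng(0)
training_faces = rng.random((20, 32 * 32))
mean, basis = train_eigenfaces(training_faces)
print(extract_features(training_faces[0], mean, basis).shape)   # -> (8,)
```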

The problem, says Hennings-Yeomans, is that existing face-recognition systems can identify faces only in pictures with the same resolution as those with which the systems were trained. This gives researchers two choices if they want to identify low-resolution pictures: they can either train their systems using low-resolution images, which yields poor results in the long run, or they can add pixels, or resolution, to the images to be identified.

The latter approach, which is achieved by using so-called super-resolution algorithms, is common, but its results are mixed, says Hennings-Yeomans. A super-resolution algorithm makes assumptions about the shape of objects in an image and uses them to sharpen object boundaries. While the results may look impressive to the human eye, they don't accord well with the types of patterns that face-recognition systems are trained to look for. "Super-resolution will give you an interpolated image that looks better," says Hennings-Yeomans, "but it will have distortions like noise or artificial [features]."

Together with B. Vijaya Kumar, a professor of electrical and computer engineering at Carnegie Mellon, and Simon Baker of Microsoft Research, Hennings-Yeomans has tested an approach that improves upon face-recognition systems that use standard super-resolution. Instead of applying super-resolution algorithms to an image and running the results through a face-recognition system, the researchers designed software that combines aspects of a super-resolution algorithm and the feature-extraction algorithm of a face-recognition system. To find a match for an image, the system first feeds it through this intermediary algorithm, which doesn't reconstruct an image that looks better to the human eye, as super-resolution algorithms do. Instead, it extracts features that are specifically readable by the face-recognition system. In this way, it avoids the distortions characteristic of super-resolution algorithms used alone.
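The researchers' exact formulation isn't given in the article, but a toy linear version conveys the structure of the approach: for each gallery identity, fit a high-resolution face to the low-resolution probe and to that identity's stored features in one coupled least-squares problem, then keep the identity that explains the probe best. Every operator, dimension, and data set below is invented for illustration.

```python
# Toy coupled fit: the downsampling model and the feature extractor enter one
# least-squares problem per gallery identity (a loose sketch of the idea, not
# the CMU algorithm).
import numpy as np

rng = np.random.default_rng(1)
high, low, m, k = 64, 16, 5, 6   # high-res pixels, low-res pixels, face-subspace dims, feature dims

D = np.repeat(np.eye(low), 4, axis=1) / 4.0   # downsampling by 4-pixel averaging
F = rng.standard_normal((k, high))            # stand-in feature extractor
B = rng.standard_normal((high, m))            # stand-in "plausible face" basis (e.g. from PCA)

coeffs = rng.standard_normal((8, m))          # 8 gallery identities as subspace coefficients
gallery = coeffs @ B.T                        # their high-resolution images
gallery_feats = gallery @ F.T                 # feature vectors the recognizer stores

def match(y: np.ndarray, lam: float = 1.0) -> int:
    """y: low-resolution probe. Returns the index of the best-matching identity."""
    scores = []
    for f_g in gallery_feats:
        # min over c of ||D B c - y||^2 + lam ||F B c - f_g||^2, as one stacked system.
        A = np.vstack([D @ B, np.sqrt(lam) * F @ B])
        b = np.concatenate([y, np.sqrt(lam) * f_g])
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        scores.append(np.linalg.norm(A @ c - b))   # how poorly this identity explains the probe
    return int(np.argmin(scores))

probe = D @ gallery[3] + 0.01 * rng.standard_normal(low)   # noisy low-res view of identity 3
print(match(probe))                                        # expected: 3
```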

In prior work, the researchers showed that the intermediary algorithm improved face-matching results when finding matches for a single picture. In a paper being presented at the IEEE International Conference on Biometrics: Theory, Applications, and Systems later this month, the researchers show that the system works even better, in some cases, when multiple images or frames, even from different cameras, are used.

Make me a match: The “probe images” along the top row are used to query a database of stored “gallery images,” much like keywords entered into a Web search engine. When faces match, as they do along the diagonal, the resulting composite image has smooth features. Blurred features indicate a mismatch.
Credit: Pablo Hennings-Yeomans

The approach shows promise, says Pawan Sinha, a professor of brain and cognitive sciences at MIT. The problem of low-resolution images and video "is undoubtedly important and has not been adequately tackled by any of the commercial face-recognition systems that I know of," he says. "Overall, I like the work."

Ultimately, says Hennings-Yeomans, super-resolution algorithms still need to be improved, but he doesn't think it would take too much work to apply his group's approach to, say, a Web tool that searches YouTube videos. "You're going to see face-recognition systems for image retrieval," he says. "You'll Google not by using text queries, but by giving an image."

Monday, September 15, 2008

Program turns to online masses to improve patents

By Associated Press

WASHINGTON (AP) -- Some of the biggest players in the technology industry complain that the U.S. patent system is broken -- putting too many patents of dubious merit in the hands of people who can use them to drag companies and other inventors to court.

And Blaise Mouttet, a small inventor in Alexandria, Va., thinks he knows why. The problem, he said, is that "there are too many lawyers and not enough inventors involved with the patent system."

So Mouttet is taking part in an experimental program launched in June 2007 with the U.S. Patent and Trademark Office and backed by the technology industry that is intended to give the public -- including inventors -- more of a voice in the system.

The concept behind the program, called Peer-to-Patent, is straightforward: Publish patent applications on the Web for all to see and let anyone with relevant expertise -- academics, colleagues, even potential rivals -- submit commentary to be passed along to the Patent Office.

By using the power of the Internet to tap the wisdom of the masses, Peer-to-Patent aims to dig up hard-to-find "prior art" -- evidence that an invention already exists or is obvious and therefore doesn't deserve a patent.

The goal is to locate prior art that Patent Office examiners might not find on their own -- and to produce better patents by reducing ones granted on applications that aren't novel. The hope is that this will drive innovation by improving the patent process and reducing the patent infringement lawsuits clogging the courts.

"The Patent and Trademark Office is the agency of citizen creativity, and it needs more and better information to do its job of awarding patents to those citizens who are truly the most creative," said New York Law School professor Beth Noveck, who came up with the idea for Peer-to-Patent while teaching a patent law class. "A patent is a pretty significant monopoly, so we want to make sure we are giving it to the right people."

Peer-to-Patent has attracted financial support from a cross-section of the technology sector and foundations and is in its second pilot year. In the first year, the voluntary program focused on software, computer and information security patents -- drawing applications from industry heavyweights such as International Business Machines Corp., Hewlett-Packard Co., Microsoft Corp., General Electric Co. and open source software pioneer Red Hat Inc., as well as small inventors like Mouttet.

Mouttet, a former Patent Office examiner and now a graduate student in electrical engineering, submitted an application on electronic uses of nanomaterials. Although the Patent Office has rejected his claim -- in part because of prior art unearthed through Peer-to-Patent -- he is appealing the decision and optimistic he will eventually get his patent. And he is confident it will be stronger for having gone through the process.

But it is the big technology companies that have the highest hopes for Peer-to-Patent since they are some of the most vocal critics of the existing system.

They warn that the Patent Office has been overwhelmed by a sharp increase in patent applications in recent years, particularly in computing. The agency has more than 5,800 examiners with specialized expertise in a range of areas, but they are sifting through a mountain of applications: 467,243 were submitted in fiscal 2007, up from 237,045 in fiscal 1997 and 137,173 in fiscal 1987.

As a result, said Dave Kappos, vice president of intellectual property law for IBM, it is taking big technology companies with huge patent portfolios longer and longer to get applications through the system. The Patent Office had a backlog of nearly 761,000 applications at the end of fiscal 2007, with applicants waiting an average of two years and eight months for a final decision.

That is tough for an industry built on rapid innovation, short product life cycles and technology that can become quickly outdated, Noveck said. Indeed, a key benefit of participating in the Peer-to-Patent program is the promise of an expedited review, with a preliminary Patent Office decision in as few as seven months.

Backlog is only part of the problem, however. Poor patent quality is just as big a concern.

There are plenty of examples of controversial patents in different industries, such as the one awarded to Amazon.com Inc. for its "1-click" online shopping feature or the one granted to J.M. Smucker Co. for a crustless peanut-butter-and-jelly sandwich.


But some of the most contentious patents have come out of the tech sector since software and other cutting-edge technologies are relatively new to the Patent Office and evolving quickly, explained Mark Webbink, director of New York Law School's Center for Patent Innovations, home to Peer-to-Patent, and former general counsel for Red Hat. That means that patent examiners don't have long-established databases of existing inventions to consult in reviewing these applications.

"With technology, the prior art often can't be found in existing patents or academic journal articles," Noveck said. "It could exist in a string of computer code posted online somewhere that isn't indexed."

The result of substandard patents, tech companies say, has been a sharp increase in costly infringement lawsuits that eat up valuable resources and threaten to keep innovative products off the market. According to James Bessen and Michael J. Meurer of Boston University School of Law, 2,830 patent lawsuits were filed in U.S. district courts in 2006, up from 1,840 in 1996 and 1,129 in 1986.

Technology companies are particularly vulnerable to infringement litigation since their products can contain hundreds if not thousands of linked patented components critical to their basic operation. In one closely watched case, a protracted legal battle nearly forced the shutdown of the popular BlackBerry wireless e-mail service.

The BlackBerry has in fact become a rallying cry for technology lobbyists pressing Congress to overhaul the patent system. Among other things, the industry wants to streamline the patent approval process and limit damages and injunctions awarded to patent holders who win infringement cases. But with those proposals stalled in the Senate, Peer-to-Patent offers another way to improve the system, said Curtis Rose, director of patents for Hewlett-Packard.

Not everyone is sold on the concept of Peer-to-Patent. Stephen Key, an inventor in California who has patented everything from toys to container labels, worries that the program requires applicants to put their ideas out there on the Web for anyone to see -- and potentially steal.

Boston University's Meurer also questions how effective Peer-to-Patent will be since he believes the real factor driving the increase in patent litigation is not a lack of prior art, but rather the vague, overly broad scope of too many patent claims today.

"Applicants come in and ask for the sun, moon and stars and they say: 'Let the Patent Office tell me what is and isn't patentable,'" said John Doll, U.S. Commissioner for Patents. "It's a burden on the system."

Indeed, said Stanford Law School professor Mark Lemley, the challenge facing the Patent Office is to find a balance between awarding patents in order to encourage innovation without making it too easy to obtain a patent that can be used to abuse the system.

Noveck believes Peer-to-Patent will help strike that balance. The Patent Office reports that it has issued preliminary decisions on 40 of the 74 applications that have come through the program so far. Of those, six cited prior art submitted only through Peer-to-Patent, while another eight cited art found by both the examiner and peer reviewers.

The question now is whether the program can be scaled to review hundreds or even thousands of applications that extend far beyond the technology arena. So in its second year, Peer-to-Patent is being expanded to include claims covering electronic commerce and so-called "business methods," a controversial category of patents vital to the financial services sector.

Goldman Sachs Group Inc., for one, is submitting a number of applications, including one for an equities trading platform used to raise capital without a public offering. John Squires, Goldman's chief intellectual property counsel, has high hopes for the program.

"This is a way to harness the wisdom of the crowds," Squires said. "Why should the Patent Office have to operate without the benefit of all the information on the horizon?"

Copyright 2008 The Associated Press.

Monday, September 15, 2008

American finance

Nightmare on Wall Street

Sep 15th 2008 | NEW YORK AND WASHINGTON, DC
From Economist.com

A weekend of high drama reshapes American finance


EVEN by the standards of the worst financial crisis for at least a generation, the events of Sunday September 14th and the day before were extraordinary. The weekend began with hopes that a deal could be struck, with or without government backing, to save Lehman Brothers, America’s fourth-largest investment bank. Early Monday morning Lehman filed for Chapter 11 bankruptcy protection. It has more than $613 billion of debt.

Other vulnerable financial giants scrambled to sell themselves or raise enough capital to stave off a similar fate. Merrill Lynch, the third-biggest investment bank, sold itself to Bank of America (BofA), an erstwhile Lehman suitor, in a $50 billion all-stock deal. American International Group (AIG) brought forward a potentially life-saving overhaul and went cap-in-hand to the Federal Reserve. But its shares also slumped on Monday.

The situation remains fluid, and investors stampeded towards the relative safety of American Treasury bonds. Stockmarkets tumbled around the world (though some Asian bourses were closed) and the oil price plummeted to well under $100 a barrel. The dollar fell sharply, and the yield on two-year Treasury notes fell below 2% on hopes the Federal Reserve would cut interest rates at a scheduled meeting on Tuesday. American stock futures were deep in the red too. Spreads on risky credit, already elevated, widened further.

With these developments the crisis is entering a new and extremely dangerous phase. If Lehman's assets are dumped in a liquidation, prices of like assets on other firms' books will also have to be marked down, eroding their capital bases. The government's refusal to help with a bail-out of Lehman will strip many firms of the benefit of being thought too big to fail, raising their borrowing costs. Lehman’s demise highlights the industry’s inability, or unwillingness, to rescue the sick, even when the consequences of inaction are potentially dire.

The biggest worry is the effect on derivatives markets, particularly the giant one for credit-default swaps. Lehman is a top-ten counterparty in CDSs, holding contracts with a notional value of almost $800 billion. On Sunday, banks called in their derivatives traders to assess their exposures to Lehman and work on mitigating risks. The Securities and Exchange Commission, Lehman’s main regulator, said it is working with the bank to protect clients and trading partners and to “maintain orderly markets”.

Government officials believed they had persuaded a consortium of Wall Street firms to back a new vehicle that would take $40 billion-70 billion of dodgy assets off Lehman’s books, thereby facilitating a takeover of the remainder. But the deal died when the main suitors, BofA and Barclays, a British bank, walked away on Sunday afternoon. Both were unwilling to buy the firm, even shorn of the worst bits, without some sort of government backstop.

But Hank Paulson, the treasury secretary, decided to draw a line and refuse such help. After the Fed had bailed out Bear Stearns in March and the Treasury had taken over Fannie Mae and Freddie Mac last weekend, expectations were high that they would do the same for Lehman. And that was precisely the problem: it would have confirmed that the federal government stood behind all risk-taking in the financial system, creating moral hazard that would take years to undo and expanding taxpayers’ liability almost without limit. Conceivably, Congress could have denied Mr Paulson the money he needed even if he had been inclined to bail Lehman out.

This left Lehman with no option but to prepare for bankruptcy. Though the bank has access to a Fed lending facility, introduced after Bear’s takeover by JP Morgan Chase, the collapse of its share price left it unable to raise new equity and facing crippling downgrades from rating agencies. Moreover, rival firms that had continued to trade with it in recent weeks—at the urging of regulators—had begun to pull away in the past few days. The inability to find a buyer is a huge blow to Lehman’s 25,000 employees, who own a third of the company’s now-worthless stock; in such a difficult environment, most will struggle to find work at other financial firms. It also makes for an ignominious end to the career of Dick Fuld, Lehman’s boss since 1994, who until last year was viewed as one of Wall Street’s smartest managers.

Merrill’s rush to sell itself was motivated by fear that it might be next to be caught in the stampede. Despite selling a big dollop of its most rotten assets recently, the market continued to question its viability. Its shares fell by 36% last week, and hedge funds had started to move their business elsewhere. Its boss, John Thain, concluded that it needed to strike a deal before markets reopened. It approached several firms, including BofA and Morgan Stanley, but only BofA felt able to conduct the necessary due diligence in time.

Not only has Mr Thain managed to shelter his firm from the storm, but he has also secured a price well above its closing price last Friday, $29 per share compared with $17. How he managed that in such an ugly market is not yet clear. Ken Lewis, BofA’s boss, is no fan of investment banking, but he is a consummate opportunist, and he has coveted Merrill’s formidable retail brokerage. Still, the deal carries risks. It will be a logistical challenge, all the more so since BofA is in the middle of digesting Countrywide, a big mortgage lender. Commercial-bank takeovers of investment banks have a horrible history because of the stark cultural differences. And it is not clear if BofA has a clear picture of Merrill’s remaining troubled assets.

The takeover of Merrill leaves just two large independent investment banks in America, Morgan Stanley and Goldman Sachs. Both are in better shape than their erstwhile rivals. But this weekend’s events cast a shadow over the standalone model, with its reliance on leverage and skittish wholesale funding. Spreads on both banks' CDSs, which reflect investors' views of the probability of default, soared on Monday.

Wall Street has company in its misery. Washington Mutual, a big thrift, is fighting for survival under a new boss. Even more worryingly, so is AIG, America’s largest insurer, thanks to a reckless foray into CDSs of mortgage-linked collateralised-debt obligations. Investors have fled, fearing the firm will need a lot more new capital than the $20 billion raised so far. Prompted by the weekend bloodletting, AIG brought forward to Monday a restructuring that was to have been unveiled on September 25th. This was expected to include the sale of its aircraft-leasing arm and other businesses. It is also reported to be seeking a $40 billion bridge loan from the Fed, to be repaid once the sales go through, in the hope that this will attract new capital, possibly from private-equity firms.

With Lehman left dangling, official attention is now turning to putting more safeguards in place to soften the coming shock to markets and the economy. The first step has been to encourage Lehman’s counterparties to get together and try to net out as many contracts as possible. On Sunday the Fed also expanded the list of collateral it will accept for loans at its discount window, to include even equities; and dealers may lend any investment-grade security, not just triple-A rated, to the Fed in exchange for Treasury bonds.

Markets are also pricing in some possibility that the Fed will cut its short-term interest rate target from 2% when it meets for a regularly scheduled meeting on Tuesday. That would be an abrupt turnaround from August, when officials figured their next move would be to raise rates, not lower them.

In a sign of how bad things are, even straitened banks are stumping up cash to help the stabilisation efforts. On Sunday, a group of ten banks and securities firms set up a $70 billion loan facility that any of the founding members can tap if it finds itself short of cash.

Even if markets can be stabilised this week, the pain is far from over—and could yet spread. Worldwide credit-related losses by financial institutions now top $500 billion, against which only $350 billion of equity has been replenished. This $150 billion gap, leveraged 14.5 times (the average gearing for the industry), translates to a $2 trillion reduction in liquidity. Hence the severe shortage of credit and predictions of worse to come.
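For readers who want to check that arithmetic, the figures quoted above work out as follows (a rough back-of-the-envelope calculation, nothing more):

```python
# Back-of-the-envelope check of the figures quoted in the article.
losses = 500           # $bn of worldwide credit-related losses
equity_raised = 350    # $bn of replacement equity raised so far
gearing = 14.5         # average leverage across the industry

gap = losses - equity_raised     # 150 ($bn of capital not yet replaced)
liquidity_hit = gap * gearing    # 2,175 ($bn), i.e. roughly $2 trillion
print(gap, liquidity_hit)
```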

Indeed, most analysts think that the deleveraging still has far to go. Some question how much has taken place. Bianco Research notes that while the credit positions of the 20 largest banks have fallen by $300 billion, to $1.3 trillion, since the Fed started its special lending facilities, the same amount has been financed by the Fed itself through these windows. In other words, instead of deleveraging, the banks have just shifted a chunk of their risk to the central bank. As spectacular as this weekend was, more drama is on the way.

Monday, September 15, 2008

Safe Transactions with Infected PCs

A new tool assumes that a PC is loaded with malware--and protects transactions anyway.

By Erica Naone

Your computer has been breached by malicious hackers: it's completely loaded with malware and spyware. You're about to get online, connect to a financial institution, and make some transactions. Is there anything, at this point, that can keep your identity off the black market? SiteTrust, a tool released today by Waltham, MA, data-security company Verdasys, aims to protect users from fraud, even when their computers have been compromised.

Credit: Technology Review

"Malware is on the rise," says Verdasys chief technology officer Bill Ledingham. Many existing protection technologies don't work against all the malware that's out there, he says, partly because they're built to protect against known attacks. Users, he adds, are often inconsistent about employing antivirus software and keeping it updated, and even when they're not, some malware is sophisticated enough to get through anyway. "Our premise," Ledingham says, "is that, rather than trying to clean up the machines, assume the machine is already infected and focus on protecting the transaction that goes on between the consumer and the enterprise website."

The problem of malware on users' computers is "the number-one problem that the financial institutions are wrestling with today," says Forrester Research senior analyst Geoffrey Turner, an expert on online fraud. Financial institutions can take steps to secure the connections between their servers and their customers' PCs, Turner says; they can even ensure the security of the customer's Web browser. But they're stumped, he says, when it comes to the customer's operating system. Most successful attempts to steal computer users' identities, Turner says, involve using malware to capture their credentials or conduct transactions behind the scenes without their knowledge. "The challenge is, how do you secure the end-user computer?" he says. "Should you even, as a bank, be trying to do that?"

Verdasys thinks that the answer is yes. After licensing SiteTrust from Verdasys, a financial institution would provide it to users as a supplement to their existing antivirus software. Once SiteTrust is downloaded and installed, Ledingham says, it takes up less than a megabyte of disk space. When the user is connected to a protected site, SiteTrust consumes 1 to 2 percent of the computer's processing capacity. While the tool could work with multiple sites, the initial idea is that a customer would receive it for use with a specific website.

SiteTrust bypasses malware because it is essentially a rootkit--a program designed to bury itself deep in a user's operating system, where it can take fundamental control of most of the software running on the machine. The idea, Ledingham says, is that SiteTrust will burrow down to a lower level than any malware on the system. Verdasys has put a lot of research into ensuring that SiteTrust does just that, Ledingham says, but he acknowledges that if the tool becomes successful, online criminals will probably focus on finding ways to go even deeper. He says that Verdasys plans to keep improving the tool, hoping to stay a step ahead of attackers.

When the user types in the URL of a protected site, Ledingham says, SiteTrust steps in. Without changing the appearance of the user's screen, SiteTrust separates the secure transaction from whatever else might be going on in the browser by running a fresh version of the browser code as its own "process." (A process is the series of commands that the computer executes to run a program, and modern computers can run dozens of them at once.) SiteTrust then monitors this process to make sure that no other program tries to interfere with it. As the user interacts with the site, SiteTrust bypasses many of the vulnerabilities of the operating system, instead taking information from the user's keyboard, encrypting it immediately, and sending it to the website. SiteTrust currently runs on Windows machines and works with the Internet Explorer and Firefox browsers, but Ledingham says that the company is working on Linux, Mac, and Safari versions.
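Verdasys hasn't published SiteTrust's internals, and a kernel-level agent can't be reproduced in a few lines, but the user-space sketch below conveys the isolation idea: run the sensitive exchange in its own process, read the credential there, encrypt it (here simply via TLS) before it leaves that process, and hand back nothing but a status flag. The host name is a placeholder, and this is only an illustration of the concept, not of Verdasys's design.

```python
# User-space sketch of transaction isolation: the credential is read and
# encrypted inside a dedicated process, so code running elsewhere never sees
# the plaintext. SiteTrust itself works far deeper in the operating system.
import multiprocessing
import socket
import ssl
from getpass import getpass

HOST = "secure.example-bank.com"    # hypothetical protected site

def isolated_transaction(host, result):
    secret = getpass(f"Password for {host}: ")     # read inside the isolated process
    context = ssl.create_default_context()         # verifies the site's certificate
    try:
        with socket.create_connection((host, 443), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                # TLS encrypts the credential before it leaves this process; a real
                # client would wrap it in a proper request to the login endpoint.
                tls.sendall(f"password={secret}\r\n".encode())
        result.put("ok")
    except OSError as err:
        result.put(f"failed: {err}")

if __name__ == "__main__":
    outcome = multiprocessing.Queue()
    worker = multiprocessing.Process(target=isolated_transaction, args=(HOST, outcome))
    worker.start()
    worker.join()
    print("transaction status:", outcome.get())    # the parent sees only this flag
```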

SiteTrust is a new application of the technology behind Verdasys's existing product, the Digital Guardian, which is meant to protect businesses against internal theft. The Digital Guardian also uses a rootkit, installed on every computer in an organization, that watches what users do with sensitive information and flags suspicious behavior. Ledingham notes that, although rootkits have caused controversy in the past, particularly when they were installed without users' knowledge, Verdasys has years of experience designing them so that they don't interfere with a computer's normal use. SiteTrust, Ledingham says, includes an uninstall option so that users can completely remove it if they choose, and it doesn't send any background information about the user to the protected site.

Turner says that he appreciates Verdasys's approach with SiteTrust--in particular, the way that the company has planned for the inevitability of online criminals' targeting the tool itself, lining up improvements to make that more difficult. He adds that the company's distribution model is important to getting SiteTrust to consumers. "People aren't aware that they need this level of protection on their own PC," Turner says. Customers aren't likely to look for additional protection unless encouraged to do so by financial institutions that they trust. Turner also notes that receiving the tool from a trusted institution should help counter consumers' general worries about rootkits.

SiteTrust is launching to six million customers of an undisclosed online broker in the near future. The company plans to make additional deals to protect other websites.