Friday, May 23, 2008

Double, double, oil and trouble

May 23rd 2008

The price of oil is beyond $130 a barrel. Where will it stop?


THE price of oil may soon hit $200 a barrel—or so, at any rate, believes Shokri Ghanem, Libya’s oil minister. A few years ago such a prediction would have seemed absurd. But the price has doubled in the past year and has risen by 40% this year alone. It touched yet another record, of over $135 a barrel (before dipping slightly) on Thursday 22nd May. So more spectacular increases seem all too plausible.

The immediate trigger for the latest jump seems to have been an unexpected fall in American oil inventories. But stocks, although falling, are not particularly low: in America, they are only slightly below the average of the past five years. Supplies of petrol and other refined fuels are actually a little above average. And since demand for oil and petrol is falling in America, lower stocks are not as much of a worry as they might normally be.

Mr Ghanem’s explanation for this curious state of affairs is to blame speculators, who are investing ever more enthusiastically in oil futures. Many others share his view. Joe Lieberman, an American senator, points out that the value of investment funds that aim to track the price of oil and other raw materials has risen from $13 billion to $260 billion over the past five years. He blames these “index speculators” for a big part of the commodity-price increases. He and his colleagues in the Senate are so worried that they are contemplating measures to curb the traders’ exuberance.
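Mr Lieberman's figures imply a remarkable compound growth rate. A quick back-of-the-envelope calculation, using only the two numbers quoted above:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR) of
# commodity index funds, from $13 billion to $260 billion over five years.
start, end, years = 13.0, 260.0, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Total growth: {end / start:.0f}x")   # 20x
print(f"Implied annual growth: {cagr:.0%}")  # ~82% per year
```

A twentyfold rise in five years works out to roughly 82% growth per year, which gives some sense of why the Senate took notice.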

Yet few bankers agree that speculation has much to do with price rises. For one thing, indexed funds do not actually buy any physical oil, since it is bulky and expensive to store. Instead they buy contracts for future delivery, a few months hence. When the delivery date approaches, they sell their contract to someone who actually needs the oil right away, and then invest the proceeds in more futures. So, far from holding oil back from the market, they tend to be big sellers of oil for immediate delivery.

That is important because it means that there is no hoarding, typically a prerequisite for a speculative bubble. Indeed, as discussed, America’s stocks and those of most other countries are at normal levels. If the indexed funds were indeed pushing the price of oil beyond the level justified by supply and demand, then they would be having trouble selling their futures contracts at such high prices before they matured. But there is no sign of that. In fact, until recently, oil for immediate delivery was more expensive than futures contracts.

Economic theory suggests that the future price is simply traders’ best guess of the shape of things to come. And traders seem to be very worried about the future. They recently pushed up the price of oil to be delivered at the end of 2016 to over $145 a barrel.

They seem to be motivated by the sobering realities of supply and demand, rather than reckless speculation. The output of several big oil exporters, such as Russia, Mexico and Venezuela, is declining. Yet none of those countries allows foreign investors unfettered access to develop new fields or increase production from existing ones. Many of the most promising areas for exploration, including Saudi Arabia, Iraq and Iran, are in effect off-limits to Western oil firms. Worse, the cost of developing new fields is rising almost as fast as the oil price. Cambridge Energy Research Associates, a consulting firm, believes it has more than doubled since 2000.

All this means that global oil production is growing only slowly. Global demand, meanwhile, continues to rise, thanks to an ever-increasing thirst for oil in fast-growing developing countries such as China and India. Their increased consumption is more than compensating for falling demand for oil in rich countries. In other words, the investors that Messrs Lieberman and Ghanem accuse of unfounded speculation may instead have concluded that the world will be short of oil for some time to come. Instead of hectoring speculators, perhaps Mr Lieberman should be hounding Mr Ghanem and the leaders of other oil-rich countries to allow foreign oil firms more access.

Why Zappos Pays New Employees to Quit—And You Should Too

I spend a lot of time visiting with companies and figuring out what ideas they represent and what lessons we can learn from them. I usually leave these visits underwhelmed. There are plenty of companies with a hot product, a hip style, or a fast-rising stock price that are, essentially, one-trick ponies—they deliver great short-term results, but they don’t stand for anything big or important for the long term.

Every so often, though, I spend time with a company that is so original in its strategy, so determined in its execution, and so transparent in its thinking, that it makes my head spin. Zappos is one of those companies. Two weeks ago, I paid a visit to Zappos headquarters in Henderson, Nevada, just outside Las Vegas, and spent time with CEO Tony Hsieh and his colleagues. I could write a whole series of posts (and just might) about what I learned from this incredible operation. But I want to focus this post on one small practice that offers big lessons for leaders who are serious about changing the game in their field—and filling their organization with people who are just as committed as they are.

First, some background. As most of you know, Zappos sells shoes—lots of them—over the Internet. The company expects to generate sales of more than $1 billion this year, up from just $70 million five years ago. Part of the reason for Zappos’s meteoric success is that it got the economics and operations right. It offers customers a huge selection—four million pairs of shoes (and other items, such as handbags and apparel) in a warehouse in Kentucky next to a UPS hub. (If Imelda Marcos visited that warehouse she'd likely have a coronary on the spot.) It also offers free delivery and free returns—if you don’t like the shoes, you box them up and send them back to Zappos for no charge.

So the value proposition is a winner. But it’s the emotional connection that seals the deal. This company is fanatical about great service—not just satisfying customers, but amazing them. The company promises free, four-day delivery. That’s pretty good. But most of the time it delivers next-day service, a surprise that leaves a lasting impression on customers: “You said four days, but I got them the next morning.”

Zappos has also mastered the art of telephone service—a black hole for most Internet retailers. Zappos publishes its 1-800 number on every single page of the site—and its smart and entertaining call-center employees are free to do whatever it takes to make you happy. There are no scripts, no time limits on calls, no robotic behavior, and plenty of legendary stories about Zappos and its customers.

This is a company that’s bursting with personality, to the point where a huge number of its 1,600 employees are power users of Twitter so that their friends, colleagues, and customers know what they’re up to at any moment in time. But here’s what’s really interesting. It’s a hard job, answering phones and talking to customers for hours at a time. So when Zappos hires new employees, it provides a four-week training period that immerses them in the company’s strategy, culture, and obsession with customers. People get paid their full salary during this period.

After a week or so in this immersive experience, though, it’s time for what Zappos calls “The Offer.” The fast-growing company, which works hard to recruit people to join, says to its newest employees: “If you quit today, we will pay you for the amount of time you’ve worked, plus we will offer you a $1,000 bonus.” Zappos actually bribes its new employees to quit!

Why? Because if you’re willing to take the company up on the offer, you obviously don’t have the sense of commitment they are looking for. It’s hard to describe the level of energy in the Zappos culture—which means, by definition, it’s not for everybody. Zappos wants to learn if there’s a bad fit between what makes the organization tick and what makes individual employees tick—and it’s willing to pay to learn sooner rather than later. (About ten percent of new call-center employees take the money and run.)

Indeed, CEO Tony Hsieh and his colleagues keep raising the size of the quit-now bonus. It started at $100, went to $500, and may well go higher than $1,000 as the company gets bigger (and it becomes even more difficult to maintain the all-important culture and obsession with customers).

It’s a small practice with big implications: Companies don’t engage emotionally with their customers—people do. If you want to create a memorable company, you have to fill your company with memorable people. How are you making sure that you’re filling your organization with the right people? And how much are you willing to pay to find out?

The Microsoft vs Google Endgame

Today, something really interesting happened. Both Google and Microsoft are poised to make fairly dramatic strategic moves. But the moves they're poised to make are polar opposites: the contrast between them couldn't be starker.

So let's do exactly that: contrast them, to bring to life many of the issues we've been discussing.

According to an interesting rumour making the rounds: Microsoft is to acquire Yahoo's search business as well as Facebook, and lock both down, to better take on Google. And Google is letting third parties access one of its most valuable assets: its ad network.

What's really going on here? Microsoft is poised to shift from open to closed. Google is already making exactly the opposite move: shifting from closed to open.

Here's what ex-Microsoftie Robert Scoble has to say about Microsoft's potential moves:

“…These two moves would change everything and totally explain why Facebook is working overtime to keep Google from importing anything.

Google is locked out of the Web that soon will be owned by Microsoft. We will never get an open Web back if these two deals happen.

This has created HUGE value for Microsoft and has handed Steve Ballmer an Internet strategy which brings Microsoft from last place to first in less than a week.”

So have the geeks in Redmond suddenly outsmarted everyone again - simply by going back to embrace, extend, and exterminate? Will value really be created, and power Microsoft back to the top?

Now, buying Yahoo's search business is just a grab for market share in online advertising. It's the second part of the rumour that's more interesting.

What happens if Microsoft buys Facebook and keeps it closed? Not much – because, as I’ve pointed out recently, there are tremendous structural pressures for openness.

Unfortunately for Steve Ballmer, this ain’t your grandpa’s “platform war”. It’s the reverse: only openness can maximize the value of network effects in this space, because there are no hard technological switching costs creating lock-in. For example, yesterday, it was massively costly to recode applications across operating systems, or for consumers to switch all their applications to a new platform – but that’s distinctly not true on the www: in fact, much of the point of the www is to vaporize those tired, obsolete scale economics and switching costs.

That’s why it’s a (massive) fallacy to argue that any value has been “created”. Value might be created when connected consumers can share and trade preference information or applications across social nets. But value is actually foregone if Microsoft acquires a closed Facebook, because opportunities for consumers, developers, and advertisers alike to meaningfully interact are destroyed. That’s what evil really means: coercing others into accepting value destruction.

If that doesn't make sense, read this killer post from Jeff Jarvis, expanding last week's discussion.

Unfortunately for them, and luckily for the rest of us, given these economics, Mark Zuckerberg and Steve Ballmer’s hare-brained scheme for world domination isn’t even serious evil - it’s less Scaramanga than it is Gargamel: born to lose, and destined to fail.

Contrast that with Google's shift to openness - can you see how it unlocks value for everyone? That's why Microsoft's move is a textbook example of how not to think strategically at the edge. It's yet another example why Google is in a class of its own - across the economy - when it comes to next-generation strategy. Google opening up its ad networks is strategic greatness at work.

What Microsoft really needs to do is take a lesson from Google's book, instead of staying trapped in its own fading past. Redmond must understand that yesterday's games of domination and control are obsolete - and that it has to rethink them. How could it do that?

By following many of the principles we've been discussing here - open beats closed, listening beats talking, good beats evil - Microsoft could learn how to play new kinds of games that lead to new sources of advantage.

Or hell might freeze over: it's just not in Microsoft's command, control, coerce, and crush DNA to be able to make those radical decisions in the first place. Advantage, today, isn't in how you play the game, but what games you can play to begin with: it's in your DNA.

That's my take - but, as always here, your perspectives are probably richer than mine – so fire away in the comments and let's kickstart a discussion.

Thursday, May 22, 2008

How to Sell Services More Profitably

Product companies often try to differentiate themselves by offering ancillary services. Many struggle to make money at it.

by Werner Reinartz and Wolfgang Ulaga

Manufacturers frequently believe that adding value in the form of services will provide a competitive advantage after their products start to become commodities. When the strategy works, the payoffs are impressive, and a company may even discover that its new service business makes more money than its products. But for every success story, at least five cautionary tales remind us that manufacturing companies will most likely struggle to turn a profit from their service businesses.

Even the best stumble. Consider one large technology firm we studied—a world leader in medical equipment, IT, automotive equipment, and transportation systems. Back in 2003 the company’s €5 billion IT business unit realized that the limited product-related services it offered, such as installation and training, generated twice the 3% to 4% net margins it earned on its increasingly commoditized product offerings. The unit decided, therefore, to invest heavily in developing its service capabilities for large clients. Managers estimated that such customized services would soon generate margins of 15%.

The estimate proved very wide of the mark, and the unit recorded a negative net profit margin of more than 10% in 2005. The venture was a serious loss-maker, costing the group around €260 million in 2005 alone. The losses stemmed from several distinct causes: First, the company found that the back-office production of complex services was much more difficult than expected. Each client’s requirements were highly customized, which meant that little learning and knowledge could be leveraged across cases. Second, the salespeople were used to selling products with basic service contracts attached, and their traditional contacts at target firms were too low in the hierarchy to make decisions about multimillion-euro solutions contracts. Third, much of the knowledge around the service production had to be sourced externally—which proved time- and resource-intensive. The board member responsible for services was frank about the mistakes: “We wanted too much too soon, and we simply weren’t ready for it.”

Over the past three years we have investigated how manufacturers in business markets can develop profitable services. We conducted in-depth studies of 20 industrial companies operating in a broad variety of product markets, including adhesives, automotive coatings and glass, bearings, cables and cabling systems, energy generation and distribution, onboard electronics for civilian and military aircraft, printing presses, and specialty chemicals. Every firm was among the top three in its industry, and the managers we interviewed were all key decision makers, frequently executive board members. Throughout the process we interviewed multiple people in different business units and country organizations. We went on to have discussions with more than 500 B2B managers in a series of executive workshops; these complemented the insights from our interviews.

As our research process unfolded, we uncovered a wide variation in revenues and profits from service offerings. One group of companies derived up to half of their sales from services, and margins up to eight times those on product sales. A second group reported a very different experience: Although those companies had made significant investments in the development of services, customers proved unwilling to pay, revenues were low, and the companies barely broke even. Comparing the two groups, we were able to identify clear differences in the ways they had developed their service businesses.

Like the technology company in our example (which has since turned itself around in this respect), companies unsuccessful at developing service businesses have tried to transform themselves too quickly. Successful firms begin slowly, identifying and charging for simple services they already perform and using those to build enthusiasm for adding more-complex ones. They then standardize their delivery processes to be as efficient as their manufacturing ones. As their services become more complex, they ensure that their sales force capabilities keep pace. Finally, management switches its focus from the company’s processes and structures to the nature of customers’ problems, the opportunities that customers’ processes afford for inserting new services, and the new capabilities needed to deliver those services. (See the exhibit “The Path to Profits in Industrial Services.”) Let’s take a closer look at those four steps.

1: Recognize That You Are Already a Service Company

Many product companies are in the business of delivering services; they just haven’t realized it yet. These companies are missing out on the revenues they could generate simply by charging for what they already do. The first step in expanding a service capability is to make both the company’s managers and its customers aware of the value provided by existing services.

Take the pharmaceuticals giant Merck. In one of the company’s product categories, its French subsidiary had a long-standing tradition of including delivery in its product price for customers. Because specialty chemicals are high in value but low in volume, Merck had never questioned its responsibility to assume transportation and insurance costs, which represent a tiny fraction of the amount invoiced. And because no shipping costs were itemized, customers were unaware of the value Merck provided. A few years ago the company put this tradition to a test: Managers randomly selected 100 customers and changed the terms of delivery from “shipping and insurance paid” to “ex works,” though the bottom line barely changed. Ninety percent of those customers readily paid the additional charges, seemingly without noticing. Of the 10% that recognized the change, only half insisted on returning to the prior terms of payment. Merck re-established the original terms for those customers—but it had succeeded in managing the transition from “free to fee” for the other 95%. Once the new billing terms had been rolled out to the entire customer base in France, Merck’s profitability in this product category improved significantly, even though the cost to customers was minor.
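The arithmetic behind Merck's test is worth spelling out; this short sketch uses only the figures in the anecdote above:

```python
# Merck's "free to fee" pilot: 100 randomly selected customers moved
# from "shipping and insurance paid" to "ex works" billing.
customers = 100
noticed = customers * 10 // 100  # 10% spotted the change
reverted = noticed // 2          # half of those insisted on the old terms

transitioned = customers - reverted
print(f"Customers kept on the new terms: {transitioned}%")  # 95%
```

In other words, a single small pilot showed that 95 out of 100 customers would accept the new billing terms, which is what justified rolling them out across the whole French customer base.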

Switching services from free to fee clarifies the value of the assets involved for both managers and customers. The French gas provider Air Liquide also took this tack. The company had traditionally purchased millions of cylinders in which to transport small quantities of gas to industrial customers. It charged customers only for the gas delivered, supplying the cylinders free. Consequently, customers took no special notice of how many cylinders they had accumulated, and the company was stuck with considerable floating inventory. Starting in the mid-1990s, however, Air Liquide charged a small rental fee of €5 to €7 per cylinder per month. Not only did this turn a profit drain into a profit engine, generating several hundred million euros a year in fees, but it created customer awareness. Once the gas cylinders had a price tag, customers wanted to optimize their inventories. As a result, Air Liquide was able to sharply reduce its floating inventory, transferring cylinders from customers that didn’t need them to customers that did.

Large companies can uncover profitable existing service offerings simply by comparing billing practices across their operating units. We found that at Nexans, the world’s leading cable manufacturer, subsidiaries in some countries were charging customers a fee for cable drums, whereas in other countries they were not. Nexans holds large inventories of high-voltage cable in order to ensure rapid delivery; applying the fee across the whole company represented a significant opportunity to recoup its investment in working capital.

Smart companies will put a senior executive in charge of looking at practices in other business lines to uncover hidden services. He or she can then start crafting a forward-looking strategy for services on the backs of early wins. By giving the process an owner early on, companies can ensure that their service initiatives are not just opportunistic ideas developed by individual business units but part of a strategy to capture best practices and roll them out across the organization. Schneider Electric, a French electrical-equipment company, chose this route. Early in its move toward services it created a strategic deployment and services division whose executive vice president was charged with auditing existing services across the organization and then creating a coherent strategy for offering new services. An executive board member with 20 years of experience in the company was named to the post.

2: Industrialize the Back Office

Manufacturers are accustomed to stable and controllable production processes. But when they venture into value-added services, they may find front-office service customization turning into a delivery-costs nightmare. Unless they can prevent this, their service margins will suffer. One of the managers we interviewed explained, “To earn money in services, you need to industrialize the back office. Companies like GE and IBM really are process freaks.”

The German printing-equipment maker Heidelberger Druckmaschinen (Heidelberg) has experienced a back-office dynamic that can occur when manufacturing companies move into services. In France its customers currently choose one of two ways to maintain their printing equipment: pay-as-you-go, in which case Heidelberg sends an invoice for parts and labor each time a field technician responds to a customer’s call; or a full-service contract, in which case customers have access to a help desk, remote monitoring, and preventive maintenance. The trouble is that full-service customers call for assistance twice as often as pay-as-you-go customers. And because those customers have no reason to monitor costs, Heidelberg’s field technicians replace spare parts on their printing presses much more readily, make on-site visits to them much more frequently, and are likelier to schedule those visits poorly or to forget essential equipment, necessitating yet more visits. (The technicians, for their part, tend to assume that all costs are covered by the hefty full-service fee.) All this erodes Heidelberg’s margins on the full-service contracts, making them less profitable than pay-as-you-go.

There are three ways companies can prevent delivery costs from eating up their service-offering margins. First, they can build flexible service platforms that meet customers’ varying needs while relying on common delivery processes, much as good manufacturers create distinct product models based on standard product platforms. One of our interviewees explained, “We offer six different types of maintenance contracts. Eighty percent of customers fit into one of these boxes. The customer can look at these offers and see which of them best matches his situation.”

Second, the successful firms in our study continually monitored the costs of their processes to identify profit drains. Air Liquide appointed an executive with specific responsibility for trying to standardize services in the organization. Backed by top management and supported by an internal task force, this executive taught managers and frontline employees in operational units how to systematically take costs out of service production and delivery processes while making sure that customers still got what they expected. For example, Air Liquide regularly mailed gas-consumption reports to its customers. But when the standardization team reviewed this practice, they found that some customers made no use of the information. By discontinuing that part of the service package for those customers, Air Liquide was able to reduce its costs to serve selected customers while maintaining the perceived value of the service provided.

Third, successful companies are quick to exploit process innovations made possible by new technologies. The Swedish bearings manufacturer SKF helps customers extend the service life of their equipment by enabling off-site access to an electronic monitoring tool via a secure internet browser. Vibration-analysis data, for instance, can alert a customer early about potential machine failure. Such smart services allow the company to perform first-level maintenance without deploying field technicians for on-site visits.

3: Create a Service-Savvy Sales Force

As long as a company considers services to be an add-on to existing products, its sales force—with some training, of course—will probably be able to handle both product and service sales. But if companies are to move away from straightforward product-related services into more complex customer solutions, managers must take a new look at sales management strategies. Services require longer sales cycles, and the sales process is often more complex and strategic, meaning that decisions are made high up in the customer’s hierarchy.

Failure to recognize this challenge got Heidelberg into trouble. In the early 2000s the company started offering its customers remote monitoring of their printing presses—to be sold as an add-on, because the service could save customers many hours of expensive machine downtime. On average, one hour of downtime in a print shop can cost several hundred euros; given that the lead time for delivering spare parts to a customer’s site is typically 24 hours, a single breakdown may cost thousands. Heidelberg priced its new offering significantly below this amount, but customers did not bite. The problem was that although the company’s sales force and field technicians were well equipped to promote standard service contracts, they weren’t up to explaining more complex customer solutions—largely because they were accustomed to discussing terms with people in procurement (who tend to focus on cost per part or per service) or people in charge of in-house maintenance (who might view a service offering as a threat to their jobs). What Heidelberg needed was a sales force that felt comfortable talking to production managers—people who would see the implications of the new service for the total cost picture.
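The downtime arithmetic the article gestures at can be made concrete. In this sketch, the €300-per-hour downtime cost and the number of breakdowns avoided per year are illustrative assumptions (the article says only "several hundred euros" per hour); the 24-hour spare-parts lead time is from the article:

```python
# Illustrative break-even for a Heidelberg-style remote-monitoring offer.
# Assumed figures: per-hour downtime cost and breakdowns avoided per year
# are hypothetical; the 24-hour spare-parts lead time is from the article.
downtime_cost_per_hour = 300.0  # euros; article says "several hundred"
lead_time_hours = 24            # typical spare-parts delivery delay

cost_per_breakdown = downtime_cost_per_hour * lead_time_hours
print(f"Cost of one breakdown: about EUR {cost_per_breakdown:,.0f}")

# If monitoring prevents even two breakdowns a year, any price below
# that saving should look attractive to a production manager.
breakdowns_avoided = 2
break_even_price = cost_per_breakdown * breakdowns_avoided
print(f"Break-even annual price: about EUR {break_even_price:,.0f}")
```

This is exactly the total-cost case that a production manager would grasp and a procurement contact, focused on cost per part, would not — which is why the sales force Heidelberg had was the wrong one for the offering.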

Product salespeople are often actively inimical to change, as Air Liquide quickly discovered when it started offering services. Top salespeople argued that margins from the firm’s traditional product business were already high enough and that the company still had room for growth in its existing product markets. Services were labor-intensive, would tie up considerable financial resources, and could harm product sales if the company failed to deliver on its promises.

The successful manufacturers we studied all took pains to retrain their sales forces. In a major overhaul of its sales organization, Schneider Electric switched the focus of its salespeople from cost-plus pricing to value-based pricing when promoting its services. This involved educating them about how their customers’ managers justified decisions internally, so that the salespeople could help the managers they dealt with take more responsibility for shaping decisions. But even after extensive training, companies may find that they have little choice but to fire and hire; a few in our study replaced 80% of their existing sales forces. Even those that managed to retain a significant proportion of their people still needed some specialized newcomers. Air Liquide hired several agri-food engineers to develop sales expertise for services in food-processing industries across Europe. The French forklift manufacturer Fenwick recruited specialists at the corporate level and in each of its regional sales offices to promote services attached to tri-directional forklift trucks. A good place to find this talent internally is among service-support staff members.

Most of the successful companies we studied made some kind of distinction between product and service salespeople. At GE Medical Services, for example, product salespeople are “hunters,” expected to go out and get orders for new equipment. Service salespeople are called “farmers”; GE expects them to grow their relationships with customers and sell services over time. Splitting the sales force is not always a perfect solution, however. Xerox has been very successful in establishing a solutions business in which the focus is not on providing office equipment but, rather, on helping clients manage their document flow. The organization nevertheless continues to do considerable business the old way, by selling printers and copiers and slapping simple service contracts on them. The two units end up competing for midsize customers: Whichever unit is first to get a lead pursues the opportunity vigorously, not wanting the other to get involved—even if it might be more suitable from a companywide perspective.

It almost goes without saying that a move to services will fail unless salespeople are financially motivated to promote them instead of focusing solely on product sales. Such a shift is difficult when product revenues are much higher than service revenues. For example, if Air Liquide supplies €500,000 worth of gas to an individual customer, the related services may be invoiced at only a few thousand euros. If objectives for service and product sales are not properly coordinated, their sales forces may even compete. When Air Liquide started to promote inventory management services to assist customers in optimizing the number of gas cylinders they had on hand, the company’s product sales force resisted out of fear of losing its traditional revenues. Management had to explain that although the new offering would indeed enable customers to reduce their on-site inventories, it would also help to lock in customers over the long run and to grow Air Liquide’s share of their purchasing overall. To reduce conflict between the two sales forces, Air Liquide created a double credit system: For each closed deal, product and service salespeople would get the same commission.

Finally, selling services requires that companies develop tools to document and communicate the value those services create for customers. These tools range from customer case studies and white papers to sophisticated simulation software. A good example is Documented Solutions, a tool developed by SKF over the past 15 years. Conceived by the company’s U.S. subsidiary, it helps SKF salespeople worldwide to identify and explain to customers how much they can save by using the company’s services. The tool is linked to a database that compares the best practices of SKF customers around the globe. It also allows customers to calculate their return on investment.

4: Focus on Customers’ Processes

Once manufacturers have learned how to sell and deliver services in a cost-efficient way, they can move toward addressing customers’ problems and processes holistically. This means shifting focus from their own processes, incentives, and structures to those of the customer.

Fenwick found a good way to do this: It installed data-collecting sensors and radio-frequency identification technology in its forklifts, to amass valuable information about how customers used its equipment. This knowledge became the basis for developing new service offerings, including access control and remote monitoring, asset management, a customer-specific intranet—Fenwick Online—and even a school for forklift drivers. Today 50% of the company’s €500 million in revenues comes from services developed over the past 15 years.

When manufacturers move beyond ancillary product-related services to complex offerings, they need to revisit the basis for their pricing and the way they measure success. Product-oriented companies typically focus on input-based indicators—hours of equipment use and numbers of units sold. As long as their services are discrete and productlike and performance risk is limited, that focus is entirely appropriate. In such cases, services are viewed as products in both the back and front offices—meaning that their input costs take center stage. But above that level they require companies to focus on problem solving from the customer’s perspective. When a company commits to solving a customer’s problem, it assumes a much higher risk: The goal is to achieve a certain output, and the degree to which it is achieved is the basis for compensation. This was true for all the successful companies we studied. Clearly, pricing then becomes much more complex. The French jet-engine maintenance company Snecma Services, for example, writes service contracts that guarantee air carriers a certain number of flight hours for their jet engines, however much servicing time that requires. Similarly, Hilti, headquartered in Liechtenstein, promotes an “all-round hassle-free” service package for the power tools it leases to the construction industry. The company’s customers don’t have to buy, say, a drill for their operations; they “pay by the hole” and are guaranteed a drill on the construction site.

Once executives have redefined the value proposition around solving customers’ problems, they may quickly discover a lack of the expertise required to tackle the processes involved. The Pittsburgh-based industrial-coatings specialist PPG offered to take over the paint shop in Fiat’s Torino automotive plant. Under the new deal, Fiat would pay PPG according to the number of cars flawlessly painted rather than the amount of industrial paint bought. PPG had to learn how painting robots function in order to control the outcome of its painting processes. Similarly, when SKF started developing services around its core product—bearings—the company studied how bearings might break down in its customers’ equipment and then acquired the know-how to help manage such breakdowns. Through internal development and acquisition over the past decade, SKF has become a world leader in condition monitoring, industrial sealing, lubrication systems, and vibration analysis.

• • •

Services can be a powerful way to lock in customers and increase their switching costs. As one manager at Air Liquide put it, “The more we enter into a customer’s business, the more the customer forgets how things are done.” At the same time, services represent an excellent route for acquiring new product business. Fenwick managers told us, “Whenever we can’t directly break into a customer account with a product, we’ll offer to provide services on a competitor’s product.” Finally, the relationship developed by providing services positions manufacturers to anticipate future business. But these considerable benefits can’t be achieved overnight. The four steps we’ve outlined will help to speed the process and boost companies’ profits.

Are You Ready for Global Turmoil?

Recent headlines -- Citigroup’s layoffs, Iceland’s sudden downturn, worldwide food shortages, etc. -- suggest serious global turmoil is ahead. All companies, and multinationals in particular, should be prepared to withstand it.

Unfortunately, many are not. Traditional forecasting and budgeting systems produce linear projections insufficient for risky, uncertain times. What’s needed is scenario planning, where companies stress-test their strategies and processes against a wide range of future scenarios to identify their vulnerabilities. Thus informed, the companies can adjust them to be more responsive and resilient.

But scenario planning often takes a backseat to more immediate concerns: developing new products, fighting an aggressive competitor, meeting earnings targets. So when large-scale external events hit, their impact is seismic.

If your company hasn’t engaged in scenario planning, can you do anything to increase your resilience in the near term?

Yes. Flexibility is the key to dealing with turmoil. You don’t want to be overcommitted to one strategy or one supplier. If your company has stuck its neck way out, as some financial firms did by overbetting on subprime mortgages, then unwind your commitments or hedge your bets. If your company is highly reliant on suppliers into whose operations you have little visibility, get some solid backup options in place. Otherwise, your company could find itself in the unhappy company of Baxter International, Mattel, and numerous pet food companies.

If you have engaged in scenario planning, should recent events cause you to go back and revise some of your assumptions?

Yes, but constant revision is standard best practice, even when the external changes are positive. After the Berlin Wall came down, Royal Dutch/Shell’s scenario experts recognized that their strategies would need to account for the emerging geo-political world order: a single superpower, the rise of China, and the opening of former Soviet countries for exploration.

Building resilience and adaptability is a multipronged effort:

  • Use scenario planning to improve your organization’s insight and foresight about the future (Shell, Sprint, and the World Bank excel at this).
  • Devise adaptive strategies with sufficient flexibility to deal with the unexpected, including future-proofing your plans using real options thinking, which views strategic investments such as building a plant or investing in R&D as a test rather than a commitment. How future uncertainties play out determines whether scaling up or withdrawing is called for. BP has been strong at such optionality thinking, as has Google, which seeks to develop a broad portfolio of “Googlettes” that it views as call options on the future.
  • Implement a dynamic monitoring system to track the external world in real time, as well as to gauge internal progress on executing strategies and plans. Such a system compensates for the human tendency to overreact to surface features, such as a spike in sales revenue or a drop in interest rates, and underreact to signs of deeper, more fundamental changes (P&G, IBM, and NASA have systems that do this well).
  • Improve your organization’s agility in terms of structure, processes, and rewards to cope better with the unknown. For example, fabric maker WL Gore dispenses with formal titles to minimize bureaucracy and hierarchy. As a result, information flows freely and quickly across organizational boundaries to allow fast action. Microsoft’s structure in the mid-1990s allowed it to turn on a dime in response to Netscape’s download threat.
  • Enhance your information and decision-making procedures, drawing on external networks to remain vigilant. Apple’s forays into music and telephony via its iPod and iPhone are the result of savvy market intelligence and decisive leadership by Steve Jobs and others that balance analytical tools with seasoned intuition. Eli Lilly and Deutsche Bank use internal prediction markets to ensure that all information, especially what is discussed informally around the water cooler, reaches senior leaders.
  • Foster strong leadership at multiple levels in the organization to deal better with crises and other unexpected circumstances (GE and McKinsey excel here).

As is often attributed to Darwin, it is not the strongest or the smartest who survive but those most adaptive to change.

Faster Wireless Networks

Wednesday, May 21, 2008

Sending descriptions of data could be more efficient than sending the data itself.

By Duncan Graham-Rowe

The role of computer networks would appear to be fairly straightforward: to ferry data from one point to another. But a novel wireless-network protocol developed for the U.S. military breaks with this tradition by sending not the data itself but rather a description of the data. In simulations, a network using the protocol was five times more efficient than a traditional network. Within the next year, the U.S. Defense Advanced Research Projects Agency (DARPA) will test the protocol in field trials at Fort A. P. Hill in Virginia.

The protocol is part of a project to create a new generation of mobile ad-hoc networks--self-configuring networks of mobile wireless nodes--that will enable faster and more reliable tactical communications between military personnel and vehicles, says Greg Lauer, section head for advanced network systems at BAE Systems in Burlington, MA, which helped develop the protocol for DARPA.

But the project also demonstrates the potential of a new and exciting field called network coding, says Muriel Médard, an associate professor of electrical engineering and computer science at MIT, who collaborated on the project with BAE Systems.

Network coding is a relatively young field, though there has been some interest in using it to make the Internet more efficient. Microsoft's peer-to-peer test bed Avalanche, for example, was designed to use network coding to deliver wide-scale on-demand TV and software patches without causing bottlenecks. But problems specific to mobile wireless networks are particularly amenable to solutions that use network coding.

In many ways, the analogy between the Internet and a superhighway is apt, Médard says. A lot of networks are built on a transportation model, with data traveling from address to address. "Data is transported very much like you would transport any other goods," says Médard. The trouble is that when you get a traffic jam, things grind to a halt.

"In a traditional network, you break information into packets and forward them between nodes," says Lauer. If a packet doesn't reach its destination, it will be sent and re-sent until its arrival is confirmed. But in some types of network, such as mobile wireless networks, there is a fairly high chance that the packets won't be received because of interference or limited bandwidth, or because a mobile node has wandered out of range or been destroyed. If nodes keep transmitting data until they receive confirmation, Lauer says, a bottleneck can result.

With network coding this is not an issue. "You take a group of packets and combine them," says Lauer. The result is a single packet that contains traces of information from each of the original packets. This hybrid packet is then sent to one or more additional nodes.

By itself, the hybrid packet just looks like gobbledegook, says Médard. But it includes a small amount of data that acts as a clue to its contents. A single packet won't normally contain enough clues to allow its data to be reconstructed. But as long as the destination node receives enough independent packets from enough different sources, it should be able to recover all the original data, Médard says.

The advantage of this is not only that you are using less bandwidth to send information, and thus avoiding bottlenecks, but also that you don't have to keep track of which node sent what, says Médard.

Surprisingly, even the means of combining data into a single packet at the source node doesn't have to be shared. If the packets contain enough clues, the destination nodes can reconstruct the contents of packets that have been created randomly. "You're not sending data, per se," says Lauer. "You're sending pieces of algorithms for assembling data."
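Lauer's description matches what the research literature calls random linear network coding. The following is a minimal sketch in Python, not the actual DARPA/BAE protocol: it works over GF(2) on three made-up five-byte packets, with the coefficient vector playing the role of the "clue" and Gaussian elimination recovering the originals once enough independent mixtures arrive.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, coeffs):
    """Mix equal-length source packets into one coded packet.
    `coeffs` is the GF(2) coefficient vector -- the small 'clue'
    shipped alongside the payload. In a real coder the coefficients
    would be drawn at random at each node."""
    payload = bytes(len(packets[0]))  # all-zero start
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor_bytes(payload, p)
    return coeffs, payload

def decode(coded, n):
    """Recover the n originals by Gaussian elimination over GF(2),
    assuming the received coded packets are linearly independent."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(n):
        # Find a row with a 1 in this column and move it into place.
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        # Clear this column from every other row.
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           xor_bytes(rows[i][1], rows[col][1]))
    return [rows[i][1] for i in range(n)]

# Toy run: three source packets, three independent mixtures.
src = [b"alpha", b"bravo", b"charl"]
coded = [encode(src, c) for c in ([1, 1, 0], [0, 1, 1], [1, 1, 1])]
assert decode(coded, 3) == src
```

Note that each coded payload individually is "gobbledegook" -- it is the XOR of several packets -- yet any three independent mixtures suffice to reconstruct all three originals, regardless of which nodes sent them.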

Network coding is an offshoot of a field called information theory, which has already been put to use in data-compression software. But it's only relatively recently that people have started looking at how network coding could be used to send data. "It turns out it can be extremely powerful," Médard says.

As part of a program funded by DARPA, BAE and MIT used network-coding principles to develop protocols that could be used to send information to multiple destinations. In a conventional network, each node would act like a router, steering specific information toward specific destinations. But in BAE and DARPA's network, all nodes broadcast all information to all other nodes.

In the DARPA simulations, where a tactical mobile network was emulated on an Ethernet network, the researchers experimented to see how much they could reduce the bandwidth of the network while maintaining the same standard of communication. The simulations involved all forms of military data, from voice and video streams to tactical data, and all kinds of conditions, such as interference and poor connectivity.

The researchers found that they could reduce the bandwidth to just one-fifth of that required by a conventional network, with no loss of quality. The upcoming field tests, on the other hand, will investigate whether the protocols can be used to send more data over existing radio networks than standard protocols can, says Lauer.

Network coding is an exciting new area that has attracted a lot of interest, says Christina Fragouli, an expert in network coding at the École Polytechnique Fédérale de Lausanne in Switzerland. And mobile wireless networks are precisely the kind of application that network coding can help with, she says. "It's a very difficult kind of network to deal with," she says, because of its intrinsic interference problems and limited bandwidth.

The protocols have also been tested on a standard Wi-Fi network as a way to stream video, and the results were very promising, says Médard. Further down the line, network coding could help perform security functions. "There are ways to tell if someone has messed around with your data," Médard says.

Alarming Open-Source Security Holes

Tuesday, May 20, 2008

How a programming error introduced profound security vulnerabilities in millions of computer systems.

By Simson Garfinkel

Back in May 2006, a few programmers working on an open-source security project made a whopper of a mistake. Last week, the full impact of that mistake was just beginning to dawn on security professionals around the world.

In technical terms, a programming error reduced the amount of entropy used to create the cryptographic keys in a piece of code called the OpenSSL library, which is used by programs like the Apache Web server, the SSH remote access program, the IPsec Virtual Private Network (VPN), secure e-mail programs, some software used for anonymously accessing the Internet, and so on.

In plainer language: after a week of analysis, we now know that two changed lines of code have created profound security vulnerabilities in at least four different open-source operating systems, 25 different application programs, and millions of individual computer systems on the Internet. And even though the vulnerability was discovered on May 13 and a patch has been distributed, installing the patch doesn't repair the damage to the compromised systems. What's even more alarming is that some computers may be compromised even though they aren't running the suspect code.

The reason that the patch doesn't fix the problem has to do with the specifics of the programmers' error. Modern computer systems employ large numbers to generate the keys that are used to encrypt and decrypt information sent over a network. Authorized users know the right key, so they don't have to guess it. Malevolent hackers don't know the right key. Normally, it would simply take too long to guess it by trying all possible keys--like, hundreds of billions of years too long.

But the security of the system turns upside down if the computer can use only a limited number of different keys--say, a million. For the authorized user, the key looks good--the data gets encrypted. But the bad guy's software can quickly make and then try all possible keys for a specific computer. The error introduced two years ago makes cryptographic keys easy to guess.

The error doesn't give every computer the same cryptographic key--that would have been caught before now. Instead, it reduces the number of different keys that these Linux computers can generate to at most 32,767, depending on the computer's processor architecture, the size of the key, and the key type.
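To see why such a small keyspace is fatal, consider a toy generator (a hypothetical stand-in, not the actual OpenSSL code) whose only remaining entropy is the process ID. An attacker can precompute the key for every possible PID in well under a second--which is essentially what the downloadable key files on Moore's website do:

```python
import hashlib

def toy_keygen(pid):
    """Hypothetical stand-in for the broken generator: with the other
    entropy sources commented out, the 'key' depends only on the PID."""
    return hashlib.sha256(pid.to_bytes(2, "big")).hexdigest()

# A victim generates a "secret" key in a process with some PID.
victim_key = toy_keygen(12345)

# The attacker precomputes the key for every possible PID (0..32767)...
rainbow = {toy_keygen(pid): pid for pid in range(32768)}

# ...and recovers the victim's secret with a single table lookup.
assert rainbow[victim_key] == 12345
```

Real keys are RSA or DSA key pairs rather than hashes, and generating 32,768 of them takes hours instead of milliseconds--but that work needs to be done only once, for everyone, which is exactly why precomputed key files are so damaging.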

Less than a day after the vulnerability was announced, computer hacker HD Moore of the Metasploit project released a set of "toys" for cracking the keys of these poor Debian and Ubuntu computer systems. As of Sunday, Moore's website had downloadable files of precomputed keys, just to make it easier to identify vulnerable computer systems.

Unlike the common buffer overflow bug, which can be fixed by loading new software, keys created with the buggy software don't get better when the computer is patched: instead, new keys have to be generated and installed. Complicating the process is the fact that keys also need to be certified and distributed: the process is time consuming, complex, and error prone.

Nobody knows just how many systems are impacted by this problem, because cryptographic keys are portable: vulnerable keys could have been generated on a Debian system in one office and then installed on a server running Windows in another. Debian is a favored Linux distribution of many security professionals, and Ubuntu is one of the most popular Linux distributions for general use, so the reach of the problem could be quite widespread.

So how did the programmers make the mistake in the first place? Ironically, they were using an automated tool designed to catch the kinds of programming bugs that lead to security vulnerabilities. The tool, called Valgrind, discovered that the OpenSSL library was using a block of memory without initializing the memory to a known state--for example, setting the block's contents to be all zeros. Normally, it's a mistake to use memory without setting it to a known value. But in this case, that unknown state was being intentionally used by the OpenSSL library to help generate randomness.

The uninitialized memory wasn't the only source of randomness: OpenSSL also gets randomness from sources like mouse movements, keystroke timings, the arrival of packets at the network interface, and even microvariations in the speed of the computer's hard disk. But when the programmers saw the errors generated by Valgrind, they commented out the offending lines--and removed all the sources of randomness used to generate keys except for one, an integer called the process ID that can range from 0 to 32,767.

"Never fix a bug you don't understand!" raved OpenSSL developer Ben Laurie on his blog after the full extent of the error became known. Laurie blames the Debian developers for trying to fix the "bug" in the version of OpenSSL distributed with the Debian and Ubuntu operating systems, rather than sending the fix to the OpenSSL developers. "Had Debian done this in this case," he wrote, "we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to 'add value' by getting in between the user of the software and its author."

Perhaps more disconcerting, though, is what this story tells us about the security of open-source software--and perhaps about the security of software in general. One developer (whom I've been asked not to single out) noticed a problem, proposed a fix, and got the fix approved by a small number of people who didn't really understand the implications of what was being suggested. The result: communications that should have been cryptographically protected between millions of computer systems all over the world weren't really protected at all. Two years ago, Steve Gibson, a highly respected security consultant, alleged that a significant bug found in some Microsoft software had more in common with a programmer trying to create an intentional "back door" than with yet another Microsoft coding error.

The Debian OpenSSL randomness error was almost certainly an innocent mistake. But what if a country like China or Russia wanted to intentionally introduce secret vulnerabilities into our open-source software? Well concealed, such vulnerabilities might lie hidden for years.

One thing is for sure: we should expect to discover more of these vulnerabilities as time goes on.

Simson Garfinkel is an associate professor at the Naval Postgraduate School in Monterey, CA, and a fellow at the Center for Research on Computation and Society at Harvard University.

DNA Sequencing in a Snap

An innovative approach could target hard-to-sequence areas.

By Emily Singer

A novel sequencing technology being developed by a Massachusetts startup allows scientists to take photographs of the sequence of a DNA molecule. William Glover, president of ZS Genetics, based in North Reading, MA, says that his approach will allow scientists to read long stretches of DNA, enabling the sequencing of hard-to-read areas, such as highly repetitive regions in plants and some parts of the human genome. Longer sequences also allow scientists to distinguish between maternal and paternal chromosomes, which might have important diagnostic applications.

Scientists at a recent sequencing conference in San Diego--where details of the technology were presented for the first time--were intrigued by the approach because it is totally different than even the newest methods on the market. "It's surprising and potentially very powerful," says Vladimir Benes, head of the Genomics Core Facility at the European Molecular Biology Laboratory, in Germany.

The cost of DNA sequencing has plummeted since a working draft of the human genome was completed in 2001. Most of the newest technologies currently in use generate very short sequences, about 30 to 150 base pairs, which are then stitched together with special software. But this method doesn't always capture all the information in the genome, and some parts of the genome are difficult to sequence this way, says Glover.

Seeing the sequence: This image shows a cluster of DNA molecules (black strands) that have been synthesized using bases that are specially labeled to be visible under an electron microscope. Scientists are using this technique to develop a novel sequencing technology.
Credit: ZS Genetics

ZS Genetics is a relative newcomer to the field and uses an approach vastly different than any other: electron microscopy. Glover predicts that by next year, the company's technology will be able to generate readable lengths of DNA that are thousands of base pairs long, and he believes that ZS Genetics' sequencing method will improve by a factor of 10 in the next couple of years, making the pieces even easier to assemble. The company was recently accepted as one of the teams in the Archon X Prize for Genomics, a $10 million award for the first privately funded team that can sequence 100 human genomes in 10 days.

"Any technology that can bring the read length to the 1,000 base-pairs range will definitely, at least for de novo sequencing, represent a major breakthrough," says Benes. He says that the approach might be particularly useful for sequencing the genomes of plants, which often have highly complex genomes littered with repetitive sequences that are difficult to assemble computationally.
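A toy example (with invented sequences, not ZS Genetics' data) makes the repeat problem concrete: when a repeat is longer than the read length, two different genomes can produce exactly the same collection of reads, so no assembly software can tell them apart--while reads long enough to span the repeat restore the distinction.

```python
from collections import Counter

def reads(genome, k):
    """All length-k reads from a genome, one per position, as a multiset.
    (Real shotgun coverage is redundant, which doesn't change the point.)"""
    return Counter(genome[i:i + k] for i in range(len(genome) - k + 1))

REPEAT = "GGGGG"
# Two genomes that differ only in the order of the segments
# sandwiched between copies of the repeat.
g1 = "AA" + REPEAT + "CC" + REPEAT + "TT" + REPEAT + "AA"
g2 = "AA" + REPEAT + "TT" + REPEAT + "CC" + REPEAT + "AA"

# Reads shorter than the repeat: identical read sets -- unassemblable.
assert reads(g1, 4) == reads(g2, 4)

# Reads long enough to span a repeat copy: the genomes are distinguishable.
assert reads(g1, 8) != reads(g2, 8)
```

Scaled up, this is exactly why Benes singles out repeat-littered plant genomes: with 30-to-150-base reads and repeats running to thousands of bases, many orderings of the intervening segments are equally consistent with the data.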

At a width of 2.2 nanometers, DNA is invisible under an average light microscope. But electron microscopes, which detect the difference in charge between atoms, have subnanometer resolution. While the sequence of natural DNA lacks enough contrast to be resolved with electron microscopy, Glover and his colleagues developed a novel labeling system to make the molecules more visible.

Researchers synthesize a new complementary strand of the molecule to be sequenced using bases--the letters that make up DNA--labeled with iodine and bromine. The labeled bases appear as either large or small dots under the electron microscope, allowing scientists to read the sequence. (Three different labels will be required to read the sequence of the four bases found in DNA. Three of the bases will have different labels; the fourth will simply remain unlabeled.)

The substrate on which the newly labeled molecules are imaged is made using semiconductor fabrication techniques. Scientists generate silicon wafers with an 11-nanometer-thick window, which is thin enough for the electron beam of the microscope to discern the DNA molecule from the substrate. ZS Genetics is also working on making even thinner wafers to boost resolution of the image.

DNA has a tendency to curl up into a tangled mass, so one of the biggest challenges has been untangling that ball into linear strands that can be read. Researchers first flow fluid through a microfluidic device with tiny channels. That device fits on top of the DNA-coated wafer. The force of the flow stretches the DNA molecules, which then stick to the silicon. An electron beam is shot through the wafer, and a camera captures the image from the other side. "The wafer is the major proprietary consumable," says Glover. "It will be dirt cheap."

When stretched, the DNA looks like a ladder with the bases forming the rungs. So far, the company has released images of a 23-kilobase piece of DNA using a single type of labeled base. Glover says that he and his team have also done multilabel sequencing, although he declined to give additional details.

Still, the technology has a ways to go before it's market ready. "Lots of proof of principle methods can work in R&D, but bringing it to [market] is not trivial," says Benes. Glover aims to have a prototype this summer that scientists can test, and a faster commercial system next year. He adds that because most of the system relies on existing technologies, it will be easy and inexpensive to upgrade the system with new cameras and software.

Longer reads will allow scientists to look at collections of genetic variations that have been inherited together, known as haplotypes. This kind of analysis can determine if a particular genetic variation has been passed down from the individual's mother or father. Recent research suggests that in some cases, maternal or paternal inheritance can affect the severity of a disease, a phenomenon that may be more common than previously thought.