Saturday, March 14, 2009

Sustainability


Footnote: Thanks to Denise and co. for lunch!

Friday, March 13, 2009


Entrepreneurialism and immigration

Let them in

Mar 13th 2009
From Economist.com

Why immigration is good for America's business


FOR all its current economic woes, America remains a beacon of entrepreneurialism. Between 1996 and 2004 an average of 550,000 small businesses were created every month. One factor is a fairly open immigration policy. Vivek Wadhwa of Duke University notes that 52% of Silicon Valley start-ups were founded by immigrants, up from around a quarter ten years ago. But since 2001 the threat of terrorism and rising xenophobia have made immigration harder. Today more than 1m people are waiting to be granted legal status as permanent residents. Yet only 85,000 visas a year are allocated to the sort of skilled workers that might go on to found successful businesses of their own.



Making the Shift: Top Five Reasons to Move from an MES to MOM

MOM is a new alternative to managing and improving factory operations and execution in real time -- one that is entirely focused around the specific actions that deliver improved profitability.

When manufacturing execution system (MES) applications became popular in consumer packaged goods (CPG) plants and factories around the globe, manufacturing executives thought they were getting systems that would help them monitor the equipment and production on the factory floor.

For most, that's exactly what an MES delivered. However, because few executives challenged the assumption that an MES would also translate into improved profitability, these applications failed to deliver what CPG companies actually needed.

A recent AMR Research survey of 100 CPG and food/beverage manufacturers revealed that although the use of an MES often leads to tactical operational gains through improved data measurement and collection, it has failed to make a significant impact on operational efficiency, overall profitability and executives' confidence in the accuracy of their plant KPIs -- a lack of confidence that continues to hamper decision making and performance improvement.

As global capital markets remain constrained and consumers continue to trade down to the value end of the product mix, CPG executives are increasingly looking at the plant network as a primary source of margin protection across their product mix. Through this process, many are finding that MES applications simply cannot provide answers to critical questions like "What actions on the production lines will generate savings?" "What SKUs can we relocate?" or "How do we reduce changeover times?"

As a result, a growing number of manufacturers have been making a shift from MES applications to manufacturing operations management (MOM). MOM is a new alternative to managing and improving factory operations and execution in real time -- one that is entirely focused around the specific actions that deliver improved profitability.

Let's take a look at the top five reasons companies continue to head in this new direction.

#1: MOM Goes Far Beyond Mere Data Collection and Business Process Automation

A byproduct of the "if you can't measure it, you can't fix it" era, MES applications have been deployed under the assumption that when you have more data, you can uncover and capitalize on more improvement opportunities.

But what many executives are now realizing is that an MES, although an excellent tool for data collection from machinery, does not in any way improve a process or fix an inefficiency problem. It simply automates the collection of the data related to the problem -- or in other words, it reveals the result of the problem's existence.

Simply collecting data does not create a framework where the workforce can take immediate action to resolve the problem or to improve the outcome. A manufacturer already knows when there are problems; an MES simply helps them recognize this faster. It does not actually help solve the problem.

Proponents of MES applications point to examples of specific improvements in the performance of a specific piece of equipment or a specific line. But what is often left out of the story is the real, material impact such measures have had on increased production, lower cost per case or the output of the factory as a whole -- factors that have now become critical for margin protection.

Many CPG manufacturers now realize that they already possess the assembled internal knowledge to solve these problems. What they lack is a framework to apply that knowledge for tangible and repeatable business improvement. That's where MOM comes in.

#2: MOM Helps CPG Companies Process and Act on Metrics

Instead of first collecting massive amounts of data in order to further analyze what operators and supervisors already know, MOM approaches the issue from the opposite end.

MOM rightly assumes that plant workers and management already have the collective experience and intelligence to know the best ways to improve their own performance, as long as they also have access to some straightforward and structured tools and techniques. MOM builds these proven best practices and real-time capabilities directly into the software, thereby providing workers with the tools and framework they need to take appropriate and timely action.

MOM systems enable this timely action by providing constant visual communication to operators and supervisors of performance trends and targets. They also enable productive short-interval control meetings where metrics such as overall equipment effectiveness (OEE), uptime and yield are evaluated and acted upon. And they remove the traditional paper-based reporting system that only serves to overwhelm shop floor workers.
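To make those metrics concrete, here is a minimal sketch, not taken from any MOM product, of how OEE is conventionally computed from availability, performance and quality; the shift numbers are invented.

```python
# Minimal OEE sketch (illustrative only; shift figures are invented).
def oee(planned_min, downtime_min, ideal_rate, units_made, units_good):
    """Overall equipment effectiveness = availability x performance x quality."""
    run_min = planned_min - downtime_min
    availability = run_min / planned_min               # share of planned time running
    performance = units_made / (ideal_rate * run_min)  # actual speed vs. ideal rate
    quality = units_good / units_made                  # first-pass yield
    return availability * performance * quality

# 480 planned minutes, 45 down, ideal rate 100 units/min, 39,000 made, 37,800 good
print(round(oee(480, 45, 100, 39_000, 37_800), 3))  # ~0.787, i.e. roughly 79% OEE
```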

Just as important, MOM systems can provide this functionality pre-packaged to fit a specific industry sector. This prevents the manufacturer from having to research, design, build and then "pilot" the application to make it work for their specific situation.

#3: MES Has No Statistical Impact on Plant Performance

Another interesting finding from the AMR survey was that companies that had implemented MES applications still faced the same operational challenges as those with no MES in place.

In fact, companies using MES applications had not seen a significant impact on their ability to identify unused capacity, cut labor costs or effectively diagnose operational or in-plant problems -- factors that directly drive productivity, profitability and responsiveness.

Although using an MES clearly has an advantage for automating business processes and data capture, the survey results revealed that employing an MES does not, by itself, prompt the kind of action that drives improvement.

#4: MOM Employs a Feedback Loop That Makes Action Unavoidable on the Shop Floor

Instead of collecting massive amounts of data that are difficult to communicate and digest and that yield minimal practical feedback to the shop floor, MOM provides operators with a means to help solve problems on their line as they are actually running it.

The reason is simple: MOM recognizes that there is little value in knowing after the fact that you lost an hour of production that could have been avoided. Instead of dwelling on what can't be fixed, MOM empowers the workforce to avoid such losses before they happen. For instance, when workers can self-monitor and respond faster, what would have been a wasted hour is now a 10-minute loss, which means that more product can be produced with little management intervention.

Easy-to-use touch screens allow operators to add key intelligence, such as root causes for stoppages and rejects, all in real time. They help ensure that changeovers are performed on time, by setting expectations and tracking actual changeover time. And they provide a fast and accurate means for the shop floor to report hidden defects and downtime as they occur.

Manufacturers using MOM have found that when problems are suddenly visible and within workers' immediate sphere of influence, the workers can't help but take immediate action to solve them. In essence, action becomes "unavoidable."

#5: MOM Considers the Whole Plant Picture

Finally, unlike MES applications, which focus strictly on plant and equipment measures, MOM looks at the whole plant picture, including the impact of the human factor on overall financial performance.

How important is the people factor? A recent study carried out by CDC Software on more than 100 food and beverage plants and more than 700 production lines revealed that only 18% of improvement opportunities were plant and equipment-related.

Yet 81% of the opportunities revolved around instituting or improving basic people-related issues and processes -- practices such as disciplined day-to-day review points, strict adherence to procedures and appropriate basic skills training -- all of which MOM helps facilitate by providing a simple, highly visible, real-time framework.

Tough Times Call for a Different Approach

As economic conditions continue to deteriorate, CPG companies must find opportunities not only to preserve and uncover profits, but to do so without major capital expenditures. To accomplish this, they must begin to migrate away from meaningless data capture and go back to the basics of running efficient plants.

MOM is a critical step to achieving this vision. For hundreds of CPG manufacturers worldwide, MOM provides the framework to avert small losses as they happen, manage runs more efficiently, instill confidence in the information being gathered and enable workers at every level to act decisively and positively on that information.

Mark Sutcliffe is the president of CDC Factory, a division of CDC Software, an enterprise software company (www.cdcsoftware.com).

The Internet Turns 20

In the beginning, nobody wanted anything to do with it

By Holger Schmidt

Tim Berners-Lee in his younger years: in 1994 at his workstation at Cern in Geneva, the large research facility for particle physics

March 13, 2009. In 1989 the Briton Tim Berners-Lee really only wanted to develop a technique to improve collaboration among the researchers at the large particle-physics research institute (Cern) in Geneva. But his project proposal "Information Management: A Proposal", submitted on March 13, 1989, would turn out to be the cornerstone of the internet and to fundamentally change how a billion people communicate and gather information. His supervisor at the time, Mike Sendall, may have suspected that a great idea lay on his desk, but he was not sure. "Vague, but exciting", Sendall wrote on the proposal, at the heart of which stood "hypertext", which links pieces of information in a network through logical connections.

The system was intended to let the staff at Cern access the research results of their colleagues all over the world. Berners-Lee also supplied the programming language for internet pages, the Hypertext Markup Language (HTML), which is still in use today, as well as the mechanism for data transfer, the Hypertext Transfer Protocol, whose abbreviation HTTP likewise still stands for data transmission on the internet.

At first, nobody wanted anything to do with it

The Queen and the inventor: Elizabeth II shakes hands with internet veteran Sir Tim Berners-Lee

But as so often with groundbreaking inventions, at first nobody wanted anything to do with it. Only in 1991 did the first universities set up network computers to exchange their research results on the basis of Berners-Lee's ideas. The internet might never have made it out of the research corner had the American student Marc Andreessen not created the first widely used graphical browser, Mosaic, at the National Center for Supercomputing Applications in 1993. The browser displayed internet pages with graphics, so that even non-technical users could reach the page they wanted with a simple click on a hyperlink.

That was the breakthrough. Andreessen founded the company Netscape, and its Navigator browser became the gateway to the internet through which millions of people streamed. The online services AOL and CompuServe opened their closed systems to the internet, and in Germany Telekom's Bildschirmtext eventually became T-Online. E-mail and access to the World Wide Web stood open, mostly over slow modems, but still.

The fascination of worldwide communication and suddenly effortless access to information electrified more and more people. In the mid-nineties the World Wide Web began to fill up. In July 1995 Jeff Bezos brought the internet store Amazon online, and in September of the same year the internet marketplace Ebay was founded by Pierre Omidyar. 1995 was also the birth year of Yahoo, whose founders Jerry Yang and David Filo simply sorted the growing number of internet pages into categories so that users could keep their bearings.

One man was certainly not among the internet visionaries: Bill Gates

One man certainly did not belong among the internet visionaries of the day: Bill Gates, the founder of Microsoft, colossally underestimated the internet. He tried to keep his users inside the closed MSN service. Luckily for Andreessen, his Netscape Navigator spread rapidly, and nothing stood in the way of the first successful IPO of an internet company. Only years later would Gates recognize the importance of the net and, with his Internet Explorer browser, take up the pursuit of the pioneer Andreessen.

In 1999 enthusiasm for the internet seized the stock market, only to abandon it again in 2000. The internet bubble had burst, yet in retrospect that phase was merely a dent in the data network's stormy growth path. User numbers kept climbing unabated, and by 2004 at the latest the enthusiasm had returned to the stock market as well. The search engine Google delivered a sensational stock-market debut and made the internet respectable on the trading floor again. Alongside Amazon and Ebay, Google had found the third great business model: aligning online advertising precisely with users' wishes. The model of displaying, within at most half a second, small text ads (with hyperlinks) that answer the search term just typed in brought Google more than 20 billion dollars in revenue last year.

WWW inventor Berners-Lee receives the "Millennium Technology Prize" in 2004

What takes other industries decades plays out on the internet in fast motion: transformative technologies such as broadband connections and mobile devices like the iPhone keep life pulsing on the net. Soon all telephone calls will be carried over the internet; television, too, is shifting more and more onto the net.

Openness as the big new trend on the net

After the pioneers, companies such as Facebook and Twitter are now the new stars of Web 2.0, which for many represents the true internet, for now millions of people communicate with one another over the net. The big trend at the moment is openness: internet companies are opening up their software. Millions of developers write add-on programs or build their own websites with the freely available software. Even newspapers such as the "New York Times" and the British "Guardian" are opening up so that their content can spread across the net.

Berners-Lee used this NeXT computer in 1990. It was the first machine to function as both a web server and a web browser

And Berners-Lee? He is now Sir Tim Berners-Lee, decorated with many honors, and he is working on the third generation of the net, the semantic internet. It is meant to truly understand content and to know, for example, whether "bank" refers to the bench you are sitting on or the place where you deposit your money. Having laid the cornerstone, Berners-Lee could thus trigger the next internet revolution.



Text: F.A.Z.
Images: AP, dpa, REUTERS


Friday, March 13, 2009

Mapping a City's Rhythm

A phone application reveals San Francisco hot spots and will soon show where certain "tribes" gather.

By Kate Greene

Outside vibe: Citysense is a downloadable application for the iPhone and BlackBerry. It provides a heat map of GPS activity in a major city. Here, San Francisco is shown with red patches that indicate higher activity. The application has also identified the user’s location (a solid yellow dot) and suggests popular destinations (yellow circles).
Credit: Sense Networks

Over the course of any day, people congregate in different parts of a city. In the morning hours, workers commute downtown, while at lunchtime and in the evening, people disperse to eateries and bars.

While this sort of behavior is common knowledge, it hasn't been visible to the average person. Sense Networks, a startup based in New York, is now trying to bring this side of a city to life. Using cell-phone and taxi GPS data, the startup's software produces a heat map that shows activity at hot spots across a city. Currently, the service, called Citysense, only works in San Francisco, but it will launch in New York in the next few months.

On Wednesday, at the O'Reilly Emerging Technologies conference in San Jose, CA, Tony Jebara, chief scientist for Sense Networks and a professor at Columbia University, detailed plans of a forthcoming update to Citysense that shows not only where people are gathering in real time, but where people with similar behavioral patterns--students, tourists, or businesspeople, for instance--are congregating. A user downloads Citysense to her phone to view the map and can choose whether or not to allow the application to track her own location.

The idea, says Jebara, is that a person could travel to a new city, launch Citysense on her phone, and instantly get a feel for which neighborhoods she might want to spend the evening visiting. This information could also help her filter restaurant or bar suggestions from online recommendation services like Yelp. Equally important, from the company's business perspective, advertisers would have a better idea of where and when to advertise to certain groups of people.

Citysense, which has access to four million GPS sensors, currently offers simple statistics about a city, says Jebara. It shows, for instance, whether the overall activity in the city is above or below normal (Sense Networks' GPS data indicates that activity in San Francisco is down 34 percent since October) or whether a particular part of town has more or less activity than usual. But the next version of the software, due out in a couple of months, will help users dig more deeply into this data. It will reveal the movement of people with certain behavior patterns.

"It's like Facebook, but without the self-reporting," Jebara says, meaning that a user doesn't need to actively update her profile. "We want an honest social network where you're connected to someone because you colocate."

In other words, if you live in San Francisco and go to Starbucks at 4 P.M. a couple of times a week, you probably have some similarities with someone in New York who also visits Starbucks at around the same time. Knowing where a person in New York goes to dinner on a Friday night could help a visitor to the city make a better restaurant choice, Jebara says.

As smart phones with GPS sensors become more popular, companies and researchers have clamored to make sense of all the data that this can reveal. Sense Networks is a part of a research trend known as reality mining, pioneered by Alex Pentland of MIT, who is a cofounder of Sense Networks. Another example of reality mining is a research project at Intel that uses cell phones to determine whether a person is the hub of a social network or at the periphery, based on her tone of voice and the amount of time she talks.

Jebara is aware that the idea of tracking people's movements makes some people uncomfortable, but he insists that the data used is stripped of all identifying information. In addition, anyone who uses Citysense must first agree to let the system log her position. A user can also, at any time, delete her data from the Sense Networks database, Jebara says.

Part of Sense Networks' business plan involves providing GPS data about city activity to advertisers, Jebara says. But again, this does not mean revealing an individual's whereabouts--just where certain types of people congregate and when. For instance, Sense Networks' data-analysis algorithms may show that a particular demographic heads to bars downtown between 6 and 9 P.M. on weekdays. Advertisers could then tailor ads on a billboard screen to that specific crowd.

So far, Jebara says, Sense Networks has categorized 20 types, or "tribes," of people in cities, including "young and edgy," "business traveler," "weekend mole," and "homebody." These tribes are determined using three types of data: a person's "flow," or movements around a city; publicly available data concerning the company addresses in a city; and demographic data collected by the U.S. Census Bureau. If a person spends the evening in a certain neighborhood, it's more likely that she lives in that neighborhood and shares some of its demographic traits.

By analyzing these types of data, engineers at Sense Networks can determine the probability that a user will visit a certain type of location, like a coffee shop, at any time. Within a couple of weeks of data collection, says Jebara, this probability matrix reliably predicts the type of place--not the exact place or location--that a person will visit at any given hour of the week. The probability is constantly updated, but in general, says Jebara, most people's behavior does not vary dramatically from day to day.
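As a rough illustration of how such a place-type-by-hour matrix could be estimated, one might count anonymized visits by hour of the week and normalize. This is a sketch only, not Sense Networks' algorithm; the visit log and categories are invented.

```python
# Toy estimate of P(place type | hour of week) from anonymized visit logs.
# Data and categories are invented; this is not Sense Networks' algorithm.
from collections import Counter, defaultdict

visits = [(17, "coffee_shop"), (17, "coffee_shop"), (17, "office"),
          (112, "bar"), (112, "restaurant"), (112, "bar")]   # (hour 0-167, place)

counts = defaultdict(Counter)
for hour, place in visits:
    counts[hour][place] += 1

def place_probabilities(hour):
    """Probability of each place type at a given hour of the week."""
    c = counts[hour]
    total = sum(c.values())
    return {place: n / total for place, n in c.items()}

print(place_probabilities(17))   # ≈ {'coffee_shop': 0.67, 'office': 0.33}
```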

Sense Networks is exploring what GPS data can reveal about behavior, says Eric Paulos, a professor of computer science at Carnegie Mellon. "It's interesting to see things like this, [something] that was just research a few years ago, coming to the market," he adds. Paulos says it will be important to make sure that people are aware of what data is being used and how, but he predicts that more and more companies are going to find ways to make use of the digital bread crumbs we leave behind. "It's going to happen," he says.

Thursday, March 12, 2009


CFO optimism index

Dead-cat bounce?

Mar 11th 2009
From Economist.com

Chief financial officers around the world are a bit less gloomy


A GLIMMER of recovery or a dead-cat bounce? Confidence in economic prospects has picked up slightly among chief financial officers around the world—although pessimists still far outnumber optimists. This is according to the latest quarterly poll of over 1,000 CFOs, conducted in late February by Duke University in America, Tilburg University in the Netherlands and CFO, a sister publication to The Economist. But whereas finance chiefs may be marginally less dour than in previous quarters, they are continuing to slash earnings forecasts and are speeding up plans for layoffs and spending cuts. Most CFOs in America, Europe and Asia expect to freeze hiring and wages over the next 12 months.


Wednesday, March 11, 2009

The new normal


The business landscape has changed fundamentally; tomorrow’s environment will be different, but no less rich in possibilities for those who are prepared.

This short essay by McKinsey’s worldwide managing director, Ian Davis, is a Conversation Starter, one in a series of invited opinions on topical issues. Read what the author has to say, then tell us what you think the new normal will look like.

It is increasingly clear that the current downturn is fundamentally different from recessions of recent decades. We are experiencing not merely another turn of the business cycle, but a restructuring of the economic order.

For some organizations, near-term survival is the only agenda item. Others are peering through the fog of uncertainty, thinking about how to position themselves once the crisis has passed and things return to normal. The question is, “What will normal look like?” While no one can say how long the crisis will last, what we find on the other side will not look like the normal of recent years. The new normal will be shaped by a confluence of powerful forces—some arising directly from the financial crisis and some that were at work long before it began.

Obviously, there will be significantly less financial leverage in the system. But it is important to realize that the rise in leverage leading up to the crisis had two sources. The first was a legitimate increase in debt due to financial innovation—new instruments and ways of doing business that reduced risk and added value to the economy. The second was a credit bubble fueled by misaligned incentives, irresponsible risk taking, lax oversight, and fraud. Where the former ends and the latter begins is the multitrillion dollar question, but it is clear that the future will reveal significantly lower levels of leverage (and higher prices for risk) than we had come to expect. Business models that rely on high leverage will suffer reduced returns. Companies that boost returns to equity the old-fashioned way—through real productivity gains—will be rewarded.

Another defining feature of the new normal will be an expanded role for government. In the 1930s, during the Great Depression, the Roosevelt administration permanently redefined the role of government in the US financial system. All signs point to an equally significant regulatory restructuring to come. Some will welcome this, on the grounds that modernization of the regulatory system was clearly overdue. Others will view the changes as unwanted political interference. Either way, the reality is that around the world governments will be calling the shots in sectors (such as debt insurance) that were once only lightly regulated. They will also be demanding new levels of transparency and disclosure for investment vehicles such as hedge funds and getting involved in decisions that were once the sole province of corporate boards, including executive compensation.

While the financial-services industry will be most directly affected, the impact of government’s increased role will be widespread: there is a risk of a new era of financial protectionism. A good outcome of the crisis would be greater global financial coordination and transparency. A bad outcome would be protectionist policies that make it harder for companies to move capital to the most productive places and that dampen economic growth, particularly in the developing world. Companies need to prepare for such an eventuality—even as they work to avert it.

These two forces—less leverage and more government—arise directly from the financial crisis, but there are others that were already at work and that have been strengthened by recent events. For example, it was clear before the crisis began that US consumption could not continue to be the engine for global growth. Consumption depends on income growth, and US income growth since 1985 had been boosted by a series of one-time factors—such as the entry of women into the workforce and an increase in the number of college graduates—that have now played themselves out. Moreover, although the peak spending years of the baby boom generation helped boost consumption in the ’80s and ’90s, as boomers age and begin to live off of retirement savings that were too small even before housing and stock market wealth evaporated, consumption levels will fall.

Companies seeking high rates of income and consumption growth will increasingly look to Asia. The fundamental drivers of Asian growth—productivity gains, technology adoption, and cultural and institutional changes—did not halt as a result of the 1997 Asian financial crisis. And Asian economies—though they have rapidly deteriorated in recent months—are unlikely to be stopped by this one. The big unknown is whether the temptation to blame Western-style capitalism for current troubles will lead to backlash and self-destructive policies. If this can be avoided, the world’s economic center of gravity will continue to shift eastward.

Through it all, technological innovation will continue, and the value of increasing human knowledge will remain undiminished. For talented contrarians and technologists, the next few years may prove especially fruitful as investors looking for high-risk, high-reward opportunities shift their attention from financial engineering to genetic engineering, software, and clean energy.

This much is certain: when we finally enter into the post-crisis period, the business and economic context will not have returned to its pre-crisis state. Executives preparing their organizations to succeed in the new normal must focus on what has changed and what remains basically the same for their customers, companies, and industries. The result will be an environment that, while different from the past, is no less rich in possibilities for those who are prepared.

About the Author

Ian Davis is the worldwide managing director of McKinsey & Company.

Heightened Complexities

Recession creates unpredictable customer and supplier patterns.

Shnouws Semiconductor has never had an easy time of clearly identifying short- or long-term demand and triggering our supply chain based on that demand. Other manufacturers may have the luxury of classic A, B, and C product categories, but at Shnouws we have the entire alphabet. We design and produce nearly 4,000 different electronic SKUs (radio frequency circuits, oscillators, controllers, sensors, semiconductors, etc.) for nearly 11,000 customers worldwide. Nearly half of our business is contract manufacturing.

Managing our complex supply chain and outsourcing relationships — not unlike that of many electronics companies today, including our competitors — has always been a mix of science and art when trying to balance demand and supply. Within the current recession, though, we've gone from complex to volatile to absolutely unpredictable: customer orders disappear overnight; outsourcing contracts (for which we've already invested in material and assets) are cancelled; suppliers can't get credit to buy raw materials; other suppliers fail without warning; customers go out of business; and yet some of our markets (green technologies) experience huge growth.

Shnouws needs to develop a more cohesive approach to identifying and segregating our demand and then quickly sharing those signals throughout our diverse vendor base — even amid the current chaos. We must be able to predict demand more clearly, more efficiently aggregate the design and manufacture of our products, and more productively manage our supply chain. We're not looking for a magic potion, just some better ideas, processes, and tools.
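One conventional starting point for segregating demand of this sort is the ABC classification alluded to above: rank SKUs by revenue contribution and cut the ranking at cumulative thresholds. A minimal sketch with invented SKU names and figures:

```python
# ABC demand segmentation sketch (SKU names and revenues are invented).
def abc_classify(skus, a_cut=0.80, b_cut=0.95):
    """Top ~80% of cumulative revenue -> A, next ~15% -> B, the tail -> C."""
    ranked = sorted(skus.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(skus.values())
    classes, running = {}, 0.0
    for sku, revenue in ranked:
        running += revenue / total
        classes[sku] = "A" if running <= a_cut else "B" if running <= b_cut else "C"
    return classes

print(abc_classify({"RF-101": 500_000, "OSC-7": 300_000,
                    "CTRL-2": 120_000, "SNS-9": 50_000, "SEMI-3": 30_000}))
# {'RF-101': 'A', 'OSC-7': 'A', 'CTRL-2': 'B', 'SNS-9': 'C', 'SEMI-3': 'C'}
```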

Tuesday, March 10, 2009

Volkswagen: "Breakthrough for RFID in materials logistics"

Wolfsburg. The Volkswagen Group plans to build its materials logistics around "state-of-the-art information technology". This alone is expected to cut manual effort in goods receiving by up to 80 percent, the carmaker recently announced. VW is putting the prerequisites in place in the central logistics hall at group headquarters in Wolfsburg.

This was preceded by a one-year pilot project in which Volkswagen and IBM tested RFID (radio frequency identification) technology together with suppliers. For the pilot at the Wolfsburg plant, Volkswagen fitted 3,000 special containers with RFID tags; sliding roofs for the new Golf, for example, were tracked. According to VW, antennas at hall entrances, on handheld readers and on forklifts reliably identified the containers and their contents. Thomas Zernechel, head of group logistics: "The technology Volkswagen is using streamlines goods receiving into a single step: four pallets on one forklift are recognized simultaneously and booked into inventory automatically. Beyond that, the technology has been refined to the point that even metal containers, which generally interfere with radio traffic, can be read." Together, said VW project lead Marc Wenzel, the partners had achieved a breakthrough for the everyday viability of RFID technology in automotive manufacturing and beyond.
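A minimal sketch of that one-step goods receipt: a gate antenna reads every container tag on a passing forklift and books them all into stock at once. The tag IDs, part lookup and inventory store below are invented for illustration; this is not VW's or IBM's system.

```python
# Sketch of one-step RFID goods receipt (all identifiers are invented).
inventory = {}                                   # part number -> quantity on hand
tag_to_part = {"E200.A1": ("sunroof_golf", 40),  # RFID tag -> (part, qty/container)
               "E200.A2": ("sunroof_golf", 40),
               "E200.B7": ("sensor_pack", 200)}

def gate_read(tag_ids):
    """Book every container seen in one antenna read into stock."""
    for tag in tag_ids:
        part, qty = tag_to_part[tag]
        inventory[part] = inventory.get(part, 0) + qty
        print(f"booked {qty} x {part} (container {tag})")

# Several pallets pass the gate on one forklift: a single booking step, no scanning.
gate_read(["E200.A1", "E200.A2", "E200.B7"])
```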

"Our long-term goal is an end-to-end, paperless production and logistics chain across the entire group", said Klaus Hardy Mühleck, head of group IT at Volkswagen. The pilot project showed "how we can integrate innovative RFID technology into our business processes reliably and cost-effectively". Kurt Rindle, responsible for RFID at IBM, emphasized: "The pilot project is pioneering: it is the first in the world to put a material flow between suppliers and an automaker into day-to-day operation using RFID technology." (pi)


Video Process Monitoring


By Steve Rubin, President & CEO
Longwatch, Inc.

There are three primary ways to enjoy a baseball game: view it in person, watch it on TV or listen to it on the radio. We’ve probably all used each of these methods at one time or another; some fans even combine a couple of methods such as bringing a radio to the game to hear the announcers describe what they are seeing. Maybe they don’t trust their own interpretation of what they see, and need to have it explained. That’s like watching readouts on an HMI screen—what’s really happening out there? Why not cut to the chase and see what’s going on?

Today, there are two primary ways to monitor a process: in person, by walking around the plant, or from a control room via an HMI screen. But there is a third way: Watching the process via camera monitors that put images directly on the HMI screen (Figure 1) or onto a cell phone or PDA. That way, you don’t have to wonder what’s happening at the process unit. You can see it.

Camera monitoring of process control is relatively rare. Granted, there are a few niche applications, such as a flare stack camera or a camera in the shipping area. But why hasn't video been applied more widely in process monitoring applications, and what additional value could it deliver?

A camera image on an HMI screen can be used to verify that certain operations are actually being performed—such as an operator adding ingredients to a batch reactor. The video “snapshot” can be stored on disk, along with batch production data, as a visual record. Like a baseball game, it’s an instant replay. Unlike a baseball game, you can watch it as many times as you want.

In water and wastewater treatment plants, where tanks, lagoons, pumping stations and other equipment are spread over a wide area, camera images can make sure that valves open and close, lagoons are at the right level, and a child hasn’t fallen into the lagoon. The same stream of video has multiple purposes, much like a multi-variable sensor. Video not only provides process information, it can mitigate liability and provide security. With video, you know whether to take a shotgun or a wrench to the field to fix a stuck valve.

A camera image can be used to help diagnose problems in the field. Wouldn’t it be nice to be able to actually see what the process is doing without having to walk out to a distillation column in a Texas summer? I’ve been to plenty of plants where the sensors have exhibited problems that confounded the operators: open circuits on thermocouples, valves that stick, pressure transmitters that fail, and so on. In some instances, the engineers solve this problem by putting extra code in the control system or installing extra sensors, so that problems can be diagnosed remotely. Imagine how much easier it would be to simply look at a camera image of the valve while it was operating, to see if it was sticking.

For example, I’ve seen a situation where instrumentation told the operator that pumps were off-loading oil from a barge, only to find out later that the coupling was off and the oil didn’t make it into the plant…it made it into the river! Another situation involved a cryogenic pump that froze up on a humid day and didn’t work properly. These are situations that an operator could have seen and understood as they happened, if he or she had video.

Watching Instead of Visualizing
Whenever I’m visiting a plant, I’m usually given a tour of the facility, shown the process equipment and the instrumentation, and then brought to the control room. There, the operators show me the HMI displays that are designed to mimic the layout, behavior and status of the equipment. It’s up to the operator to imagine, in his or her mind’s eye, what’s actually happening in the plant, given the indications from the instrumentation.

This is easier with today’s HMIs than with the traditional dial, chart recorder and annunciators of 10-20 years ago, but this method relies on the accuracy and timeliness of the instrumentation readings, as well as the fidelity of the HMI’s mimic display.

Some process measurements are easier to see than they are to instrument (the flare camera comes to mind…a quick look tells you whether it’s lit and whether it is smoking). And a camera image can tell you if a process vessel is overflowing, a line is leaking, or if steam is escaping (Figure 2). After all, “a picture’s worth a thousand words.”

“Attention by exception” is really how most process plants operate. We periodically observe indications, but only take action when an alarm condition occurs. The alarm may be an “alert” about an event, or it may be an actual transition requiring remedial action.

If a video image of the process unit appeared on the screen when an exception occurred, the operator would be able to immediately see what was happening. This method of operation is much better than the standard “surveillance” approach of watching closed-circuit TV monitors, waiting to pick up visual cues to changes. Studies have shown that humans will lose interest and be unable to detect changes shown on screens if they are forced to stare for more than 20 minutes.

How many times have you needed to “babysit” an intermittent problem in the field or the plant? Video is a tool that can significantly boost efficiency. A video system can be configured to watch equipment and take video snapshots when certain conditions occur—such as that intermittent problem.

With proper configuration, a “before” and “after” clip can be generated to help troubleshoot the cause of the problem. In other words, the camera watches the process continuously, and stores video images internally; when an alarm condition occurs, it can transmit a video snapshot of the unit for, say, five minutes before the alarm occurred, and then continue transmitting real-time images for as long as necessary. All the images can be stored for playback and analyzed as many times as necessary.
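A minimal sketch of that before-and-after mechanism, assuming a rolling pre-alarm buffer; the frame rate, durations and frame source are invented, and this is not Longwatch's implementation.

```python
# Rolling "before" buffer spliced to "after" frames on alarm (illustrative).
from collections import deque

FPS = 5
PRE_ALARM_SECONDS = 300                      # five minutes before the alarm

pre_buffer = deque(maxlen=FPS * PRE_ALARM_SECONDS)   # oldest frames fall off

def on_frame(frame):
    """Called for every captured frame during normal operation."""
    pre_buffer.append(frame)

def on_alarm(post_alarm_frames):
    """Form the event clip: everything buffered before, plus what follows."""
    return list(pre_buffer) + list(post_alarm_frames)
```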

Transmitting Data to the Control System
If video can help us discern more about the real conditions, help us see into remote areas, and bring us more information faster…then why has video been so slow to be adopted in process applications? Until recently, all camera technology was analog, and thus it required its own network for operation. Many traditional video systems were originally designed for casinos, parking garages and shopping malls. It was extremely difficult to connect these “closed circuit TVs” to an HIM screen.

But many plants have a much more suitable network, one that’s almost hack-proof, paid for, and reliable: the instrumentation network for the SCADA/HMI system, Figure 3. This network was designed to handle digital communications between controllers and computers, and it can easily accommodate video images, too.

Some users are reluctant to use the SCADA (or “level one”) network for anything other than process control communications. In some respects, that makes sense: when there’s a plant upset, you want to make sure that control messages (for example: turn off a motor) are delivered on time and reliably. But there are ways to embed chunks of video clips into “envelopes” that simply pass through the SCADA network. These messages travel at a lower priority than the control messages, so it might take longer for the entire video clip to show up at the HMI. But it’s better than seeing nothing, and better than driving to a remote process unit.
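As a sketch of that prioritization, assuming a simple priority-queued link: video clips are cut into low-priority "envelopes" so that control messages always dequeue first. The queue, chunk size and message format below are invented for illustration.

```python
# Control traffic outranks video "envelopes" on a shared link (illustrative).
import heapq
import itertools

CONTROL, VIDEO = 0, 1            # lower value = higher priority
_seq = itertools.count()         # tie-breaker preserves FIFO order per priority
queue = []

def send(priority, payload):
    heapq.heappush(queue, (priority, next(_seq), payload))

def enqueue_video(clip, chunk_size=1024):
    """Cut a clip into low-priority chunks that yield to control messages."""
    for i in range(0, len(clip), chunk_size):
        send(VIDEO, clip[i:i + chunk_size])

send(CONTROL, b"motor_7:OFF")    # the control command...
enqueue_video(b"\x00" * 4096)    # ...outranks these four video envelopes
priority, _, first = heapq.heappop(queue)
assert priority == CONTROL       # the control message is delivered first
```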

New camera systems use the bandwidth available in most industrial networks to report video information to the host system, handle real-time diagnostics and the like. It’s easy to “drop” cameras on these networks, especially if two networks are being used in the plant: one for instrumentation/control and the other for operations data/information.

While digital cameras will quickly saturate most networks, proper software provides a means of controlling the network traffic, in an orderly fashion, so that video and control messages can co-exist.

Adding a video camera to most industrial networks simply requires a local “Ethernet port” or a drop-in communications module, similar to adding another smart sensor. The camera plugs in and can be configured as just another sensor. However, configuration and the integration with a SCADA/HMI system can be tricky if you don’t have good application software.

Dealing With Data
Digital cameras can become the electronic equivalent of purple loosestrife (prevalent in the Charles River basin around Boston): Nice to look at, but overloads the environment quickly. In fact, just four cameras transmitting images could use 55MB of bandwidth every minute! Storing video from twelve megapixel cameras for 10 days would take 2.4 terabytes of storage.
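Both figures are consistent if each camera streams at roughly 230 KB/s (a plausible motion-JPEG rate; the exact rate is an assumption, as is reading "twelve megapixel cameras" as twelve cameras):

```python
# Back-of-envelope check of the bandwidth and storage figures above.
rate = 230e3                                  # assumed bytes/second per camera

minute_4cams = 4 * rate * 60                  # traffic from four cameras per minute
ten_days_12cams = 12 * rate * 86_400 * 10     # storage for twelve cameras, 10 days
print(f"{minute_4cams / 1e6:.0f} MB/min, {ten_days_12cams / 1e12:.1f} TB")
# -> 55 MB/min, 2.4 TB
```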

How can we deal with the issues of:

• Mitigating impact and traffic on the network,
• Deterministic response of the network,
• Reasonable storage requirements and fast access to video-of-interest, and
• Integrating easily with the operator’s standard console

in a way that’s consistent with good engineering and operating practices for industry?

The solution lies with software specifically designed for video applications in factory automation. Essentially, the software asks for video images to be transmitted over the factory network only when it needs the data—such as when an alarm occurs or a step occurs in a batch procedure.

The software converts a computer into a comprehensive digital video recorder (DVR) that:

• Continuously collects and archives high-resolution video from a multitude of cameras
• Automatically edits the video stream into before-and-after “clips” when an “event” occurs
• Sends that video clip, along with an alarm message, to the HMI or SCADA system for further handling by the user
• Automatically latches the alarm so that the user is always notified (and must acknowledge it) and no alarms are lost
• Automatically stores and forwards messages if the network is temporarily unavailable
• Communicates efficiently over almost any network, from gigabit fiber to mid-speed wireless and wired systems, all the way down to 9600-baud wireless and telephone lines

Newer industrial video monitoring systems have all these features. Installation is simple, requiring only plug-in connections to the existing network. The software to process video data can be embedded in most commercial HMI/SCADA software systems, or interfaced via OPC. And the commands to take video snapshots can be defined by ISA-88 recipes, or sent by any major distributed control system, at various stages in a batch or continuous process, or in response to an alarm.

A key need is to edit the videos down to the clips that are important, and store those compressed clips in a way that makes them easy to retrieve. A relational database serves this need well, especially in manufacturing and process control applications.

How do you interface to the HMI? Fortunately, the industry has developed standards such as OPC, HTTP and even Modbus. These enable application programs to share data structures and pass commands between applications. In the case of Longwatch, we use OPC so that the operator can, from an HMI (like InTouch or iFIX), send a command to acknowledge a video alarm, create an event clip “on the fly,” or even go into “live streaming” mode to take an immediate look into the field.
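As an illustration of that kind of command traffic, here is a hedged sketch using the open-source python-opcua client as a stand-in (the article predates OPC UA, and the endpoint address and node IDs below are invented, not Longwatch's actual tag names):

```python
# Acknowledge a video alarm and request live streaming over OPC UA
# (endpoint and node IDs are invented; any OPC stack would look similar).
from opcua import Client

client = Client("opc.tcp://videoserver:4840")   # hypothetical video-server endpoint
client.connect()
try:
    client.get_node("ns=2;s=Camera7.AlarmAck").set_value(True)    # ack the alarm
    client.get_node("ns=2;s=Camera7.LiveStream").set_value(True)  # go live
finally:
    client.disconnect()
```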

Video has the opportunity to be applied to process control applications in ways that were just not possible a few years ago. Better yet, there is technology that enables video to be transmitted over very long distances, on existing networks, so that “blind spots” in the operation are eliminated at very low cost.

Video images can also be transmitted to cell phones or PDAs, so that engineers can diagnose problems from home in the middle of the night. Or an engineer can stand next to the process unit, watch the recorded video snapshot of a problem, and try to figure out what really happened. Because the video data is available as an historical record, images and process data can be sent to an outside expert for analysis.

Having video available on HMI screens helps operators see what’s going on at a process unit, verifies and provides a record that events occurred, and helps operators and engineers diagnose problems from afar.

FIGURE CAPTIONS

Figure 1: Video images on the HMI screen at the Littleton water department in Littleton, MA, allow operators to see what’s happening in the well house and surrounding area.

Figure 2: An operator at the water plant in Madison, MA, can quickly glance at video images from important areas of the process.

Figure 3: Diagram shows how cameras transmit data over standard industrial networks to a video processor. The video processor stores the data in a relational database and makes images available to an HMI/SCADA system, DCS, PLC, process historians, web browsers, cellphones and PDAs.

About Longwatch
Longwatch, Inc. was founded by industrial automation and software veterans with the goal of simplifying video delivery over existing SCADA, HMI and distributed control networks. The result is the Longwatch Surveillance System™, a portfolio of products that enables SCADA system users to view events and easily verify alarms at local and remote sites using both legacy and new networking infrastructures. The system integrates video and system alarms on the same display for fast, reliable operation and decision-making. Visit the website.

Monday, March 09, 2009

A Mobile Mesh Network Goes Nuclear

Backpacks that detect nuclear material form a wireless mesh network.

By Kristina Grifantini

Nuclear option: This backpack (top) contains sensors that detect radioactive materials. Coupled with a mesh network, it automatically builds a map showing hazardous materials in the area. A mesh node wirelessly transmits results from nuclear sensors to handheld devices (bottom), such as a wrist-worn display or a PDA.
Credit: Joseph Tumminello

New mesh-networking technology will allow soldiers to more quickly search an area for signs of nuclear contamination. A company called Rajant has combined mesh radio transmitters with radiation-sensing backpacks to create a system that automatically sets up a communications mesh and displays a map of radiation across a region.

Mesh networking offers a fast, cheap way to construct a communications network. Instead of sending messages via a central command point, information hops from one node to another until it reaches its destination. A number of companies use static mesh networks for tasks such as traffic monitoring and environmental sensing, but creating reliable mobile mesh networks is more challenging. This is because each node must constantly track its moving neighbors to figure out how best to pass on a message while also preserving energy and bandwidth. Rajant addresses this problem by having nodes monitor just a few of their closest neighbors at any time.
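A toy sketch of that neighbor-limited design: each node sorts peers by distance, monitors only its k nearest, and greedily forwards toward the destination. Positions, k and the forwarding rule are invented for illustration; Rajant's actual routing is more sophisticated.

```python
# Toy mesh: monitor only the k nearest peers, forward greedily (illustrative).
import math

positions = {"A": (0, 0), "B": (1, 0), "C": (2, 1), "base": (3, 1)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_neighbors(node, k=2):
    """Track only the k nearest peers instead of the whole mesh."""
    others = [n for n in positions if n != node]
    return sorted(others, key=lambda n: dist(positions[node], positions[n]))[:k]

def route(src, dst, max_hops=10):
    """Hop to whichever monitored neighbor is closest to the destination."""
    path, current = [src], src
    while current != dst and len(path) < max_hops:
        current = min(closest_neighbors(current),
                      key=lambda n: dist(positions[n], positions[dst]))
        path.append(current)
    return path

print(route("A", "base"))  # ['A', 'C', 'base']
```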

For the detection system, Rajant's communication nodes, called BreadCrumbs, are connected to backpack sensors that detect radioactive material including plutonium and enriched uranium; the sensors are made by a company called Nucsafe. Team members wearing the sensors branch out and perform reconnaissance of an area. Data from each node hops back to a main computer, which builds a map showing the position of each node and its radiation data, while individual users can see a map of their pack's results on a wrist-worn display or a laptop. Rajant presented the newest version of the system at the 2009 Soldier Technology Conference in Florida last month.

Each BreadCrumb can communicate with a peer that is up to five miles away, says Glenn Booth, vice president of marketing for Rajant. He adds that the mobile network has enough bandwidth to transmit a video stream or VoIP, even with hundreds of nodes all moving in different directions. The company already sells the wireless technology to mining companies and the military, and a single BreadCrumb device costs up to about $5,000.

"Building a reliable network is difficult because of mutual interference between radio nodes," says Dipankar Raychaudhuri, who works on mobile mesh networks for vehicles at Rutgers. Networks work with a certain density of radios, he says, but "can fall apart if radios move out of range in uncontrolled settings."

"The problem with completely mobile networks is that there is no guarantee of connectivity," adds Nader Moayeri, who works on mesh and ad hoc networks for the National Institute of Standards and Technology. "One other challenge is the capacity of the network," says Moayeri. Normally, as each node moves, it has to recalculate the best routes for a message. "That's considerable overhead," he says. "It eats up bandwidth you could be using for sending actual data."

Another way to improve a mobile network is to use several radio transmitters. For example, a company called MeshDynamics has developed transmitters that use two radios to send and receive data instead of just one.

Monday, March 02, 2009

How to Share without Spilling the Beans

A new protocol aims to protect privacy while allowing organizations to share valuable information.

By Erica Naone

Credit: Technology Review

Last fall, two of Israel's leading political parties, Likud and Kadima, became embroiled in a dispute when, in a close primary race, it was alleged that some voters had illegally registered to cast their ballots twice. The parties struggled to find a way to resolve the dispute, since neither wanted to turn over its list of members to the other. Finally, the parties agreed to give their lists to the attorney general, who would compare them confidentially.

This sort of problem is increasingly encountered by large organizations, including government agencies and big businesses, says Andrew Yehuda Lindell, an assistant professor of computer science at Israel's Bar-Ilan University and chief cryptographer at Aladdin Knowledge Systems, in Petach Tikva, Israel. He also calls the solution devised by Likud and Kadima "outrageous," adding that handing over party-membership details to the government is "almost the same as revoking vote confidentiality for these citizens."

Lindell is one of a community of researchers studying ways to share this sort of information without exposing private details. Cryptographers have been working on solutions since the 1980s, and as more data is collected about individuals, Lindell says that it becomes increasingly important to find ways to protect data while also allowing it to be compared. Recently, he presented a cryptographic protocol that uses smart cards to solve the problem.

To use Lindell's new protocol, the first party ("Alice" in cryptography speak) would create a key with which both parties could encrypt their data. The key would be stored on a special kind of secure smart card. Alice would then hand over the smart card to the second party in the scenario (known as "Bob"), and both parties would use the key to encrypt their respective databases. Next, Alice sends her encrypted database to Bob.

The contents of Alice's encrypted database cannot be read by Bob, but he can see where it matches entries in the encrypted version of his own database. In this way, Bob can see what information both he and Alice share. For extra protection, Bob would only have a limited amount of time to use the secret key on the smart card, because it is deleted remotely by Alice, using a special messaging protocol.
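The matching step can be sketched with the smart card abstracted away: both parties apply the same keyed, deterministic function to every entry (HMAC below, standing in for the pseudorandom function the card would evaluate), so equal entries collide while everything else stays opaque. This illustrates the idea only; Lindell's protocol keeps the key inside the card and adds the remote-deletion step, both omitted here.

```python
# Keyed deterministic "encryption" makes equal entries comparable (sketch only;
# HMAC stands in for the smart card's internally held pseudorandom function).
import hashlib
import hmac
import os

key = os.urandom(32)            # created by Alice, carried to Bob on the card

def encrypt_list(entries, key):
    return {hmac.new(key, e.encode(), hashlib.sha256).hexdigest() for e in entries}

alice = {"voter_001", "voter_007", "voter_042"}
bob = {"voter_007", "voter_042", "voter_099"}

matches = encrypt_list(alice, key) & encrypt_list(bob, key)
print(f"{len(matches)} entries appear on both lists")  # 2
```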

Lindell says that, in tests, it took about nine minutes to compare 10,000 records. The same system can also be used to search a database without exposing either the database or the nature of the search.

Lindell says that his protocol can be mathematically proven to work efficiently and securely, but he admits that there is one weak spot. "I'm introducing another avenue of attack," he says, referring to the smart card. Bob could try to pull the secret key from the smart card in order to decrypt Alice's database and read its contents. However, Lindell notes that high-end smart cards have strong protections and can be designed to self-destruct if the chip is compromised. "Smart cards are not perfect," Lindell acknowledges, but he says that competing schemes have their own weaknesses.

By introducing a smart card, Lindell's system requires far fewer computing resources to protect people's private information, says Benny Pinkas, a professor of computer science at the University of Haifa, in Israel, who has also worked on the problem. "In my view, the trade-off is reasonable for all but the very most sensitive applications," he adds.

Ari Juels, chief scientist at RSA Laboratories, agrees that some sort of hardware is needed for this kind of information-sharing scheme. However, he is "somewhat skeptical" about the smart-card approach. For one thing, he says, the card essentially serves as a trusted third party, so it could be difficult to find a manufacturer that both organizations trust completely. Even then, "assuming that a smart card is secure against an individual or modestly funded organization may be reasonable," Juels says, "but not that it's secure against a highly resourced one, like a national-intelligence agency."

Michael Zimmer, an assistant professor at the University of Wisconsin-Milwaukee who studies privacy and surveillance, says that Lindell is working on an important problem: "There can be some great benefits to data mining and the comparison of databases, and if we can arrive at methods to do this in privacy-protecting ways, that's a good thing." But he believes that developing secure ways of sharing information might encourage organizations to share even more data, raising new privacy concerns.

Currently, Lindell's protocol can only be used to make certain types of comparisons, but he argues that it could still prove useful. "Let's give [organizations] only what they need, and, when we do have solutions already, let's at least start somewhere and limit what they could be learning," he says.