One way to think about what stream computing can do for your data is to think back to innovations we've seen in the past. If you look at the industrial revolution, for example, we got very good at making things: better and better hardware, better and better machines, helped along by inventions like the steam engine and, later, electricity. But what really broke manufacturing loose, and made it the extremely efficient capability we have today, was assembly-line technology: realizing that you can do things in a continuous way, with multiple steps along the path. And that's really what stream computing brings to you: multiple steps of processing along the path as the data is flowing through. This chart shows just that: data comes in element by element, flowing through a set of operations, like on an assembly line. Perhaps somebody is putting the wheels on a car, maybe somebody is bolting down the hood; those are individual operations. Here you may be doing filtering, you may be doing aggregation, you may be scoring against a model that's been built in BigInsights, but it's element by element, continuously passing through the infrastructure. **** The point here isn't to talk about this technology from a product perspective yet, but I like to frame velocity and note that these are real IBM examples today. **** In Streams, in the listening step we are actually processing each and every one of those elements. So whatever processing has to be done, or would have been done by putting the data into storage - into the database and so forth - we're doing on the wire.
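To make the assembly-line analogy concrete, here is a minimal sketch in plain Python (not actual InfoSphere Streams SPL; the operator names and values are made up for illustration) of a pipeline where each element flows through a filter step and an aggregation step as it arrives, rather than being landed in storage first:

```python
# Each operator is one "station" on the assembly line; tuples flow through
# element by element rather than in stored batches. (Illustrative only --
# real Streams applications are written in SPL, not Python.)

def source(readings):
    """Emit data element by element, like tuples arriving on a stream."""
    for r in readings:
        yield r

def filter_step(stream, min_value):
    """Drop elements below a threshold -- one station on the line."""
    for r in stream:
        if r >= min_value:
            yield r

def aggregate_step(stream, window_size):
    """Emit a running average over a sliding window of recent elements."""
    window = []
    for r in stream:
        window.append(r)
        if len(window) > window_size:
            window.pop(0)
        yield sum(window) / len(window)

# Wire the stations together: every element passes through each step as it arrives.
pipeline = aggregate_step(
    filter_step(source([1, 9, 3, 12, 8]), min_value=3),
    window_size=2,
)
results = list(pipeline)  # [9.0, 6.0, 7.5, 10.0]
```

Because the steps are chained generators, no intermediate results are materialized; each element is pulled through the whole pipeline one at a time, which is the continuous-processing idea behind the assembly-line analogy.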
Today, the data is (optionally) going to disk and made available for the rest of the back-end infrastructure, BI, and so forth, but the processing executes while the data is still on the wire. That gives you tremendous efficiencies for problems suited to streaming, since you aren't going through the extra steps of writing to disk, reading back out, and perhaps passing through multiple stages of processing; instead, you do it all in a continuous pipeline. Streams is all about analyzing data in motion, and I'm going to talk more about this in a moment, but when we talked about the velocity of data earlier in this presentation, you may have asked yourself: how can you analyze this data very, very quickly? In the IBM Big Data technology stack (again, which we'll talk about) we have an integrated technology for this called InfoSphere Streams. +CLICK+ +CLICK+ In this example, an IPDR, which is like a CDR (a call detail record) for the internet: being able to analyze half a million of these a second, over six billion of these a day, four petabytes a year of IPDRs, sustaining one gigabyte per second of throughput, is going to provide you with a lot of analytical power. And why would you want to analyze this kind of stuff? I like to call this data exhaust, and it falls under a hot Big Data topic called log analysis. If you had the ability to store and analyze this data, you could develop a corpus of information to build a more resilient network, make troubleshooting easier, and gain much more customer insight into what items customers browse, what they buy, and so on. Now, bringing the things you learn about your network into the analysis of these IPDRs as they hit the switch gives you the ability to figure out that things are about to turn bad before they do.
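As a rough illustration of what "processing on the wire" means for records like IPDRs, here is a small Python sketch. The record fields (`subscriber`, `bytes`) and the threshold are hypothetical, not an actual IPDR schema; the point is that each record is analyzed as it passes, keeping only small running state instead of landing every record on disk first:

```python
# Analyze each IPDR-like record as it arrives, maintaining per-subscriber
# running totals and flagging heavy users immediately -- no staging step,
# no database round trip. (Hypothetical fields, illustrative only.)
from collections import defaultdict

def analyze_on_the_wire(records, byte_threshold):
    """Keep per-subscriber byte totals and flag anyone exceeding the threshold."""
    totals = defaultdict(int)
    alerts = []
    for rec in records:  # element by element, as records hit the switch
        totals[rec["subscriber"]] += rec["bytes"]
        if totals[rec["subscriber"]] > byte_threshold:
            alerts.append(rec["subscriber"])  # act before anything is stored
    return dict(totals), alerts

records = [
    {"subscriber": "A", "bytes": 400},
    {"subscriber": "B", "bytes": 100},
    {"subscriber": "A", "bytes": 700},
]
totals, alerts = analyze_on_the_wire(records, byte_threshold=1000)
```

The state held here (a counter per subscriber) is tiny compared with the raw record stream, which is why this style of analysis can keep up with rates like the ones quoted above without ever touching disk.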
+CLICK+ Finally, consider that data which isn't stored to disk doesn't have to undergo retention policies. This is a tremendous opportunity for businesses to gain insight from the data without taking on the expense or the obligation of storing it for a specific retention period.