This document provides an overview and agenda for a presentation on creating a "Hello World" program with Cisco's Data in Motion (DMo) software. The presentation introduces DMo and how it manages and analyzes data at the edge, discusses how DMo represents a paradigm shift toward edge intelligence, and gives railway and utilities use-case examples. The document explains DMo's programming model of dynamic data definitions, patterns, conditions, and actions, and demonstrates how to set up a DMo instance and create timer and event rules that read a light sensor and control an LED based on the sensor readings.
5. Data in Motion and IoT
• The Internet of Things (IoT) is a computing concept that describes a future where everyday physical objects will be connected to the Internet and be able to identify themselves to other devices.
• Cisco Data in Motion (DMo) is a software technology that provides data management and first-order analysis at the edge.
• Cisco Data in Motion provides mechanisms to capture data and control flows within the network, translating data into information and ultimately into knowledge for use by higher-order applications within a system.
6. Paradigm Shift with Edge Intelligence
[Diagram: a unified platform combining Network, Compute, and Storage; a Cloud / Cloud Edge pipeline of Store, Analyze, Act, Notify stages.]
15. Data In Motion Model (details)
• Context: A sandbox for an application with a separate URP, allowing the creation and interaction of multiple data analysis operations.
• Dynamic Data Definitions (aka D3): A set of patterns, rules, and actions for a specific analysis task. Multiple D3s may exist within a single context and reference each other for compound or recursive analysis.
[Diagram: a Context containing multiple interlinked D3s.]
16. The D3 Model (details)
• Dynamic Data Definitions involve the relationship of three simple concepts:
• Pattern
• Condition
• Action
[Diagram: the D3 box]
• Pattern: Protocol Patterns
• Condition: Content (aka Payload); Parameters (output of operations)
• Action on Event (condition met): Call another D3 within the Context, Send to a Dynamic Data Stream, or issue a Dynamic Data Request
• Action on Timer: Call another D3 within the Context, Send to a Dynamic Data Stream, or issue a Dynamic Data Request
17. The D3 Model (details)
• Dynamic Data Definitions involve the relationship of three simple concepts:
• Pattern
• Condition
• Action
• Ultimately this breaks down into:
• Meta information
• Network definition
• Application to monitor
• Action(s) to take
[Diagram: the D3 structure]
• Meta (1): D3_Id, Context_ID, Processing Method (Timer, Cache)
• Network (0..1): Filter by protocol {TCP/IP, UDP}, Source/Dest IP, Source/Dest Port (multiple ANDed); Decode (variable A = first 8 bits, var B = next 16 bits, etc.)
• Application (0..1): Filter by protocol (e.g., HTTP) and field (e.g., content-type: json); Content condition, e.g., variable Temperature > 56
• Action (1 or more): Type Primitive (payload, header); Type Procedure (FetchData, GpsUpdate(), syslog); Type Timed (FetchData, GpsUpdate(), syslog)
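The Meta / Network / Application / Action breakdown above can be sketched as a nested structure. This is an illustrative sketch only: the field names, the `condition_met` helper, and the sample values mirror the slide's outline and are not the actual DMo wire format.

```python
# Illustrative sketch of a D3 rule as a nested dict.
# Field names follow the slide's outline; they are NOT the real DMo schema.
d3_rule = {
    "meta": {"d3_id": "d3-001", "context_id": "dmolab", "processing": "timer"},
    "network": {
        "filter": {"protocol": "udp", "src_ip": "10.0.0.5", "dst_port": 5000},
        "decode": [("A", 8), ("B", 16)],  # (variable name, bit width)
    },
    "application": {"protocol": "http", "content_type": "json"},
    "content": {"variable": "Temperature", "op": ">", "value": 56},
    "actions": [
        {"type": "primitive", "target": "payload"},
        {"type": "procedure", "call": "syslog"},
    ],
}

def condition_met(rule, reading):
    """Evaluate the rule's content condition against one sensor reading."""
    c = rule["content"]
    if c["op"] == ">":
        return reading[c["variable"]] > c["value"]
    return False

print(condition_met(d3_rule, {"Temperature": 60}))  # True: 60 > 56
```

When the content condition evaluates true, each entry in `actions` would fire, matching the slide's "Event (Condition Met)" branch.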
18. The D3 Model (details)
[Diagram: edge sensors feed a D3 (the same Meta / Network / Application / Action structure shown on slide 17); results flow on to the Cloud and Data Center.]
Rules can express:
• Predicates and Filters
• Data-to-Information conversion
• Summarization
• Pattern Matching
• Categorization & Classification
• Event Trigger analysis
• Notifications
• Putting it Together
19. www.slideshare.net/kartben/whats-new-at-eclipse-iot-eclipsecon-2014
Data in Motion API as an Open Source Project
• The Krikkit initiative originates from the Cisco Data in Motion project
• Promotes Data in Motion products and the proliferation of Data in Motion across the industry
• Maintains Cisco's industry leadership in IoT efforts
• IoT does not yet have many standards, and open source is a way to accelerate IoT innovation with Cisco products
• Krikkit is the public API for Data in Motion
»http://eclipse.org/proposals/technology.krikkit/
26. Login To DMo
Login Page
Point your browser to the IP address unique to your workstation: http://[your unique IP]:8000
Logging in requires the IP address of the DMo instance, the port number, a context name, and the associated password:
IP: 127.0.0.1
Port: 443
Context Name: dmolab
Context Password: dmo123
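The lab logs in through the web UI, but the same connection details could be assembled programmatically. A minimal sketch, assuming a hypothetical `/login` endpoint and JSON body layout (neither is documented in the source; only the host, port, context, and password come from the slide):

```python
import json
from urllib import request

# Connection details from the lab (slide 26).
DMO_HOST = "127.0.0.1"
DMO_PORT = 443
CONTEXT = "dmolab"
PASSWORD = "dmo123"

def build_login_request(host, port, context, password):
    """Assemble a login request. The '/login' path and the JSON body
    layout are illustrative assumptions, not the documented DMo API."""
    url = f"https://{host}:{port}/login"
    body = json.dumps({"context": context, "password": password}).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_login_request(DMO_HOST, DMO_PORT, CONTEXT, PASSWORD)
print(req.full_url)  # https://127.0.0.1:443/login
```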
27. Clean Start
An Empty Context
We need to make sure there are no pre-existing rules.
A Programmed Context
If your screen looks similar to the screen below, please click the Trash Can icon and delete the rules.
28. Create a Timer Rule
Polling a Sensor
Most real-life sensors are asynchronous; as a result, we need to create a timer rule that polls the sensors to retrieve the data.
A timer rule is a process that runs periodically (in units of milliseconds and above) and establishes a connection to the sensor.
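Conceptually, a timer rule behaves like a periodic polling loop. A minimal Python sketch of that behavior (the `poll_sensor` function and the fake sensor are illustrative stand-ins, not DMo code):

```python
import time

def poll_sensor(read_fn, period_ms, iterations):
    """Conceptual model of a DMo timer rule: call the sensor once every
    period_ms milliseconds (DMo timer units are milliseconds and above)."""
    readings = []
    for _ in range(iterations):
        readings.append(read_fn())
        time.sleep(period_ms / 1000.0)
    return readings

# Stand-in for a real light-sensor read (illustrative values).
fake_light = iter([120, 95, 80]).__next__
print(poll_sensor(fake_light, period_ms=10, iterations=3))  # [120, 95, 80]
```

In the lab the period and the sensor connection are configured in the DMo UI; this loop only models what the rule does, not how it is declared.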
29. Verifying the Timer Rule
The JSON Payload
If you want to see what the resulting JSON code looks like, press the 'see JSON' button.
30. Create an Event Rule
Turn LED ON
Now that we have set up a timer rule and are polling the sensors for data, we need to create an event rule that applies a filter to the data coming back from the sensor and takes action depending on the data value.
• Filter Data
• Turn LED ON [output Port0] when the light sensor [input Port1] is dimmed [Value < 100]
31. Create another Event Rule
Turn LED OFF
Now that we turn the LED ON when dimming the light on the sensor, we would like to turn the LED back OFF when the light sensor is lit.
• Filter Data
• Turn LED OFF [output Port0] when the light sensor [input Port1] is not dimmed [Value > 100]
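The two event rules together implement a single threshold filter on the light-sensor value. A minimal sketch of that logic (the function name and the way the threshold is passed are illustrative, not the DMo rule syntax):

```python
def led_command(light_value, threshold=100):
    """Event-rule logic from the lab: emit 1 (LED ON, output Port0) when
    the light sensor (input Port1) reads below the threshold, else 0."""
    return 1 if light_value < threshold else 0

print(led_command(42))   # 1 -> LED ON (sensor dimmed)
print(led_command(150))  # 0 -> LED OFF (sensor lit)
```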
32. Bonus Lab
• Set up a rule that turns the LED ON when the pressure sensor is pressed [Pressure Threshold > 10]
• Set up another rule that turns the LED OFF when the pressure sensor is released [Pressure Threshold < 10]
• Useful Information
• LED is on [Output Port 1]
• LED value 0 turns it OFF
• LED value 1 turns it ON
• Pressure Sensor is on [Input Port 1]
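The bonus-lab rules reduce to the same kind of threshold check as the light-sensor rules. A sketch of the expected behavior (the function is illustrative; the port mapping in the comment comes from the slide's "Useful Information"):

```python
def pressure_to_led(pressure, threshold=10):
    """Bonus-lab logic: LED (Output Port 1) gets value 1 (ON) while the
    pressure sensor (Input Port 1) reads above the threshold, else 0 (OFF)."""
    return 1 if pressure > threshold else 0

print(pressure_to_led(25))  # 1 -> LED ON (sensor pressed)
print(pressure_to_led(3))   # 0 -> LED OFF (sensor released)
```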
This new fog layer will create a paradigm shift in the network infrastructure. Today, businesses deploy three disparate devices for their networking, computing, and storage. Fog introduces a concept to combine all those devices into a single unified platform—instead of having to manage three things, companies will just worry about one.
Fog also shifts how data is processed. Today, data is first transmitted to the cloud and stored. From there, it’s analyzed and commands are sent to act upon that information, then operators are notified. Fog helps overcome the costly need to constantly move data around and allows analysis and notification to occur before the critical information is stored to meet compliance and regulation policies.
We believe this is all critical in accelerating the Internet of Things and today we’re excited to share with you our role in making this reality.
Whether it’s a passenger train in a bustling city or a freight train slithering through the mountainside, news of derailment is a tragic story. You may have heard about the fatal train accident in New York City’s Bronx or the recent incident in Philadelphia where a train hauling crude oil was dangling over a river. The US federal government has seen more oil spilled in rail incidents in 2013 than was spilled in the nearly four decades since it began collecting data. The demand for preventative measures is greater than ever.
Train derailment is typically due to equipment failure, specifically in the ball bearings of a wheel. Today, train operators have routine schedules to swap out wheels and engines without fully knowing if the equipment is worn beyond repair. Or, in worst-case scenarios, damaged equipment is not replaced in time to prevent failure and accidents.
In addition to performance, train operators face fierce competition from alternative transportation providers and must find ways to offer better amenities and services to retain and attract new passengers.
These are just a few of the concerns rail companies are hoping IoT and Cisco will address.
So if we go back to the examples we shared with you earlier: an 819 router sitting on a freight train can monitor the ball bearings and their remaining utility, letting you know if a bearing is overheating or has worn down to 35% of useful material. An alert can be sent to the train operator, notifying them to pull over at the next available station or to stop and repair the wheel.
UK Rail, $21B per year to operate
If oil companies are not stressing over potential spills from train derailments, they fear the damage and lost revenue from a major pipeline spill. In some parts of the world, oil pipelines stretch across thousands of kilometers, carrying hundreds of thousands of barrels of oil per day. Today, pipeline leaks are discovered days after the initial spill, and only because someone in a nearby community complains about a foul odor in the air.
Pipelines aren’t the only things suffering from undetected leaks. In recent news, a storage unit at a chemical plant spilled 7,500 gallons of toxic substance into the ground, leaving 300,000 West Virginia residents without usable water for days.
These are three of many examples we’ve heard from our customers, and we believe that they can overcome these challenges by connecting their trains, traffic lights, or pipeline sensors to the network. These companies need more than the ability to connect: they need a way to manage the terabytes of data and to send commands in response to critical alerts, without compromising the speed of those commands or adding significant cost to move the data around the network. This requires a new way of computing and storing data.
Cisco IOx offers a way to deploy data aggregation and other critical applications across those thousands of kilometers of oil pipelines. Sensors can monitor pressure measurements, flow rates, or video footage of the surrounding area. If pressure were to drop or if the video captures fluid pooling on the ground, commands can be sent right on the pipeline to slow down the pumping of oil and send an alert to dispatch the closest maintenance crew.
Manufacturing plants use a lot of energy and, when they go above a certain utilization rate, they’re charged more per unit of energy. So if they can figure out how to even out their usage to avoid spikes, they can save money.
Right now, most manufacturers have a separate IT set-up and a separate network for the manufacturing plant versus headquarters. To shave those energy peaks you need to know a few things. First, you need to know what’s going to be built when. That information comes from the “Master Execution Scheduler” which is kept on the proprietary manufacturing network. But you also want to know what’s been committed to customers so you don’t save money on energy yet drive away customers in the process. That information is in your ERP system on your corporate network. And then you want to know how changing the schedule might affect labor costs, so you don’t lose all the money you saved on energy, making the whole exercise pointless. For that, you need information from your HR system, also on your corporate network. Then you need to analyze the information.
Once you’ve brought all the right systems together, you can build an application with thresholds and policies that alert operators to an approaching peak and show gaps in the schedule—times they could push the production load to. Or they can shift production to another plant with more capacity. But that requires adjusting supply chain, MRP, and the factory build plan to compensate without impacting customer commitments or desired inventory levels. Or they can check the power co-generation system to see if they can keep production high but use co-gen energy to avoid the peak.
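The alerting step described above amounts to a threshold check over a load forecast. A minimal sketch of such a check (all numbers, the function name, and the 90% alert margin are illustrative assumptions, not figures from the source):

```python
def peak_alert(load_forecast_kw, limit_kw, margin=0.9):
    """Flag forecast hours whose load approaches the billed peak limit.
    An hour is flagged when it reaches margin * limit_kw (assumed 90%)."""
    return [hour for hour, kw in enumerate(load_forecast_kw)
            if kw >= margin * limit_kw]

forecast = [400, 520, 610, 580, 450]       # hypothetical hourly kW
print(peak_alert(forecast, limit_kw=650))  # [2] -> hour 2 nears the peak
```

Flagged hours could then be matched against gaps in the Master Execution Scheduler to decide where production load can be shifted.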
But something interesting happens, once you’ve created your killer app….