This document discusses how to capture, analyze, and react to IoT sensor data in real-time. It notes that the amount of IoT data will grow exponentially in coming years, but most data is never analyzed or used. It also explains that the value of most IoT data decays rapidly. The document then provides examples of new low-cost IoT sensors and discusses MQTT as a lightweight protocol for transmitting sensor data. It outlines how to use Apache Spark and machine/deep learning on historical and streaming data to build models. Finally, it discusses challenges like the computational complexity of neural networks and envisions applications of connected vehicles.
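As a taste of the MQTT discussion, here is a minimal sketch of packaging one sensor reading as a compact (topic, JSON payload) pair ready to hand to an MQTT client; the topic layout and field names are illustrative assumptions, not part of the MQTT standard:

```python
import json
import time

def make_reading(device_id, temp_c):
    """Build an MQTT-style (topic, payload) pair for one sensor reading.

    The topic hierarchy and JSON field names below are hypothetical
    conventions, chosen only to keep the payload small and readable.
    """
    topic = "sensors/%s/temperature" % device_id
    payload = json.dumps({
        "device": device_id,
        "temp_c": round(temp_c, 2),     # trim precision to shrink the message
        "ts": int(time.time()),         # epoch seconds, compact timestamp
    })
    return topic, payload

topic, payload = make_reading("rpi-001", 21.732)
print(topic)  # sensors/rpi-001/temperature
```

An actual device would pass `topic` and `payload` to an MQTT client's publish call; the framing here is just the data-preparation step.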
Keynote: Artificial Intelligence Methods for Time Series Forecasting and Classification of Real-Time IoT Sensor Data Streams, Romeo Kienzler, Chief Data Scientist - IBM Watson IoT WW, IBM Academy of Technology
Supercharging Data Performance for Real-Time Data Analysis - Ryft
The velocity and volume of data are growing faster than ever before, and companies are looking for new methods to speed their data analytics. Using an innovative FPGA-based architecture, the Ryft ONE supercharges data analytics and provides you more value from your data.
Changing the Way Viacom Looks at Video Performance with Mark Cohen and Michae... - Databricks
Video is everything at Viacom. They build their own video players on iOS, Android and web platforms, and they have to know how those players are performing so they track critical metrics in near real-time with Apache Kafka, Spark and the Databricks platform.
In this session, Viacom will share how a quick proof of concept turned into a system that is giving them real insights into their video player performance. They will also discuss investigating platforms like Druid for fast slicing and dicing of data for business-oriented users.
Key takeaways:
— As engineers, we should work to drive value through technology, even at a company that may not be tech-first.
— The data you collect can be a distraction, so create focus.
— Different users require different interfaces into the same data. We’ll talk about how Viacom made that happen.
Deep learning is not just hype: it outperforms state-of-the-art ML algorithms, one by one. In this talk we will show how deep learning can be used to detect anomalies in IoT sensor data streams at high speed, using DeepLearning4J on top of different big data engines like Apache Spark and Apache Flink. One drawback of deep learning is that it normally requires a very large labeled training data set. Key to this talk is the absence of any large training corpus, since we are using unsupervised machine learning, a domain that current deep learning research treats step-motherly. This is particularly interesting because we can show how unsupervised machine learning can be used in conjunction with deep learning: no labeled data set is necessary. As we can see in the demo, LSTM networks can learn very complex system behavior, in this case data coming from a physical model simulating bearing vibration. We are able to detect anomalies and predict breaking bearings with 10-fold confidence. All examples and all code will be made publicly available as open source; only open-source components are used.
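The unsupervised idea in this abstract (learn normal behavior from unlabeled data, then flag deviations) can be sketched with a far simpler statistical baseline than the LSTM networks the talk uses; the window sizes, sample values, and 3-sigma threshold below are illustrative assumptions, not the talk's method:

```python
import math

def fit_normal(window_means):
    """Learn 'normal' behavior from unlabeled data: mean and std
    of per-window averages (a stand-in for a trained LSTM model)."""
    n = len(window_means)
    mu = sum(window_means) / n
    var = sum((x - mu) ** 2 for x in window_means) / n
    return mu, math.sqrt(var)

def is_anomaly(window, mu, sigma, k=3.0):
    """Flag a window whose average deviates more than k sigma from normal."""
    avg = sum(window) / len(window)
    return abs(avg - mu) > k * sigma

# "Normal" vibration windows (no labels needed) and one faulty-looking window.
normal = [[1.0, 1.1, 0.9], [1.2, 1.0, 1.1], [0.9, 0.95, 0.85]]
mu, sigma = fit_normal([sum(w) / len(w) for w in normal])
print(is_anomaly([4.0, 3.8, 4.2], mu, sigma))  # True: far outside the learned band
print(is_anomaly([1.0, 1.0, 1.0], mu, sigma))  # False: consistent with normal
```

An LSTM autoencoder replaces the mean/sigma pair with a learned model and uses reconstruction error as the deviation score, but the detection logic has the same shape.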
Best Practices for Engineering Production-Ready Software with Apache Spark - Databricks
Notebooks are a great tool for Big Data. They have drastically changed the way scientists and engineers develop and share ideas. However, most world-class Spark products cannot be easily engineered, tested and deployed just by modifying or combining notebooks. Taking a prototype to production with high quality typically involves proper software engineering.
How Nielsen Utilized Databricks for Large-Scale Research and Development with... - Spark Summit
Large-scale testing of new data products or enhancements to existing products in a research and development environment can be a technical challenge for data scientists. In some cases, tools available to data scientists lack production-level capacity, whereas other tools do not provide the algorithms needed to run the methodology. At Nielsen, the Databricks platform provided a solution to both of these challenges. This breakout session will cover a specific Nielsen business case where two methodology enhancements were developed and tested at large-scale using the Databricks platform. Development and large-scale testing of these enhancements would not have been possible using standard database tools.
This Time, It’s Personal: Why Security and the IoT Is Different - Justin Grammens
Unfortunately, in recent years we’ve seen a host of incidents where IoT devices were compromised. Some have been minor, with little coverage, while others like Mirai affected millions around the globe and produced serious economic impact. When attacks like this occur, they not only erode the trust of the users of these devices, but cause those who are looking to adopt this new technology to pause. With any new technology, security must be treated as a first-class citizen, and when we are talking about IoT, the data is personal. As the IoT matures, I’ll share some mistakes that have happened in the past, where we are today, and how I believe we are now finally seeing a maturity of devices that are remotely updated, fault tolerant, and secure. When it comes to building an IoT device, security is personal.
JC Martin
Distinguished Architect
eBay
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
Discover the world of IoT and how it's shaping our world, with a hands-on approach. Affordable, internet-connected devices are becoming ubiquitous - with the rise of Arduino, Raspberry Pi, and the Particle Photon, it's now possible to quickly prototype and design an internet-ready device that monitors weather patterns, responds to movement, or collects and transmits data to the cloud for under $100. In this full-day workshop, we'll begin with a hands-on introduction to IoT and build IoT devices. With a Raspberry Pi 2 kit running Windows 10 IoT Core, we’ll build a simple temperature sensor, collect ambient temperature readings, and stream the data to an Azure IoT Hub. Once the data is in Azure, we’ll analyze it with Azure Stream Analytics and ship it to an Azure SQL Database. Finally, we’ll report on the data and build dashboards of our temperature readings using Power BI.
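The aggregation step in this pipeline can be sketched in plain Python; the 60-second tumbling window loosely mirrors what an Azure Stream Analytics query over the temperature stream would compute, and the timestamps and values here are made up for illustration:

```python
from collections import defaultdict

def tumbling_avg(readings, window_s=60):
    """Average (timestamp, temperature) readings per tumbling window,
    akin to a streaming GROUP BY over fixed, non-overlapping windows.
    Timestamps are epoch seconds; window boundaries are aligned to 0."""
    buckets = defaultdict(list)
    for ts, temp in readings:
        buckets[ts // window_s * window_s].append(temp)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Four readings spanning two one-minute windows:
readings = [(0, 20.0), (30, 22.0), (65, 25.0), (90, 27.0)]
print(tumbling_avg(readings))  # {0: 21.0, 60: 26.0}
```

In the workshop's actual pipeline this rollup would run inside Stream Analytics before the results land in the SQL database for Power BI.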
Scalable Open-Source IoT Solutions on Microsoft Azure - Maxim Ivannikov
Scalable Open-Source IoT Solutions from gateways to the Cloud using DeviceHive, Ubuntu Snappy Core and Microsoft Azure.
The presentation was used during the NY Open-Source IoT Solutions Summit on November 12, 2015.
Spark Summit Europe 2016 Keynote - Databricks CEO - Databricks
The machine learning algorithm itself is rarely the main barrier to building AI applications. Instead, the real culprit is the set of complex systems that prepares large-scale training and test data for the ML algorithms.
Apache Spark is a huge leap forward in democratizing AI. However, it does not solve all the problems. Databricks CEO Ali Ghodsi explains how Databricks democratizes AI by making it easier to build end-to-end machine learning pipelines with Apache Spark.
Consolidating MLOps at One of Europe’s Biggest Airports - Databricks
At Schiphol airport we run a lot of mission-critical machine learning models in production, ranging from models that predict passenger flow to computer vision models that analyze what is happening around the aircraft. Especially now, in times of Covid, it is paramount for us to be able to iterate quickly on these models: implementing new features, retraining them to match the new dynamics, and above all monitoring them actively to see if they still fit the current state of affairs.
To meet those needs we rely on MLflow, but we have also integrated it with many of our other systems: we have written Airflow operators for MLflow to ease the retraining of our models, integrated MLflow deeply with our CI pipelines, and connected it to our model-monitoring tooling.
In this talk we will take you through the way we rely on MLflow and how that enables us to release (sometimes) multiple versions of a model per week in a controlled fashion. With this set-up we achieve the same benefits and speed as a traditional software CI pipeline.
Hai Tao at AI Frontiers: Deep Learning For Embedded Vision System - AI Frontiers
This presentation will demonstrate our recent progress in developing advanced computer vision algorithms using embedded platforms for video-based face recognition, vehicle attribute analysis, urban management event detection, and high-density crowd counting. These algorithms combine the traditional CV approach with recent advances in deep learning to make high-performance computer vision systems practical and enable products in several vertical markets, including intelligent transportation systems (ITS), business intelligence (BI), and smart video surveillance. We will demonstrate algorithm design and optimization schemes for several recently available processors from Movidius, Nvidia, and ARM.
Building an intelligent big data application in 30 minutes - Claudiu Barbura
Strata Barcelona presentation slides: a live demo of building an intelligent big data application from a web console. The tools and APIs behind it are built on top of Spark, Spark SQL/Shark, Tachyon, Mesos, Cassandra, SolrCloud, and iPython, and include an ELT pipeline (ingestion and transformation), a data warehouse explorer, export to NoSQL and generated APIs, export to SolrCloud and generated APIs, predictive model building, training and publishing, a dashboard UI, and monitoring and instrumentation.
Data-Driven Transformation: Leveraging Big Data at Showtime with Apache Spark - Databricks
Interested in learning how Showtime is leveraging the power of Spark to transform a traditional premium cable network into a data-savvy analytical competitor? The growth in our over-the-top (OTT) streaming subscription business has led to an abundance of user-level data not previously available. To capitalize on this opportunity, we have been building and evolving our unified platform which allows data scientists and business analysts to tap into this rich behavioral data to support our business goals. We will share how our small team of data scientists is creating meaningful features which capture the nuanced relationships between users and content; productionizing machine learning models; and leveraging MLflow to optimize the runtime of our pipelines, track the accuracy of our models, and log the quality of our data over time. From data wrangling and exploration to machine learning and automation, we are augmenting our data supply chain by constantly rolling out new capabilities and analytical products to help the organization better understand our subscribers, our content, and our path forward to a data-driven future.
Authors: Josh McNutt, Keria Bermudez-Hernandez
Detecting Financial Fraud at Scale with Machine Learning - Databricks
Detecting fraudulent patterns at scale is a challenge given the massive amounts of data to sift through, the complexity of the constantly evolving techniques, and the very small number of actual examples of fraudulent behavior. In finance, added security concerns and the importance of explaining how fraudulent behavior was identified further increases the difficulty of the task. Legacy systems rely on rule-based detection that is difficult to implement and run at scale. The resulting code is very complex and brittle, making it difficult to update to keep up with new threats.
In this talk, we will go over how to convert a rule-based financial fraud detection program to use machine learning on Spark as part of a scalable, modular solution. We will examine how to identify appropriate features and labels and how to create a feedback loop that will allow the model to evolve and improve over time. We will also look at how MLflow may be leveraged throughout this effort for experiment tracking and model deployment.
Specifically, we will discuss:
-How to create a fraud-detection data pipeline
-How to leverage a framework for building features from large datasets
-How to create modular code to re-use and maintain new machine learning models
-How to choose appropriate models and algorithms for a given fraud-detection problem
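As a rough illustration of the first two bullets, here is a minimal sketch of turning raw transactions into model features and scoring them; the field names, weights, and bias are hypothetical, and a real pipeline would compute features with Spark and learn the weights from labeled examples:

```python
import math

def features(txn, history):
    """Turn a raw transaction plus the account's history into features.
    Field names ('ts', 'amount', 'country', ...) are illustrative only."""
    recent = [t for t in history if txn["ts"] - t["ts"] < 3600]
    return [
        txn["amount"],
        1.0 if txn["country"] != txn["home_country"] else 0.0,
        float(len(recent)),  # velocity: transactions in the last hour
    ]

def fraud_score(x, weights, bias):
    """Logistic score in (0, 1); in practice weights come from training."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

txn = {"ts": 7200, "amount": 950.0, "country": "BR", "home_country": "US"}
hist = [{"ts": 7000}, {"ts": 6900}, {"ts": 100}]
x = features(txn, hist)
print(x)  # [950.0, 1.0, 2.0]
print(fraud_score(x, weights=[0.002, 1.5, 0.8], bias=-3.0) > 0.5)  # True
```

The feedback loop described in the talk would periodically retrain the weights on newly confirmed fraud labels, replacing the hard-coded values above.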
See time series forecasting and automatic log data categorization in action firsthand. Elastic machine learning features have grown into a powerful tool that automates notifications for anomalies and simplifies tasks like pre-configuring NGINX log analysis at scale. Learn how to put them to work on your data.
Humanlike machines have fascinated humans since ancient times. Modern robots began to take shape with the industrial revolution. In the 20th century, robots were mostly industrial machines found in factories, such as car plants.
Today, robots can have sensors and vision; they can hear and understand. They can connect to the cloud for more information. However, we are still in the early stages of robotics, and robots have a long way to go to become useful as ubiquitous, general-purpose devices.
Accelerating analytics on the Sensor and IoT Data. Keshav Murthy
Informix Warehouse Accelerator (IWA) has dramatically improved traditional data warehousing performance. Now, IWA accelerates analytics over sensor data stored in relational and time-series form.
Trends in Sensors, Wearable Devices and IoT - Walt Maclay
Today, it is all about being connected and staying connected. Low-cost sensors are revolutionizing medical, home health and wearable devices, as well as other internet of things gadgets. Walt Maclay explains how these smart devices are benefiting from the ongoing development of low-cost high-volume sensors. Whether it is temperature, pressure, vibration, acceleration, flow, sound or vision, it is all about sensors. They are critical to many advances and to the rapid innovation we are seeing today. In this video, Walt Maclay presents the latest trends and challenges he sees for sensors, wearable devices and IoT.
Machine Intelligence Applications for IoT Slam Dec 1st 2016Sudha Jamthe
IoT makes things smart, while AI makes them intelligent. Come hear about Machine Intelligence, the power behind AI that drives business applications in all realms of life and industry. It ranges from looking for pathogens in human cells, to making robots and drones autonomous, to predicting demand or parts failure in factories. Machine Intelligence applications can drive business impact by harnessing the underlying technologies of Machine Learning, Computer Vision, Facial Recognition, and Speech to Text. Come find out how to take Machine Intelligence from the realm of quants and Data Scientists and build applications for your business.
Brooks Instrument manufactures an array of flow, pressure, vacuum and level products for dozens of industries, from pharmaceuticals, oil and gas, fuel cell research and chemicals, to medical devices, analytical instrumentation, and semiconductor manufacturing.
Sensing as-a-Service - The New Internet of Things (IoT) Business Model - Dr. Mazlan Abbas
Here's a chance to create new business models for the Internet of Things. There are tons of benefits to gain from IoT and sensors. It's only a matter of time until we harness the creativity of IoT application developers. Create a healthy ecosystem so that everyone benefits.
These slides use concepts from my (Jeff Funk) course entitled analyzing hi-tech opportunities to analyze the increasing economic feasibility of wearable electronics in health care applications. Rapid improvements in sensors, integrated circuits, transceivers, displays, mobile phones, and wireless networks are causing the cost to fall and the performance to rise for wearable applications. These slides analyze hand, head, and body worn electronics in detail including smart watches, wrist and finger devices, smart glasses and textiles, patches, and foot and arm wear. They also analyze a wide variety of sensors for collecting healthcare information including inertial, bio, chemical, and haptic sensors.
Pressure Handbook for Industrial Process Measurement and Control - Miller Energy, Inc.
Illustrated handbook provides clear explanation of pressure concepts and measurement. Various sensor technologies are explained and compared. Good quick reference.
Autonomous vehicles: becoming economically feasible through improvements in l... - Jeffrey Funk
These slides use concepts from my (Jeff Funk) course entitled analyzing hi-tech opportunities to analyze how autonomous vehicles are becoming economically feasible through improvements in lasers, microelectronic mechanical systems (MEMS), integrated circuits (ICs), and other components. Although the cost of the Google Car is currently about 150,000 USD, 30% annual improvements in lasers, MEMS, and ICs will make these vehicles economically feasible for a broad number of users in the next ten years. A key issue is when certain lanes, roads, or even entire highway systems are restricted to automated vehicles. This would enable collision avoidance to rely more on between-vehicle communications, which would further reduce the cost of automated vehicles, stimulate diffusion, reduce transportation time, and increase fuel efficiency.
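The feasibility claim can be made concrete with a quick compounding sketch, treating a 30% annual improvement as a 30% annual cost decline (a simplifying assumption, since improvements do not map one-to-one onto price):

```python
def projected_cost(initial_usd, annual_decline, years):
    """Compound an annual cost decline: cost_n = cost_0 * (1 - r)^n."""
    return initial_usd * (1 - annual_decline) ** years

# 150,000 USD system, 30% annual decline, 10 years:
cost = projected_cost(150_000, 0.30, 10)
print(round(cost))  # 4237
```

Under those assumptions the sensor suite drops from 150,000 USD to roughly 4,000 USD within a decade, which is the order of magnitude at which mass-market adoption becomes plausible.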
Everything is changing, from health care to the automotive markets, not to mention financial markets and every type of engineering: products are no longer created by an individual or, at best, a team, but are developed and perfected using AI and hundreds of computers. And even AI is something we can no longer run on a single computer, no matter how powerful it is. What drives everything today is HPC, or High-Performance Computing, heavily linked to AI. In this session we will discuss AI, HPC, the IBM Power architecture, and how they can help develop better healthcare, better automobiles, better financials, and better everything that runs on them.
Smart Camera for Non-Intrusive Heart Detection - itaistam
In recent years there has been a rise in vision-based applications, from autonomous driving to smart cameras that perfect the picture based on the scene. These applications have also driven the development of AI accelerators that can efficiently provide the computation mobile devices need. Nevertheless, compared to future devices, this is just a small glimpse. In this talk we will discuss some of the capabilities of future smart cameras, which today can automatically choose an interesting scene or distinguish between known people and strangers. These cameras can also be used to detect physical health parameters, like heart rate, for reliable and non-intrusive monitoring of babies' sleep. Some of these capabilities were previously available only via cloud-based computation; now they can run at the node level (privacy being the main, but not the only, benefit of this advancement).
The von Neumann Memory Barrier and Computer Architectures for the 21st Century (Perry Lea)
Computer architecture and the von Neumann memory barrier. New computer architectures for the 21st century: neuromorphic computing, processing in memory, and dataflow computing. Applications to machine learning, AI, image processing, and other use cases. Future Technology Conference 2018, Vancouver BC.
In this video from the HPC User Forum in Santa Fe, Yoonho Park from IBM presents: IBM Datacentric Servers & OpenPOWER.
"Big data analytics, machine learning and deep learning are among the most rapidly growing workloads in the data center. These workloads have the compute performance requirements of traditional technical computing or high performance computing, coupled with a much larger volume and velocity of data."
Watch the video: http://wp.me/p3RLHQ-gJv
Learn more: https://openpowerfoundation.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Presentation at IoT World, May 2016 in Santa Clara, CA. Session "Manage your IoT Sensor Data at the Edge! Control your IoT sensor data at the most appropriate spot" (Thursday, 12 May 2016. IoT & the Cloud Track)
IoT Slam Keynote: Harnessing the Flood of Data with Heterogeneous Computing a... (Ryft)
This presentation was delivered as the closing keynote for the 2015 IoT Slam virtual conference. During the presentation, Ryft VP of Engineering, Pat McGarry, took a close look at how the IoT revolution is changing data analytics and driving the move of data analysis to the network’s edge where the data is being created. - See more at: http://www.ryft.com/blog/2015-iot-slam-keynote-harnessing-flood-of-iot-data-with-heterogenenous-computing-at-the-edge#sthash.x1Anoapb.dpuf
CIF16: Building the Superfluid Cloud with Unikernels (Simon Kuenzer, NEC Europe) - The Linux Foundation
The confluence of a number of relatively recent trends including the development of virtualization technologies, the deployment of micro datacenters at PoPs, and the availability of microservers, opens up the possibility of evolving the cloud, and the network it is connected to, towards a superfluid cloud: a model where parties other than infrastructure owners can quickly deploy and migrate virtualized services throughout the network (in the core, at aggregation points and at the edge), enabling a number of novel use cases including virtualized CPEs and on-the-fly services, among others. Towards this goal, we identify a number of required mechanisms and present early evaluation results of their implementation.
On an inexpensive commodity server, we are able to concurrently run up to 10,000 specialized virtual machines (based on unikernels), instantiate a VM in as little as 10 milliseconds, and migrate it in under 100 milliseconds.
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions (Red_Hat_Storage)
At Red Hat Storage Day Minneapolis on 4/12/16, Intel's Dan Ferber presented on Intel storage components, benchmarks, and contributions as they relate to Ceph.
Analyzing data and driving business decisions at the edge of the Internet of Things (IoT) is rapidly becoming critical for any IoT solution, and real-time analysis of the data as it streams in is vital to many business processes. Informix, as the data management system of choice for IoT solutions, delivers a significant value proposition for businesses across all industry segments looking to deploy IoT solutions. And with Apache Edgent (Quarks) integration, you get real-time analysis of streaming IoT data.
IBM Watson Technical Deep Dive, Swiss Group for Artificial Intelligence and Co... (Romeo Kienzler)
We are transitioning from the programmatic to the cognitive computing era. IBM Deep Blue won against the world chess champion in 1996. IBM Watson won against the two all-time champions of the famous US quiz show "Jeopardy!" in 2011. Since then, the press has firmly established the term "Cognitive Computing" with the public. I will explain how IBM Watson works internally, starting with algebraic text extraction. DeepQA is the heart of IBM Watson, and I will explain each component of this pipeline: the linguistic preprocessor, hypothesis generation, hypothesis and evidence scoring, final matching based on supervised learning, and confidence estimation. Finally, I conclude with an overview of actual use cases and outline the roadmap of future work.
Hierarchical Digital Twin of a Naval Power System (Kerry Sado)
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
CFD Simulation of By-pass Flow in a HRSG module (R&R Consult)
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL) - MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of the Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Cosmetic shop management system project report.pdf (Kamal Acharya)
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The project includes various programs to perform the tasks mentioned above.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should handle the automation of the shop's general workflow and administration processes. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It should help employees quickly identify the cosmetic products that have reached their minimum quantity and keep track of the expiry date of each cosmetic product. It should also help employees find the rack number in which a product is placed. Altogether, it is a faster and more efficient way of working.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control over serial and TCP protocols.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy configuration using DIP switches.
1. How to capture, analyse and react to IoT-generated
sensor data in real time
Romeo Kienzler, Chief Data Scientist, IBM Watson IoT, WW
2. Why IoT (now) ?
• 15 Billion connected devices in 2015
• 40 Billion connected devices in 2020
• World population 7.4 Billion in 2016
3. Why IoT (now) ?
• In 2016, 90% of all data generated worldwide is at the
edge, on IoT devices
• This data is never
• captured
• analysed
• acted on
4. Why IoT (now) ?
• 60% of data loses its value within milliseconds
of being generated
• New generation of Sensors
• low cost
• low energy consumption
• low data transmission cost
• long-life batteries / self-sustaining
5. • Energy consumption ~0.33 µA
• Cost 5 US$
• 600 mAh
• 70 days
• 1 measurement /h
• Cost 2 US$
• Energy consumption
• Standby 3µA
• Rx 30 mA
• Tx 53 mA
• Range 800m
• Cost 50 US$
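The lifetime figures above follow from a simple duty-cycle estimate: battery capacity divided by average current draw. A minimal sketch (the 600 mAh capacity and the 3 µA standby / 53 mA Tx currents are taken from the slide; the one-second transmission per hourly measurement is an assumed duty cycle for illustration):

```python
def battery_life_days(capacity_mah, standby_ua, active_ma, active_s_per_hour):
    """Duty-cycle battery-life estimate: lifetime = capacity / average current.

    The radio sits in standby almost all the time and only draws the
    active (Tx) current for a short window each hour.
    """
    duty = active_s_per_hour / 3600.0                  # fraction of time active
    avg_ma = (standby_ua / 1000.0) * (1.0 - duty) + active_ma * duty
    return capacity_mah / avg_ma / 24.0

# Slide figures (600 mAh, 3 uA standby, 53 mA Tx) with an assumed
# one-second transmission per hourly measurement:
days = battery_life_days(600, 3, 53, 1)
```

Longer transmissions or higher measurement rates shift the average current, and hence the lifetime, dramatically, which is why low-power radios spend almost all their time in standby.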
6. Why IoT (now) ?
• If a tree falls in the forest, we will hear it
• IBM announced it will invest 3 billion US$
• Opened the IBM Watson IoT Global HQ in Munich, Germany
• As of 2015:
• 4000 IoT clients
• 170 countries
• 1400 partners
• 750 IoT patents
• 1000 employees in the HQ
7. IBM and Siemens
• IBM partners with Siemens Building
Technologies Division to maximise the
potential of connected buildings
• through the data they create (private side note)
8. IBM and KONE
• IBM partners with KONE on Cloud-based
Embedded intelligence in elevators and
escalators
11. How 2 IoT?
What is MQTT?
• "Lightweight" telemetry protocol
• Publish-subscribe protocol via a message broker
• Invented by IBM in 1999
• An OASIS standard since 2013
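To make the publish-subscribe model concrete, here is a minimal sketch of how a sensor reading might be packaged for MQTT. The topic scheme, device id, and JSON payload shape are illustrative assumptions, not part of the standard; the actual transport would use an MQTT client library such as paho-mqtt, shown only as a comment:

```python
import json
import time

# MQTT is publish-subscribe: devices publish readings to hierarchical
# topics on a broker, and consumers subscribe with wildcards such as
# "sensors/+/temperature".

def make_message(device_id, sensor, value):
    topic = f"sensors/{device_id}/{sensor}"            # hierarchical topic
    payload = json.dumps({"value": value, "ts": int(time.time())})
    return topic, payload

topic, payload = make_message("device-42", "temperature", 21.5)

# With a real MQTT client such as paho-mqtt, publishing would look like:
#   client = paho.mqtt.client.Client()
#   client.connect("broker.example.com")
#   client.publish(topic, payload, qos=1)
```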
19. Apache Spark
the state of the art in cloud-based analytics
[architecture diagram: "YOUR STREAMS" flow into the stack below]
• Libraries: Streaming, SQL, MLlib, GraphX, BlinkDB, R, MLBase
• Execution Layer (Spark Executor, YARN, Platform Symphony)
• Storage Layer (OpenStack SWIFT / Hadoop HDFS / IBM GPFS)
• Hardware Layer (Bare Metal High Performance Cluster: Intel Xeon E7-4850 v2, 48 cores, 3 TB RAM, 72 GB HDD, 10 Gbps)
22. Online vs. historic
Online:
• Pros
• low storage costs
• real-time model update
• Cons
• limited algorithm support
• limited software support
• no algorithmic improvement
• compute power must keep pace with the data rate
Historic:
• Pros
• all algorithms available
• abundance of software
• model re-scoring / re-parameterisation (algorithmic improvement)
• batch processing
• Cons
• high storage costs
• batch-only model update
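The trade-off above can be made concrete with a toy model: a streaming (online) estimator updated per event versus a batch recomputation over stored history. The sketch below uses Welford's online mean/variance algorithm purely as an illustration; it is not from the talk:

```python
class OnlineStats:
    """Welford's streaming mean/variance: O(1) memory, updated per event.

    This illustrates the 'online' column: the model is updated as each
    reading arrives, so the raw data need not be stored.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n else 0.0

# The 'historic' alternative keeps every reading and recomputes in batch:
def batch_stats(readings):
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / n
    return mean, var

stream = [20.1, 20.4, 19.8, 21.0, 20.6]
online = OnlineStats()
for x in stream:
    online.update(x)
mean_b, var_b = batch_stats(stream)
# Both approaches agree on the result; they differ in storage and latency.
```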
38. • Outperformed traditional methods, such as
• cumulative sum (CUSUM)
• exponentially weighted moving average (EWMA)
• Hidden Markov Models (HMM)
• Learned what "normal" is
• Raised an error if a time-series pattern hadn't been
seen before
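For comparison, one of the traditional baselines named above, EWMA, can be sketched in a few lines. The smoothing factor and threshold below are illustrative choices, not values from the talk:

```python
def ewma_detector(readings, alpha=0.3, threshold=3.0):
    """Flag readings that deviate from an exponentially weighted moving
    average by more than `threshold` times a running deviation estimate.

    alpha and threshold are illustrative, untuned parameters.
    """
    ewma = readings[0]
    ewmd = 0.0                       # running absolute-deviation estimate
    anomalies = []
    for i, x in enumerate(readings[1:], start=1):
        dev = abs(x - ewma)
        if ewmd > 0 and dev > threshold * ewmd:
            anomalies.append(i)      # record index of the anomalous reading
        ewma = alpha * x + (1 - alpha) * ewma
        ewmd = alpha * dev + (1 - alpha) * ewmd
    return anomalies

data = [20.0, 20.2, 19.9, 20.1, 35.0, 20.0, 20.3]
spikes = ewma_detector(data)         # flags the 35.0 spike at index 4
```

Unlike the learned models from the talk, such a detector only catches level shifts; it cannot recognise unfamiliar temporal patterns, which is what the neural approach adds.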
39. Learning of a program
An LSTM network is Turing complete
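For readers unfamiliar with LSTMs, the sketch below steps one scalar LSTM cell through a short input sequence. The weights are arbitrary placeholders (a real network learns them); the point is only to show the gating mechanism that gives LSTMs their program-like memory:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell (illustrative, untrained weights).

    Gates: forget f, input i, candidate g, output o. The cell state c
    carries long-range information; the gates decide when to keep,
    overwrite, or expose it.
    """
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Arbitrary placeholder weights; a real network learns these.
weights = {k: 0.5 for k in
           ["wf", "uf", "bf", "wi", "ui", "bi",
            "wg", "ug", "bg", "wo", "uo", "bo"]}
h, c = 0.0, 0.0
for x in [0.1, 0.7, -0.3]:           # a tiny sensor-reading sequence
    h, c = lstm_step(x, h, c, weights)
```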
40. Problems
• Neural networks are computationally very complex
• especially during training
• but also during scoring
• Hardware evolution: CPU (2009) → GPU (2016) → IBM SyNAPSE (2018)
41. Deep Learning
the future of cloud-based analytics
[architecture diagram: "YOUR STREAMS" flow into the stack below]
• Libraries: Streaming, SQL, MLlib, GraphX, BlinkDB, R, MLBase, H2O, DeepLearning4J / ND4J
• Execution Layer (Spark Executor, YARN, Platform Symphony)
• Storage Layer (OpenStack SWIFT / Hadoop HDFS / IBM GPFS)
• Acceleration: AVX (CPU), (cu)BLAS / jcuBLAS (GPU)
• Hardware Layer (Bare Metal High Performance Cluster: Intel Xeon E7-4850 v2, 48 cores, 3 TB RAM, 72 GB HDD, 10 Gbps, NVIDIA Tesla M60 GPU)
42. Why IoT (now) ?
Formal definition (Romeo Kienzler, 2016):
Cognitive IoT maximises the efficiency of the system
under observation by measuring all relevant
parameters in order to (re)act accordingly and
push the system into a state near the global
optimum
43. My Vision
What if the majority of cars were connected
and sensed? What if we could detect a state
where an accident is unpreventable? What if,
in such a case, we just issued a 30% brake
command to all vehicles? Still a dream?…
44. Do it yourself…
• Deep Learning architecture one-click cloud
deployment
• To be published:
http://www.ibm.com/developerworks/analytics/
• To be announced:
Twitter: @romeokienzler
• Find this talk on YouTube:
http://ibm.biz/romeokienzler