Vertex Perspectives | AI Optimized Chipsets | Part IV | Vertex Holdings
In this instalment, we delve into other emerging technologies including neuromorphic chips and quantum computing systems, to examine their promise as alternative AI-optimized chipsets.
Vertex Perspectives | AI Optimized Chipsets | Part II | Vertex Holdings
Deep learning is both computationally and memory intensive, necessitating enhancements in processor performance. In this issue, we explore how this has led to the rise of startups adopting alternative, innovative approaches and how it is expected to pave the way for different types of AI-optimized chipsets.
Deep learning @ Edge using Intel's Neural Compute Stick | geetachauhan
Talk @ Intel Global IoT DevFest, Nov 2017
The new generation of hardware accelerators is enabling rich, AI-driven, intelligent IoT solutions at the edge.
The talk showcased how to use Intel's Neural Compute Stick to accelerate deep learning IoT solutions, and covered use cases and code details for running deep learning models on the device.
Artificial Intelligence (AI), specifically deep learning, is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, the deep neural networks of today use too much memory, compute, and energy. Plus, to make AI truly ubiquitous, networks need to run on the end device within a tight power and thermal budget. One approach to help address these issues is quantization, which attempts to reduce the number of bits used for weight parameters and activation calculations without sacrificing model accuracy. This presentation covers: why quantization is important, existing quantization challenges, Qualcomm AI Research's existing quantization research, and how developers and researchers can take advantage of quantization on Qualcomm Snapdragon.
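As a concrete illustration of the idea, the sketch below implements uniform affine quantization and dequantization in NumPy. It is a generic textbook scheme, not Qualcomm's specific research method; the function names and the 8-bit setting are illustrative assumptions.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Map float values onto num_bits unsigned integers (uniform affine scheme)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats; the error is at most half a quantization step."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)
q, scale, zp = quantize(weights)
error = np.abs(weights - dequantize(q, scale, zp)).max()
assert error <= scale / 2 + 1e-6  # 8 bits keep the worst-case error tiny
```

Storing `q` instead of `weights` cuts memory 4x versus float32, which is the core of the power and bandwidth savings described above.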
The digital world is facing a crisis that has, at the same time, opened new windows of opportunity. To tackle the shortage of potential leaders joining the digital sector, the Schaffhausen Institute of Technology (SIT) has crafted a new program: a Master of Science (MSc) in Computer Science and Software Engineering, to better prepare graduates for leadership roles, specifically within the IT and science disciplines.
At the #SITinsights in Technology talk, we’re blending computing and economics, bringing knowledge and expertise from all relevant fields to help enable global efforts.
About Schaffhausen Institute of Technology:
With its pioneering curriculum, the Schaffhausen Institute of Technology (SIT) offers a new model of education. Focusing on the most important areas of technology, SIT will drive research, development and innovation in a next generation learning and research environment. Using state-of-the-art facilities, SIT's students, researchers and business allies will address large-scale world problems by developing a technology curriculum based on global issues.
Context-aware systems are extremely complex and heterogeneous. The need for middleware to bind their components together is well recognized, and many attempts have been made to build middleware for context-aware systems.
We provide a general introduction to the evolution of middleware and then analyze the requirements and issues for context-aware middleware.
Intelligent Internet of Things (IIoT): System Architectures and Communica... | Raghu Nandy
The Internet of Things (IoT) can be designed through various approaches and technology choices. This paper compares recent studies on architectural choices and communication approaches for IoT systems. Understanding the goals of an IoT system and devising a general prototype for IoT solutions is uniquely challenging. Existing research prototypes provide information about IoT systems and their challenges. Existing architectures and communication approaches such as Service-Oriented Architecture (SOA), instant messaging (XMPP), and WebSockets can be used to develop a general IoT system prototype. SOA supports centralized and decentralized IoT systems; instant-messaging services such as XMPP can be used to build distributed and secure IoT platforms; and WebSockets can be used to build scalable IoT systems. Overall, the choice depends on the IoT system's goals and limitations. Intelligent IoT (IIoT) systems can be seen as decision-making systems. IoT systems can also be built on cloud infrastructure: with Sensor Event as a Service (SEaaS), cloud sensor networks let applications access on-demand, real-time sensor data. A generic IoT platform can be built and extended to newer applications and platforms.
Ensemble of Probabilistic Learning Networks for IoT Edge Intrusion Detection | IJCNCJournal
This paper proposes an intelligent and compact machine learning model for IoT intrusion detection using an ensemble of semi-parametric models with AdaBoost. The proposed model provides adequate real-time intrusion detection at a computational complexity affordable for IoT edge networks. It is evaluated against comparable models on benchmark IoT-IDS data and shows comparable performance with reduced computation.
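The paper's base learners are semi-parametric models; as a rough, self-contained sketch of the boosting mechanism itself, the code below runs AdaBoost over decision stumps on synthetic two-feature "traffic" data. The data, the stump base learner, and the labels are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) stump with lowest weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost(X, y, rounds=5):
    """AdaBoost: each round reweights samples toward the previous mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, f, thr, pol = train_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, f, thr, pol))
    return ensemble

def predict(ensemble, X):
    votes = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for a, f, t, p in ensemble)
    return np.sign(votes)

# Toy "traffic features": label -1 = benign, +1 = attack.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
model = adaboost(X, y)
accuracy = (predict(model, X) == y).mean()
assert accuracy >= 0.9
```

The appeal for edge deployment is that the final model is just a handful of threshold comparisons and a weighted vote, which is cheap to evaluate.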
From Physical to Virtual Wireless Sensor Networks using Cloud Computing | IJORCS
In the modern world, billions of physical sensors are used for various purposes: environment monitoring, healthcare, education, defense, manufacturing, smart homes, precision agriculture, and others. However, they are typically used only by their own applications, ignoring the significant possibility of sharing resources to ensure the availability and performance of physical sensors. This paper argues that the immense power of the cloud can only be fully exploited if it is seamlessly integrated into our physical lives. The principal merit of this work is a novel architecture in which users can easily share several types of physical sensors, enabling many new services through a virtualized structure that allocates sensor resources to different users and applications under flexible usage scenarios, so that users can easily collect, access, process, visualize, archive, share, and search large amounts of sensor data from different applications. An implementation was built using the Arduino ATmega328 as the hardware platform and Eucalyptus/OpenStack with Orchestra/Juju for a private sensor cloud. This private cloud was then connected to well-known public clouds such as Amazon EC2, ThingSpeak, SensorCloud, and Pachube. Testing was 80% successful. Future work would improve the effectiveness of virtual sensors by applying optimization techniques and other methods.
International Journal of Ubiquitous Computing (IJU) is a quarterly, open-access, peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of ubiquitous computing. The current information age is witnessing dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing places demanding requirements of robustness, reliability, and availability on systems serving the end user, and it has received significant and sustained research interest in designing and deploying large-scale, high-performance computational applications in real life. The aim of the journal is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
Accelerating algorithmic and hardware advancements for power efficient on-dev... | Qualcomm Research
Artificial Intelligence (AI), specifically deep learning, is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, the deep neural networks of today are growing quickly in size and use too much memory, compute, and energy. Plus, to make AI truly ubiquitous, it needs to run on the end device within a tight power and thermal budget. One approach to address these issues is Bayesian deep learning. This presentation covers:
• Why AI algorithms and hardware need to be energy efficient
• How Bayesian deep learning is making neural networks more power efficient through model compression and quantization
• How we are doing fundamental research on AI algorithms and hardware to maximize power efficiency
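For a flavor of what model compression buys, the snippet below applies simple magnitude pruning. This is a crude stand-in for the uncertainty-driven compression that Bayesian methods provide, and the 90% sparsity target is an illustrative assumption.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))
W_pruned, mask = magnitude_prune(W)
# Roughly 10% of the weights survive; zeroed ones can be skipped in hardware.
assert 0.09 <= mask.mean() <= 0.11
assert np.all(W_pruned[~mask] == 0)
```

On hardware that exploits sparsity, every zeroed weight is a multiply-accumulate that never has to run, which is where the energy saving comes from.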
Charith Perera, Arkady Zaslavsky, Peter Christen, Ali Salehi, Dimitrios Georgakopoulos, Capturing Sensor Data from Mobile Phones using Global Sensor Network Middleware, Proceedings of the IEEE 23rd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Sydney, Australia, September, 2012
Novi Sad AI is the first AI community in Serbia, with the goal of democratizing knowledge of AI. At our first event we talked about belief networks, deep learning, and more.
Tom Soderstrom, Chief Technology and Innovation Officer at NASA’s Jet Propulsion Laboratory, has demonstrated how internet-of-things (IoT) technology and cloud computing can form the backbone for monumental innovation. This combination has enabled private and public space exploration enterprises to dare greatly and, together, discover more of the solar system than ever before. Cloud computing, with its unlimited storage and compute resources, blends IoT, machine learning, intelligent assistance, and new interfaces with computers. It has the potential to allow humans to explore and colonize other areas of the solar system by enabling collaboration across millions of miles, and social networking on a planetary scale.
The Internet of Things is an idea under development: the future of connecting smart devices to the Internet. If you are interested in the current developments and the future roadmap of this project, this presentation is for you.
Growing volumes and varieties of available data, cheaper and more powerful computational processing, data storage, and large-value predictions that can guide better decisions and smart actions in real time without human intervention are playing a critical role in this age. All of these require models that can automatically analyse large, complex data and deliver quick, accurate results, even on a very large scale. Machine learning plays a significant role in developing these models, with applications ranging from speech and object recognition to analysis and prediction of financial markets. The artificial neural network is one of the important machine learning algorithms, inspired by the structure and functional aspects of biological neural networks. In this paper, we discuss the purpose, representation, and classification methods for developing hardware for machine learning, with the main focus on neural networks. This paper also presents the requirements, design issues, and optimization techniques for building hardware architectures for neural networks.
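To make the hardware angle concrete, the sketch below emulates what a fixed-point multiply-accumulate (MAC) array computes for one dense layer. The Q8.8-style format and layer sizes are illustrative assumptions, not a specific design from the paper.

```python
import numpy as np

def to_fixed(x, frac_bits=8):
    """Encode floats as integers with 2**-frac_bits resolution (Q-format)."""
    return np.round(x * (1 << frac_bits)).astype(np.int32)

def fixed_dense(x_q, w_q, frac_bits=8):
    """Integer multiply-accumulate with a wide accumulator, then rescale:
    the operation a hardware MAC array performs for one dense layer."""
    acc = x_q.astype(np.int64) @ w_q.astype(np.int64)  # wide accumulator
    return (acc >> frac_bits).astype(np.int32)         # back to Q-format

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 16)
W = rng.uniform(-1, 1, (16, 4))
y_float = x @ W                                       # reference float result
y_fixed = fixed_dense(to_fixed(x), to_fixed(W)) / 256.0
assert np.allclose(y_float, y_fixed, atol=0.07)       # small quantization error
```

Integer MACs with a wide accumulator are far cheaper in silicon area and energy than floating-point units, which is the central trade-off such hardware papers analyze.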
Presented at the Intel Global IoT DevFest (Oct 2017)
- Real-world use cases: healthcare, building management, retail, smart cities, transportation
- Time-series analysis
- AI / ML overview & applications
DevOps and Testing slides at DASA Connect | Kari Kakkonen
Slides by Rik Marselis and me from the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally testing in DevOps. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 4 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. The webinar also delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* | Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy; those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
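Link prediction over knowledge graphs is commonly done with embedding scorers; the sketch below uses TransE-style scoring, one standard choice rather than necessarily the speaker's, with hand-crafted 2-D embeddings as an illustrative assumption.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: a triple (h, r, t) scores high when h + r is near t."""
    return -np.linalg.norm(h + r - t)

# Hand-crafted 2-D embeddings where "capital_of" acts as a translation vector.
paris, france = np.array([1.0, 0.0]), np.array([1.0, 1.0])
berlin, germany = np.array([3.0, 0.0]), np.array([3.0, 1.0])
capital_of = np.array([0.0, 1.0])

# True triples outscore corrupted ones, which is what link prediction exploits.
assert transe_score(paris, capital_of, france) > transe_score(paris, capital_of, germany)
assert transe_score(berlin, capital_of, germany) > transe_score(berlin, capital_of, france)
```

The "predictable inference" point is visible even here: because the relation is a translation, the model generalizes the same way for every (capital, country) pair.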
State of ICS and IoT Cyber Threat Landscape Report 2024 preview | Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... | DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
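PowSyBl's own APIs (Java, with a Python binding) wrap this functionality; the NumPy sketch below shows only the linear-algebra core of a DC power flow on a hypothetical 3-bus network, and does not use PowSyBl's actual interface.

```python
import numpy as np

# Hypothetical 3-bus triangle network; bus 0 is the slack bus.
# Each line is (from_bus, to_bus, reactance in p.u.).
lines = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]
p_inj = np.array([0.0, -1.0, -0.5])  # net injections; the slack covers the imbalance

# Assemble the nodal susceptance matrix B so that B @ theta = p_inj.
n = len(p_inj)
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Pin the slack angle at 0 and solve the reduced system for the other buses.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], p_inj[1:])

# Line flows follow directly from the angle differences.
flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
slack_generation = flows[(0, 1)] + flows[(0, 2)]
assert abs(slack_generation - 1.5) < 1e-9  # slack supplies the total 1.5 p.u. load
```

A production tool like PowSyBl layers AC power flow, security analysis, and remedial actions on top of this kind of network model, but the reduced linear solve is the recognizable kernel.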
JMeter webinar - integration with InfluxDB and Grafana | RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
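Concretely, the JMeter side of this integration is a Backend Listener pointed at InfluxDB. The fragment below lists the usual settings; the database name, application label, and URL are placeholder assumptions for a local setup.

```
Backend Listener implementation:
  org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient

influxdbUrl   = http://localhost:8086/write?db=jmeter   # placeholder local InfluxDB
application   = my_app                                  # hypothetical label
measurement   = jmeter
summaryOnly   = false
samplersRegex = .*
percentiles   = 90;95;99
```

Grafana is then configured with the same InfluxDB database as a data source and charts the `jmeter` measurement in real time.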
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Key Trends Shaping the Future of Infrastructure | Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... | Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Accelerate your Kubernetes clusters with Varnish Caching | Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
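As a taste of what the deployed cache does, here is a minimal VCL fragment. The backend host names a hypothetical Kubernetes Service, and the 2-minute TTL is an illustrative default, not a value from the Helm chart.

```
vcl 4.1;

backend default {
    .host = "my-app-service";   # hypothetical Kubernetes Service name
    .port = "8080";
}

sub vcl_backend_response {
    # Cache successful responses for two minutes when the origin
    # sets no caching headers of its own.
    if (beresp.status == 200 && beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
    }
}
```

In the Kubernetes setup the talk describes, this VCL would typically be mounted into the Varnish pods via a ConfigMap managed by the Helm chart.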
Deep Learning on Mobile | Jeff Shomaker | Jan 2018
1. Presentation at Global AI Conference
Santa Clara, CA
1-17-18
Jeff Shomaker
Founder, 21 SP, Inc.
Deep Learning on Mobile Devices
2. 21 SP, Inc.
Proprietary and Confidential 2
Introduction
• Neural network (NN) software will increasingly be available on phones, watches, drones, sensors and other devices.
• Talk will cover:
  – AI-Capable Smartphones
  – Deep Learning (DL) on mobile devices
    • Hardware
    • Software
  – Future applications
    • Wireless communications
    • Unmanned aerial vehicles (UAVs)
    • Virtual reality
    • Internet of Things (IoT)
3.
AI-Capable Smartphone Forecast 1)
Future smartphones will use specialized AI-capable chips
4.
2018 Smartphones with AI 2)
– Samsung S8/S8 Plus/Note 8
  • Bixby is their AI-based assistant that can learn the user's behavior and then act on that information.
– Honor View 10 (Huawei)
  • This is the most sophisticated of the AI-based phones.
  • Their system allows: 1) secure face unlocking, 2) camera benefits, 3) translations, 4) enhanced automation, and 5) voice assistance.
– Apple iPhone X
  • Has sophisticated Face ID for security
  • Improved voice recognition for Siri.
– Google Pixel 2/Pixel 2 XL
  • Improved imaging capabilities on cameras
  • Improved Google Assistant language understanding.
5. AI-Capable Smartphone Chipmakers 1)
– Apple is a powerful player with its A11 Bionic chip. The company expects its facial recognition technology to drive AI adoption in smartphones.
• Apple is forecasted to have the strongest sales.
– Qualcomm will likely capture the second highest sales volume.
• Qualcomm’s processors make up 40 percent of the Android mobile market.
• In 2017, it released an SDK for its Neural Processing Engine so developers can run NNs on its processors.
– Huawei is an emerging participant; in 2017 it stated that its phones will include a dedicated Neural Processing Unit on its Kirin 970 AI chip.
6. Nvidia Wants AI on the Edge 3)
– Nvidia’s graphics processing units (GPUs) have been driving a lot
of the growth in deep learning.
– The company is now focusing on embedding AI technology into
edge devices such as security cameras and drones.
– According to a company VP, there are four reasons to shift
processing from the cloud to local devices:
• Bandwidth – capacity to send data from an estimated 1 billion security
cameras in 2020 is unlikely to be available.
• Latency – Some applications, such as self-driving cars, require extremely fast decisions.
• Privacy – Transferring personal information can increase security risk.
• Availability – In many parts of the world, cloud availability is limited or
intermittent. Emergency services need 100% availability, for example.
7. Nvidia’s Jetson TX2 Platform 3)
– In 2017, Nvidia announced its new credit-card-sized module for embedded AI-based computing.
– The unit runs twice as fast as the TX1, per Nvidia.
– At an event in SF, the following companies stated they will be using
the new AI internal processing chip on edge devices:
• Cisco will use it on a new device that, for example, will recognize
people speaking at a meeting and focus the camera on them.
• Artec will use it to cut the link to the cloud (their earlier products) by
creating 3D scanning images in real time at the edge.
• Teal Drones is expected to ship a $1,300 drone that can react to what
it sees. One possible use is for counting cattle on large farms.
• EnRoute says the new TX2 will allow their Zion drone to fly faster while
still avoiding collisions.
8. Squeezing DL on Mobile Devices 4)
– “The blending of learning algorithms and mobile computing taking place
today is only the beginning.”
– The following commercial entities are making frameworks, tools, and libraries available:
• TensorFlow (Google), Caffe2 (Facebook), SNPE (Qualcomm), and the Compute Library (ARM) are examples.
– Assuming the models will only be executed (all training done off-device), there are still three challenges to running sophisticated NNs on resource-constrained devices:
• Limited memory, limited computational power, and unusually long inference time.
– A lot of progress has been made in the last eighteen months on phones, watches, and sensors. In time, these devices will potentially be able to handle control and decision activities, as well as other logic-based tasks where the DL will need to learn and adapt to complexity.
9. Neural Networks 5)
- Neural networks are a paradigm for processing information loosely
based on the idea of neurons that communicate information in the brain
and spinal cord. 6)
- DNNs and CNNs “… routinely are composed of thousands of
interconnected units, and millions of parameters.” 18)
10. Shrinking Software: Quantization 7)
– Current deep neural networks are usually big cloud-based
structures so it is difficult to run them on mobile and in-sensor
devices.
– One way to address this is with special-purpose chips. A second way is by creating new, smaller representations of deep NNs.
– In order to accomplish this, “ … must encode both the connectivity
pattern of synapses and the weights of those synapses.”
– In this paper, the authors use quantization theory to develop an approach that, in certain circumstances, allows them to calculate the optimal quantizer.
– This approach can allow them to quantize (i.e., shrink) learned
weights so deep NN models can more easily fit on resource-
constrained devices.
11. TensorFlow’s Eight-Bit Quantization 8)
– Earlier NNs relied on floating-point arithmetic, but today more efficient inference is needed.
– Quantization is used to describe methods that “…store numbers
and perform calculations on them in more compact formats than
32-bit floating point.”
– By storing only minimum and maximum values and then changing
float values to eight-bit integers, it is possible to reduce a file size
by 75%.
– Utilizing eight-bit reduces power consumption and shortens the
time to process a NN. These characteristics will, in turn, make it
easier to bring intelligent products to the IoT market.
– With TensorFlow, they “… found that we can get extremely good
performance on mobile and embedded devices by using eight-bit
arithmetic rather than floating point.”
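The min/max scheme above can be sketched in a few lines. This is a simplified illustration, not TensorFlow’s actual implementation, and the function names are hypothetical:

```python
def quantize_8bit(values):
    """Record only (min, max) plus one byte per value.

    Replacing each 32-bit float with an 8-bit code cuts the stored
    size of the weights by roughly 75%.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0          # guard against constant inputs
    codes = bytes(round((v - lo) / scale) for v in values)
    return lo, hi, codes

def dequantize_8bit(lo, hi, codes):
    scale = (hi - lo) / 255 or 1.0
    return [lo + c * scale for c in codes]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
lo, hi, codes = quantize_8bit(weights)
restored = dequantize_8bit(lo, hi, codes)
# every restored value lies within half a quantization step of the original
```

Storing only the (min, max) pair plus one byte per weight is what yields the 75% size reduction described above.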
12. DeepRebirth: An Acceleration Framework 9)
– Quantization is one way to shrink NN models.
– In this paper, the authors develop an approach called DeepRebirth that they believe speeds processing much more than other techniques, such as quantization.
– DeepRebirth reduces the number of layers by merging the “… parameter-free layers with their neighbor convolutional layers to a single dense layer.”
– The authors propose two types of merging:
• Streaming merging – layers in a hierarchy are merged and a new Rebirth Layer is created (processing time reduced: 154 ms to 17 ms)
• Branch merging – parallel branches at the same level are merged (processing time reduced: 56 ms to 21 ms).
– Experiments were done on a Samsung Galaxy S5 smartphone.
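One concrete instance of merging a parameter-free layer into a neighboring weighted layer is folding an inference-time batch-norm into the preceding fully connected layer. This is a standard folding trick, shown here only to illustrate the merging idea; DeepRebirth itself merges layers into a new, retrained dense layer:

```python
import math

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a batch-norm layer (parameter-free at inference) into the
    preceding fully connected layer, yielding one merged layer."""
    W_new, b_new = [], []
    for i, row in enumerate(W):
        s = gamma[i] / math.sqrt(var[i] + eps)
        W_new.append([s * w for w in row])
        b_new.append(s * (b[i] - mean[i]) + beta[i])
    return W_new, b_new

def linear(W, b, x):
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# toy weights: the merged layer computes exactly what linear + BN computed
W = [[1.0, 2.0], [0.5, -1.0]]
b = [0.1, -0.2]
gamma, beta = [1.5, 0.8], [0.0, 0.3]
mean, var = [0.2, -0.1], [1.0, 4.0]
x = [0.3, -0.7]

y_two_layers = [g * (yi - m) / math.sqrt(v + 1e-5) + bt
                for yi, g, bt, m, v in zip(linear(W, b, x), gamma, beta, mean, var)]
Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var)
y_merged = linear(Wf, bf, x)
```

Because one whole layer disappears, the per-inference work of that layer disappears with it, which is the effect DeepRebirth exploits at a larger scale.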
13. ProjectionNet: A Two-NN Architecture 10)
– The authors propose an approach that utilizes two NNs.
– “The two networks are trained jointly using backpropagation, where
the projection network learns from the full network similar to
apprenticeship learning.”
– The approach allows distributed training; the projection network is then made to fit on devices and runs with lower memory and computation costs.
– This method differs from others (e.g., weights quantization) since
here operations and intermediate representations (i.e., hidden
units) are the entities made smaller.
– Experimental Results:
• Handwriting recognition (MNIST) – 92.3% precision with 388x
compression ratio.
• Image classification (CIFAR-100, 50K color images) – 17.8% precision
with 70x compression ratio.
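One way a projection network can avoid full-size intermediate representations is with locality-sensitive random projections that map a feature vector to a short bit vector. The sketch below illustrates that idea only; the paper’s exact projection functions and joint training loop are not reproduced here:

```python
import random

def make_projection(dim, num_bits, seed=0):
    """Return a function mapping a float vector to num_bits sign bits.

    Each bit records which side of a random hyperplane the input falls
    on, so nearby inputs tend to share bits (locality sensitivity).
    """
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]

    def project(x):
        return [1 if sum(p * xi for p, xi in zip(plane, x)) >= 0 else 0
                for plane in planes]

    return project

project = make_projection(dim=4, num_bits=8)
bits = project([0.5, -1.2, 3.0, 0.1])   # 8 bits instead of 4 floats
```

The compact bit representation, rather than quantized weights, is what shrinks: the projection network learns to predict labels from these bits.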
14. CoINF: An Offloading Framework 11)
– CoINF is a new proposed DL framework that allows wearables,
such as smartwatches and smart glasses, to work with
smartphones when making inferences.
– Wearables can capture a wide range of data. Examples include a person’s pulse, physical gestures, fitness metrics, and eye tracking.
– Since wearables are extremely resource limited, the authors developed a system that offloads DL computation from the wearable to a local mobile device.
– The system uses TensorFlow and runs on an Android OS for
handhelds and on an Android Wear OS for wearables.
– Experiments show (vs wearable-only and handheld-only) favorable
results:
• 15.9X to 23.0X execution speedup
• 81.3% to 85.5% energy savings
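The offload-or-not choice can be framed as a simple latency comparison. The cost model below is hypothetical (the function, parameters, and numbers are made up for illustration), not CoINF’s actual predictor:

```python
def should_offload(model_flops, wearable_flops_per_s, phone_flops_per_s,
                   payload_bytes, link_bytes_per_s):
    """Run on the wearable, or ship the input to the paired phone?

    Compare estimated end-to-end latency: local compute time vs.
    transfer time plus remote compute time.
    """
    local_s = model_flops / wearable_flops_per_s
    remote_s = (payload_bytes / link_bytes_per_s
                + model_flops / phone_flops_per_s)
    return remote_s < local_s

# a heavy model over a reasonably fast link favors the phone
offload = should_offload(model_flops=2e9, wearable_flops_per_s=1e9,
                         phone_flops_per_s=2e10, payload_bytes=150e3,
                         link_bytes_per_s=250e3)
```

A tiny model flips the decision: the transfer cost dominates, so the wearable should run it locally.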
15. NNs in Wireless Communications 12)
– In the wireless communications arena, NNs provide the technology
for two types of applications.
– First, NNs are useful for data analysis, predicting the future and
making inferences.
• In these circumstances, systems can capture data from user behavior,
environmental metrics and other information.
• NNs can then take action based on this information.
– Second, by implementing AI at a network’s edge, self-organizing
network operations can be enabled.
• Edge computing will include placing processors on many components,
including base stations as well as user devices.
• Examples of self-organizing operations include resource management,
data offloading and user association.
16. NNs in Unmanned Aerial Vehicles (UAVs) 12)
– Work is progressing on using UAVs to complement terrestrial communications networks by providing wireless service to end users.
– This technology is likely to be used in post-5G cellular networks. 13)
– There are two ways that NNs are likely to be helpful to UAV-based
wireless communications.
– First, reinforcement learning (RL) NNs can “… dynamically adjust
their locations, flying directions, resource allocation decisions, and
path planning to serve their ground users and adapt to the users’
dynamic environment.”
– Second, the ground environment can be mapped by UAVs and
then NNs can make predictions about the user’s behavior. For
example, NNs can predict where the user is moving to.
13) 5G is expected to be available by about 2020, built on many candidate technologies; it is expected to download an HD movie in under one second.
17. NNs for Wireless Virtual Reality (VR) 12)
– Virtual reality is expected to allow users to “… experience and
interact with a wealth of virtual and immersive environments
through a first-person view.”
– Technology allows for surround sound and 360 degree vision.
– NNs are expected to help address problems with VR.
– First, since NNs can predict users’ movements (e.g., head direction), waste of capacity-limited bandwidth can be avoided.
• Based on vision direction, only the desired image is displayed rather than the entire 360-degree view.
– Second, cellular networks can have varying quality. NNs can drive
VR image adjustment based on network quality.
18. NNs for the Internet of Things (IoT) 12)
– “In the foreseeable future, it is envisioned that trillions of machine-
type devices such as wearables, sensors, connected vehicles, or
mundane objects will be connected to the Internet, forming a
massive IoT ecosystem .…”
– It’s expected that smart services will be provided, but massive connectivity can create bottlenecks.
– NNs are likely to help IoT systems in four ways:
• NNs can use big data analytics to compress massive amounts of data
• Utilizing user and wireless environmental data, RL NNs can self-
organize and, for example, switch frequencies to optimize
communication.
• NNs can analyze sensor data for immediate or future use, perhaps at
off-peak hours.
• Since NNs can predict user behavior, intelligent services may be
offered when, for example, a user leaves work.
19. McKinsey on the IoT 14), 15)
– “The Internet of Things (IoT) … is transforming how we live and
work.”
– It is already providing the following products and services:
• Soil moisture and nutrient data are being transmitted from farms to
experts at distant locations.
• Homeowners are benefitting from IoT-based security systems that
employ very long lasting batteries.
• It’s now standard for production-line sensors to notify factory managers of conditions on the floor.
– This technology is expected to drive economic benefits of $4 to $11
trillion by 2025.
– However, McKinsey’s research suggests that the benefits and widespread adoption of IoT applications could materialize more slowly than many people think.
20. Hype vs Reality Panel 16)
– Many predictions about self-driving cars and drones are irrational.
• “It turns out that much of what appears in mainstream media about self-driving cars is very much overstated,” said Kumar. Fully autonomous cars are many years away, in his view.
– In terms of DL, there are constraints as well, per Fung.
• “… A deep-learning algorithm that can just do speech recognition, which is translating what you are saying, has to be trained on millions of hours of data and uses huge data farms…. And while a deep-learning network might have hundreds of thousands of neurons, the human brain has trillions.”
– At this point, computers can only do narrow tasks.
– Work is underway, however, on “affective computing” (i.e., allowing machines to pick up on our voices, body language, etc.).
– This type of communication is extremely hard to achieve.
22. AI Technologies and Applications 17)
Source: Boston Consulting Group, September 2017.
23. End Notes
• 1) Rayna Hollander (2017 Oct 25). Apple Drives Native AI Adoption in Smartphones. Business
Insider, www.businessinsider.com.
• 2) Karen Bajai (2017 Dec 11). 4 Smartphones with Artificial Intelligence to Look for in 2018. The
Economic Times, m.economictimes.com.
• 3) Tekla S. Perry (2017 Mar). Nvidia Wants AI to Get Out of the Cloud and into the Camera, Drone,
or Other Gadget Near You. IEEE Spectrum, spectrum.ieee.org.
• 4) Nicholas D. Lane, et al (2017 Jul-Sep). Squeezing Deep Learning into Mobile and Embedded Devices. Pervasive Computing, ieeexplore.ieee.org.
• 5) S. Raschka (2016). What is the Difference Between Deep Learning and Regular Machine
Learning, Kdnuggets, www.kdnuggets.com.
• 6) Geoffrey Hinton, et al (2012 Oct). Neural Networks for Machine Learning course. U of Toronto,
Coursera.com, accessed 2013.
• 7) Avhishek Chatterjee, et al (2017 Aug 15). Towards Optimal Quantization of Neural Networks.
Information Theory (ISIT), 2017 IEEE International Symposium.
• 8) How to Quantize Neural Networks (2017 Nov 2 Update). TensorFlow, www.tensorflow.org, accessed Dec 28, 2017.
• 9) Dawei Li, et al (2017 Aug 16). DeepRebirth: A General Approach for Accelerating Deep Neural
Network Execution on Mobile Devices. Under review conference paper, arXiv:1708.04728 [cs.CV],
1708.04728.pdf, accessed Dec 27, 2017.
24. End Notes (cont.)
• 10) Sujith Ravi (2017 Aug 9). ProjectionNet: Learning Efficient On-Device Deep Networks using
Neural Projections. arXiv:1708.00630v2 [cs.LG].
• 11) Mengwei Xu, et al (2017 Dec 1). Enabling Cooperative Inference of Deep Learning on Wearables
and Smartphones. arXiv:1712.03073v1 [cs.CY].
• 12) Mingzhe Chen, et al (2017 Oct 9). Machine Learning for Wireless Networks with Artificial
Intelligence: A Tutorial on Neural Networks. Preprint, arXiv:1710.02913v1 [cs.IT], 1710.02913.pdf, 1-
98. Accessed Nov 1, 2017.
• 13) Amy Nordrum, et al (2017 Jan 27). Everything You Need to Know About 5G. IEEE Spectrum,
spectrum.ieee.org.
• 14) Mark Patel, et al (2017 May). What’s New with the Internet of Things? McKinsey & Company,
www.mckinsey.com.
• 15) Daniel Alsen, et al (2017 Nov). The Future of Connectivity: Enabling the Internet of Things.
McKinsey & Company, www.mckinsey.com.
• 16) (2017 Jul 14). The Future of Artificial Intelligence: Why the Hype Has Outrun Reality. Panel
discussion titled Engineering the Future of Business. Knowledge at Wharton,
knowledge.wharton.upenn.edu.
• 17) Frank Felden, et al (2017 Oct 6). Time to Double Down on AI and Robotics. Boston Consulting
Group. www.bcg.com.
• 18) Nicholas D. Lane, et al (2016 Nov-Dec). DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit. MobiCASE 2016 Proceedings of the 8th EAI Intl Conference on Mobile Computing, Applications and Services, dl.acm.org, 98-107.
25. Contacts
• Jeff Shomaker – Founder/President 21 SP, Inc.
–jshomaker@21spinc.com
–www.21spinc.com
–650-302-7491
• 21 SP, Inc. is a small privately held startup developing and marketing
expert systems-based decision support software to use in genetic-
based personalized medicine. The company's mission is to create
tools that will reduce the use of traditional trial-and-error medicine by
using pharmacogenetics and other evidence-based data, such as the
results of high quality clinical trials, in the medical clinic.