Vertex Perspectives | AI-optimized Chipsets | Part I
Vertex Holdings

Businesses are increasingly adopting AI to create new applications and transform existing operations. The growth of IoT and 5G networks is driving a deluge of big data, and future processes will be too complex for human operators to program by hand. In this new environment, AI will be needed to write algorithms dynamically, automating the programming process itself. Fortunately, unlike other machine learning algorithms, those associated with deep learning achieve better performance as data grows. To date, deep learning technology has primarily been a software play, and existing processors were not originally designed for these new applications. Hence the need to develop AI-optimized hardware.
2. Businesses are increasingly adopting AI to create new applications, driving the development of AI-optimized chips

The ADAC (Applications – Data – Algorithms – Computing Hardware) Loop

1. Applications: Businesses are increasingly adopting AI to create new applications to transform existing operations. These include connected devices, autonomous vehicles, on-device personal interfaces, voice interactions and AR.

2. Data: Up to 30 billion more IoT devices are coming online by 2020, streaming data that helps build smarter objects and homes, inform consumer lifestyles, and enhance security and energy management.

3. AI Algorithms: Most breakthrough approaches in deep learning use significant computing power. A neural net might have dozens of connected layers and billions of parameters, requiring a step-wise increase in the level of computing power.

4. Computing Hardware: This forms a positive, recursive ADAC loop in which new applications generate more data, which in turn enhances algorithmic complexity and drives demand for higher computing performance.

Source: When Moore’s Law Met AI by Azeem on Medium | icons8
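To put "billions of parameters" in perspective, here is a minimal sketch that counts the free parameters of a fully connected network. The layer widths are our own illustrative assumptions, not figures from the deck:

```python
# Count free parameters in a fully connected (dense) network.
# Layer widths below are illustrative assumptions only.

def dense_param_count(layer_widths):
    """Each dense layer has (inputs x outputs) weights plus one bias per output."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_widths, layer_widths[1:]))

# A hypothetical net: 224x224 RGB input, 50 hidden layers of 4096 units,
# and a 1000-way output.
widths = [224 * 224 * 3] + [4096] * 50 + [1000]
print(f"{dense_param_count(widths):,}")  # ~1.44 billion parameters
```

Even this modest, hypothetical stack of dense layers lands in the billions, which is why each turn of the ADAC loop demands more compute.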
3. These new applications are built on other technology and infrastructure layer solutions

[Stack diagram: applications such as autonomous driving, speech recognition & NLP, computer vision, business intelligence, industrial applications and robotics sit on a technology layer (sensors, data, AI platforms, path planning) and an infrastructure layer (computing and AI-optimized chipsets), spanning both software and hardware.]

• To date, deep learning technology has primarily been a software play.
• Existing processors were not originally designed for these new applications.
• Hence the need to develop AI-optimized hardware.

Autonomous driving illustrates the stack:
• Sensing uses advanced computer vision and perception. Visual tasks including lane detection, pedestrian detection, road-sign recognition and blind-spot monitoring are handled more effectively with deep learning.
• Path planning: simple machine learning algorithms are sufficient to handle driving in high-resolution mapped cities or along fixed routes. Deep learning is more suitable in complex situations (e.g. multiple unknown destinations or changing routes).

Examples of Vertex portfolio companies that employ deep learning in their solutions:
• Taranis offers a comprehensive and affordable crop management solution, with pest and disease prediction algorithms that use deep learning to continually improve accuracy.
• Kryon Systems delivers innovative, intelligent Robotic Process Automation (RPA) solutions using patented visual and deep learning technologies.
• Horizon Robotics is a leader in embedded AI, with leading technologies in autonomous driving perception and decision-making, deep learning algorithms and AI processor architecture.

Source: Vertex | AutomotiveIQ | Icon 8 | Taranis | Kryon Systems | Horizon Robotics
4. These applications may reside in the cloud, on edge devices or in a hybrid environment

Sector                | Edge Resident                     | Hybrid Solutions                      | Cloud Hosted
Consumer / Retail     | Gaming; Smart Displays            | Personal Assistants                   | Ad Targeting & E-Commerce
Transportation        | Autonomous Vehicles               | Transportation & Grid Control         | Traffic & Network Analytics
Enterprise            | Delivery Drones; Warehouse Robots | Cyber Security                        | Sales, Marketing & Customer Services
Commodities           | Field Drones & Robots             | Climate, Water, Energy & Flow Control | Field Sensor Data Analytics
Industrial / Military | Cobots; Unmanned Systems          | Factory Control & Surveillance        | Factory & Operations Analytics
Healthcare            | Medical Imaging; Surgical Robots  | Medical Diagnostics                   | Clinical Analytics

Source: Moor Insight & Strategy
5. And all point to significantly higher data generation

Autonomous Vehicles (Source: NovAtel)
• In an autonomous car, cameras will generate between 20–60 MB/s, radar upwards of 10 KB/s, sonar 10–100 KB/s, GPS 50 KB/s, and LIDAR between 10–70 MB/s.
• Each autonomous vehicle will generate approximately 8 GB/s and about 4 TB per day.
• Autonomous vehicles require a reliable solution with an ultra-low latency of 1 ms.

Agriculture (Source: Descartes Labs)
• Descartes Labs uses deep learning to process satellite imagery for agricultural forecasts.
• It processes over 5 TB of new data every day and references a library of 3 PB of archival satellite images.
• By using real-time satellite imagery and weather models, Descartes Labs provides highly accurate weekly forecasts of US corn production, compared with the monthly forecasts provided by the US Department of Agriculture.

Source: Intel | IEEE Spectrum | Deep Learning: An Artificial Intelligence Revolution by Ark Invest | Descartes Labs | Reducing 5G Latency Benefits Automotive Safety by Bill McKinley | NovAtel
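As a rough cross-check on these numbers, the sketch below sums the quoted per-sensor rates at their upper bounds; the hours-driven figure is our own assumption, since daily volume depends heavily on duty cycle:

```python
# Back-of-envelope: aggregate sensor data rate for one autonomous car,
# using the upper-bound per-sensor rates quoted above.
rates_mb_per_s = {
    "cameras": 60.0,   # 20-60 MB/s
    "radar":   0.01,   # ~10 KB/s
    "sonar":   0.1,    # 10-100 KB/s
    "gps":     0.05,   # 50 KB/s
    "lidar":   70.0,   # 10-70 MB/s
}

total_mb_s = sum(rates_mb_per_s.values())
hours_driven = 8   # assumed duty cycle, not a figure from the deck
tb_per_day = total_mb_s * 3600 * hours_driven / 1e6
print(f"{total_mb_s:.1f} MB/s aggregate, ~{tb_per_day:.1f} TB over {hours_driven} h")
# ~130 MB/s and ~3.7 TB/day: the same order of magnitude as the 4 TB/day figure.
```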
6. Coupled with the growth of IoT and 5G networks, a data deluge of high volume, velocity and variety is expected

The 5G Evolution: Latency for Different Generations of Cellular Networks (Source: Wi360)

Generation | Standards             | Era       | Latency     | Data Rate
2G         | GSM, GPRS, EDGE, CDMA | 1990–2000 | 500–1000 ms | 100 KB/s
3G         | UMTS, CDMA 2000       | 2000–2010 | 200 ms      | 384 KB/s–2 MB/s
4G         | LTE, LTE-A            | 2010–2020 | 100 ms      | 150 KB/s–450 MB/s
5G         |                       | >2020     | 1 ms        | 10 GB/s

IoT and Exponential Growth in Devices: 50 billion IoT devices are expected by 2020 (Source: Gartner). Top IoT applications include Smart Home, Wearables, Connected Industries, Connected Car, Smart City and Smart Energy (Source: World Economic Forum).

Source: IoT Analytics | Intel | IEEE Spectrum | Deep Learning: An Artificial Intelligence Revolution by Ark Invest | Wi360 | icon8
7. Unlike other machine learning algorithms, those associated with deep learning scale with increasing training data

• Compounding the power of deep learning, the neural nets themselves have become larger and more sophisticated, as measured by their number of free “parameters”.
• Parameters are dials used to tune the network’s performance. Generally, more parameters allow a network to express more states and capture more data.
• This endows computers with previously unimaginable capabilities: understanding photos, translating language, predicting crop yields, diagnosing diseases and more. It enables AI to write software that automates business processes humans are unable to write themselves. (Source: Andrew Ng, Ark Invest)

“The process could be very complicated… As a result of this observation, the AI software writes an AI software to automate that business process. Because we won’t be able to do it. It’s too complicated…

For the next couple of decades, the greatest contribution of A.I. is writing software that humans simply can’t write. Solving the unsolvable problems.”
Jensen Huang, CEO | NVIDIA

Source: Inside Microsoft’s FPGA-Based Configurable Cloud by CTO Mark Russinovich | Nvidia | Deep Learning: An Artificial Intelligence Revolution by Ark Invest | Fortune
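The scaling claim can be illustrated with a toy experiment. This is a minimal sketch on synthetic data (the dataset, model sizes and sample counts are all our own illustrative choices): a fixed-capacity linear model typically plateaus while a neural network keeps improving as the training set grows:

```python
# Toy illustration: test accuracy vs. training-set size for a linear model
# and a small neural net on a synthetic nonlinear task. Exact numbers vary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=60_000, n_features=40, n_informative=30,
                           n_redundant=0, n_clusters_per_class=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10_000, random_state=0)

for n in (500, 5_000, 50_000):
    linear = LogisticRegression(max_iter=1_000).fit(X_tr[:n], y_tr[:n])
    net = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=200,
                        random_state=0).fit(X_tr[:n], y_tr[:n])
    print(f"n={n:>6}  linear={linear.score(X_te, y_te):.3f}  "
          f"net={net.score(X_te, y_te):.3f}")
# Typically the linear model's accuracy flattens early, while the net
# continues to improve with more data, the effect described above.
```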
8. Given future process complexities, AI will be needed to automate the programming process by coding dynamically

Deep Learning vs. Other Programming Techniques

1980s – Classic Programming (input → hand-crafted program → output)
• The software developer codes the solution in software, which then gets executed in a deterministic fashion.
• This works for simple, well-defined problems but breaks down for more complex tasks.

2000s – Machine Learning (input + data → partially trained program → output)
• Improves upon classic programming by replacing some stages of the program with stages that can be trained automatically with data, enabling computers to perform more complex tasks (e.g. image and voice recognition).
• The software developer focuses less on coding and more on building models, which require enormous datasets to recommend a best output.

2010s – Deep Learning (input + data → fully trained program → output)
• The entire program is replaced with stages that can be trained with data.
• Programs can be far more capable and accurate, and require less human effort to create.

Source: Deep Learning: An Artificial Intelligence Revolution by Ark Invest | Ark Invest Management LLC | Yoshua Bengio | icon8
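To make the contrast concrete, here is a minimal sketch (the task and training setup are our own illustrative choices): the same conversion written first as a hand-crafted rule, then recovered as a trained program from input-output examples alone:

```python
import numpy as np

# Classic programming: a human writes the rule explicitly.
def c_from_f_classic(f):
    return (f - 32.0) * 5.0 / 9.0

# Learned programming: the "program" is two parameters (w, b) fitted to
# examples by gradient descent instead of being hand-coded.
f_examples = np.array([0.0, 32.0, 100.0, 212.0])
c_examples = c_from_f_classic(f_examples)   # labels; normally measured data

x = f_examples / 100.0                      # scale inputs so one step size works
w, b = 0.0, 0.0
for _ in range(5_000):
    err = w * x + b - c_examples
    w -= 0.1 * (err * x).mean()             # gradient of mean squared error
    b -= 0.1 * err.mean()

print(c_from_f_classic(72.0))               # 22.22...
print(w * (72.0 / 100.0) + b)               # ~22.22, learned from data
```

The hand-crafted version only exists because a human already knew the rule; the trained version needs nothing but examples, which is what makes the approach generalize to problems humans cannot code directly.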
9. But existing processors were not originally designed for new AI applications. Hence the need to develop AI-optimized hardware

Chipset | Strengths | Limitations | Training Rank | Inference Rank
CPU | General-purpose, in servers and PCs; sufficient for inference | Serial processing is less efficient than parallel processing | 2 | 2
GPU | Highly parallel, high performance; uses a popular AI framework (CUDA) | Less efficient than FPGAs; scalability; inefficient unless fully utilised | 1 | 3
FPGA | Reconfigurable; good for constantly evolving workloads; efficient | Difficult to program; lower performance versus GPUs; no major AI framework | 3 | 1
ASIC | Best performance; most energy- and cost-efficient; fully customizable | Long development cycle; requires high volume to be practical; quickly outdated, inflexible | N.A. | N.A.

Source: Morningstar | Vertex | icon8
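To see why raw throughput dominates the training rankings, consider a back-of-envelope sketch; the per-device throughput figures below are rough, era-typical assumptions of ours, not numbers from the deck:

```python
# Toy estimate: time for one training pass over ImageNet-scale data for a
# ResNet-50-class model (~4 GFLOPs per image forward, ~3x that with backward).
flops_per_image = 4e9 * 3    # forward + backward, rough rule of thumb
images = 1.28e6              # one epoch of ImageNet

sustained_tflops = {         # ballpark assumptions, not figures from the deck
    "server CPU (many-core)":      1.0,
    "GPU (V100, mixed precision)": 60.0,
    "ASIC (TPU-class)":            90.0,
}
for device, tf in sustained_tflops.items():
    hours = flops_per_image * images / (tf * 1e12) / 3600
    print(f"{device:30s} ~{hours:5.2f} h per epoch")
# The CPU takes hours per epoch where parallel accelerators take minutes,
# which is why serial processors rank poorly for training.
```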
10. Looking ahead

This is the end of Part I of a four-part series of Vertex Perspectives that seeks to understand the key factors driving innovation in AI-optimized chipsets, their industry landscape and their development trajectory.

In Part II, we review the shift in the performance focus of computing from general applications to neural nets, and how this is driving demand for high-performance computing. To this end, some startups are adopting alternative, novel approaches, and this is expected to pave the way for other AI-optimized chipsets.

In Part III, we assess the dominance of tech giants in the cloud, coupled with disruptive startups adopting cloud-first or edge-first approaches to AI-optimized chips. Most industry players are expected to focus on the cloud, with ASIC startups featuring prominently both in the cloud and at the edge.

Finally, in Part IV, we look at other emerging technologies, including neuromorphic chips and quantum computing systems, to explore their promise as alternative AI-optimized chipsets.

We are most grateful to Emmanuel Timor (General Partner, Vertex Ventures Israel) and Sandeep Bhadra (Partner, Vertex Ventures US) for their insightful comments on this publication.

Do let us know if you would like to subscribe to future Vertex Perspectives.

Source: Vertex
11. Disclaimer

This presentation has been compiled for informational purposes only. It does not constitute a recommendation to any party. The presentation relies on data and insights from a wide range of sources, including public and private companies, market research firms, government agencies and industry professionals. We cite specific sources where information is public. The presentation is also informed by non-public information and insights.

Information provided by third parties may not have been independently verified. Vertex Holdings believes such information to be reliable and adequately comprehensive but does not represent that such information is in all respects accurate or complete. Vertex Holdings shall not be held liable for any information provided.

Any information or opinions provided in this report are as of the date of the report, and Vertex Holdings is under no obligation to update the information or communicate that any updates have been made.

About Vertex Holdings
Vertex Holdings, a member of Temasek Holdings, focuses on venture capital investment opportunities in the information technology and healthcare markets, primarily through our global family of direct investment venture funds. Headquartered in Singapore, we collaborate with a network of global investors who specialize in local markets. The Vertex Global Network encompasses Silicon Valley, China, Israel, India, Taiwan and Southeast Asia.

Authors
Yanai ORON, General Partner, Vertex Ventures Israel | yanai@vertexventures.com
XIA Zhi Jin, Partner, Vertex Ventures China | xiazj@vertexventures.com
Brian TOH, Director, Vertex Holdings | btoh@vertexholdings.com
Tracy JIN, Director, Vertex Holdings | tjin@vertexholdings.com
ZHAO Yu Jie, Associate Investment Director, Vertex Ventures China | zhaoyj@vertexventures.com