For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/03/market-analysis-on-socs-for-imaging-vision-and-deep-learning-in-automotive-and-mobile-markets-a-presentation-from-yole-developpement/
For more information about edge AI and vision, please visit:
http://www.edge-ai-vision.com
John Lorenz, Market and Technology Analyst for Computing and Software at Yole Développement, delivers the presentation “Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets” at the Edge AI and Vision Alliance’s March 2020 Vision Industry and Technology Forum. Lorenz presents Yole Développement’s latest analysis on the evolution of SoCs for imaging, vision and deep learning.
Artificial Intelligence Computing for Automotive 2019 Report by Yole Développement
Artificial Intelligence for automotive: why you should care.
More information on that report at: https://www.i-micronews.com/produit/artificial-intelligence-computing-for-automotive-2019/
LiDAR for Automotive and Industrial Applications 2019 by Yole Développement
Is rationalization happening in the LiDAR market?
More information on: https://www.i-micronews.com/produit/lidar-for-automotive-and-industrial-applications-2019/
For the full video of this presentation, please visit:
https://www.embedded-vision.com/industry-analysis/video-interviews-demos/2d-and-3d-sensing-markets-applications-and-technologies-pre
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Guillaume Girardin, Photonics, Sensing and Display Division Director at Yole Développement, delivers the presentation "2D and 3D Sensing: Markets, Applications, and Technologies" at the Embedded Vision Alliance's September 2019 Vision Industry and Technology Forum. Girardin details the optical depth sensor market and application trends.
Sensors and Data Management for Autonomous Vehicles 2015 Report by Yole Développement
Multiple sensing technologies will ensure many market opportunities for Tier 1 players, Tier 2 players, and newcomers alike
Sensor technologies are a driving force in making fully autonomous vehicles a reality. Automakers are racing to develop safe self-driving cars, but this race is a distance run more than a sprint, in which successive automation stages will require multiple sensors. Ultrasonic sensors, radars, and multi-camera systems are already embedded in high-end vehicles -- and within 10 years, they could also include long-range cameras, LiDAR, microbolometers and accurate dead reckoning. These devices will work concurrently, with each technology backing up the others to provide redundancy and address safety concerns. Even though sensors are only part of the puzzle, their market opportunities are promising.
Artificial Intelligence Computing for Consumer 2019 Report by Yole Développement
While AI is a feature expected in smartphones, this fantastic technology has spread like wildfire to the smart home ecosystem and is profoundly impacting the semiconductor industry.
More information on https://www.i-micronews.com/products/artificial-intelligence-computing-for-consumer-2019/
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/09/3d-sensing-market-and-industry-update-a-presentation-from-the-yole-group/
Florian Domengie, Senior Technology and Market Analyst at Yole Intelligence (part of the Yole Group), presents the “3D Sensing: Market and Industry Update” tutorial at the May 2023 Embedded Vision Summit.
While the adoption of mobile 3D sensing has slowed in Android phones, the market has still been growing fast, thanks to Apple, which continues to adopt 3D cameras on both the front and rear of iPhones. Along the way, Apple has updated Face ID and simplified and shrunk the optical structures of its 3D cameras. Meanwhile, because Android phone OEMs have mostly chosen not to incorporate 3D cameras, sensor suppliers and integrators have had to work hard to open up other consumer markets.
In addition to consumer markets, the use of 3D sensing has been blossoming in markets such as the industrial market and the nascent automotive market, where 3D sensing is increasingly used for advanced driver assistance systems and driver monitoring systems. In this talk, Domengie provides an overview of the main application, market, industry and technology trends of the 3D sensing industry.
Radar and Wireless for Automotive: Market and Technology Trends 2019 Report by Yole Développement
The radar and 5G/V2X markets will both grow – one through market pull, the other through prospective enablement.
More information on https://www.i-micronews.com/products/radar-and-v2x-for-automotive-technologies-and-market-trends-2019/
Status of the Radar Industry: Players, Applications and Technology Trends 2020 by Yole Développement
Worth more than $20B in 2019, the radar industry is experiencing a major transformation prior to entering the commercial era.
Learn more about the report here: https://www.i-micronews.com/products/status-of-the-radar-industry-players-applications-and-technology-trends-2020/
3D Imaging & Sensing 2018 Reports by Yole Développement
The iPhone X initiated a trend. What happens next?
More information here: https://www.i-micronews.com/category-listing/product/p3d-imaging-sensing-2018.html
Technology, Business and Regulation of the Connected Car (mentoresd)
These slides were presented by Alison Chaiken of Mentor Graphics Embedded Software and John Kenney of Toyota at a Google+ On-Air Hangout. The Hangout can be viewed here: https://plus.google.com/u/1/b/112038386121410654017/events/ck73dq6nkp8guflfp9aqbbe7kog
System-in-Package Technology and Market Trends 2021 - Sample (Yole Développement)
Through enabling design and supply chain agility, SiP will reach $19B by 2026, with IDMs, OSATs, and foundries taking advantage of it.
More information : https://www.i-micronews.com/products/system-in-package-technology-and-market-trends-2021/
System-in-Package Technology and Market Trends 2020 Report by Yole Développement
How is System-in-Package capably meeting the stringent requirements of consumer applications?
More info here: https://www.i-micronews.com/products/system-in-package-technology-and-market-trends-2020/
The Autonomous Revolution of Vehicles & Transportation, 6/12/19 (Mark Goldstein)
I delivered an updated and expanded version of "The Autonomous Revolution of Vehicles and Transportation" to the IEEE Computer Society Phoenix (http://ewh.ieee.org/r6/phoenix/compsociety/) on 6/12/19 at DeVry University in Phoenix, Arizona.
It’s a detailed overview of the transformation of transportation through autonomous vehicles and the advent of Mobility-as-a-Service (MaaS) including enabling sensor and communication technologies as well as why Arizona is a hot bed for development and deployment plus a forward-looking view of markets and opportunities.
For the first time, the processor monitor covers FPGAs, CPUs, GPUs, and APUs, spanning all the IDMs, fabless companies, and foundries in the business.
More information : https://www.i-micronews.com/products/application-processor-quarterly-market-monitor/
Status of the CMOS Image Sensor Industry 2017 - Report by Yole Développement
New applications are transforming the market and technology playing field for CMOS image sensors
It’s ten years since the original Apple iPhone started the smartphone era. Since then, CMOS imaging has benefited from huge market demand and a technology-driven environment, resulting in an $11.6B industry in 2016. Photography and video remain the main application, and it is being totally transformed by new use cases, new devices and new technologies.
The mobile market is key for the CMOS image sensor (CIS) industry. Despite saturation in the number of handsets, the CIS market has been able to maintain a 10.5% compound annual growth rate (CAGR) for the 2016-2022 period due to the introduction of dual and 3D cameras. These additional cameras are changing the industry’s drivers from form factor and image quality to interactivity.
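As a rough check of what that growth rate implies (a back-of-the-envelope illustration, not a figure from the report itself, which may forecast a different 2022 value):

```python
base_2016 = 11.6          # CIS market in 2016, $B (from the text above)
cagr = 0.105              # 10.5% compound annual growth rate
years = 2022 - 2016

projection_2022 = base_2016 * (1 + cagr) ** years
print(f"Implied 2022 market size: ${projection_2022:.1f}B")  # roughly $21B
```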
Penetration into higher added value markets such as automotive, security and medical shows that CIS products are transforming use cases across the board. CIS technology adoption allows greater automation levels at low cost, while using newly available computing architectures such as deep learning. The CMOS image sensor industry is currently in a virtuous circle where a new technology is providing true customer value.
VCSELs – Market and Technology Trends 2019 by Yole Développement
New functionalities in smartphone and automotive are boosting the VCSEL market.
More information on https://www.i-micronews.com/products/vcsels-market-and-technology-trends-2019/
LiDARs for Automotive and Industrial Applications 2018 Report by Yole Développement
Will automotive change the LiDAR market?
More information on that report at https://www.i-micronews.com/report/product/lidars-for-automotive-and-industrial-applications-2018.html
Status of the MEMS Industry 2018 Market and Technology Report by Yole Développement
Megatrends are invigorating the MEMS industry.
More information on : https://www.i-micronews.com/category-listing/product/status-of-the-mems-industry-2018.html
For the first time in its history, the automotive industry must face new industrial and technological challenges while undergoing dramatic changes in its value chain.
More information: https://www.i-micronews.com/products/automotive-semiconductor-trends-2021/
Introducing new Cellular V2X technologies, designed to connect vehicles to each other (V2V), to pedestrians (V2P), to roadway infrastructure (V2I), to the network (V2N) — to basically everything (V2X).
Connected & Autonomous Vehicles: Cybersecurity on a Grand Scale, v1 (Bill Harpley)
A presentation given at 'How the Internet of Things is Changing Cyber Security', an event organised by Optimise Hub (Portsmouth University) on January 26th 2017 at Havant.
- This talk describes the issues relating to cybersecurity of Connected Cars and Autonomous Vehicles. It begins with an introduction to technology and standards. It then looks at the key security challenges and asks how prepared we are to deal with the future risks.
- It is a perfect case study in the challenge of achieving cybersecurity on a massive scale.
Future of Autonomous Vehicles
The dream of self-driving vehicles has been a long time coming. It is, however, now within reach, and the pressure is on to deliver on the vision. With sustained technology development, increased investment and rising public awareness, there is enormous interest in the imminent mainstream use of autonomous vehicles on the streets.
Although approaches vary around the world, policy makers and urban planners in leading locations are now seeking to collaborate more with manufacturers, mobility providers, tech suppliers and logistics operators in order to align regulation for testing and mass deployment. And it goes both ways.
The investments being made in autonomy have rapidly shifted from millions to billions, so unsurprisingly those public and private organisations that are providing the funds are keen to ensure that the ROI is credible. There is much to play for and, although there has been substantial progress over recent years, significant questions on safety, social impact, business models and performance are still unanswered.
The Future of Autonomous Vehicles project was undertaken to canvass the views of a wide range of experts from around the world in order to create a clearer, informed global perspective of how autonomy will evolve over the next decade. Beginning with a discussion with government officials just outside Shanghai in July 2018 and ending with leaders from across the US autonomous vehicle community in the hills above Silicon Valley in February of 2020, this project has covered a lot of ground. In all, eight workshops and six additional discussions have engaged with hundreds of different opinions, shared perspectives and built considered future pathways.
This report is a synthesis of many voices and opinions on the likely future of autonomous vehicles. We hope that it is useful.
Full project details are available on the dedicated mini site www.futureautonomous.org
Imaging Technologies for Automotive 2016 Report by Yole Développement
Imaging technology, which is currently mainly cameras, is exploding into the automotive space, and is set to grow at 20% CAGR to reach $7.3B in 2021
INFOTAINMENT AND ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS) PROPEL AUTOMOTIVE IMAGING
Since 2008, when a recession acted as a wakeup call to the whole industry, the automotive market has undergone obvious structural change. Capitalizing on technologies initially developed for smartphones, electronics have invaded, and imaging technology is now taking center stage. From less than one camera per car on average in 2015, there will be more than three cameras per car by 2021, which means 371 million automotive imaging devices.
Cameras were initially mounted for ADAS purposes on high-end vehicles, with deep learning image analysis techniques promoting early adoption. The Israeli company Mobileye has been instrumental in bringing this technology to market, along with ON Semiconductor, which provided the CMOS image sensor. Copycat competition will probably pick up as the market now justifies the initial investment in design and technology. It is now a well-established fact that vision-based autonomous emergency braking (AEB) is possible and saves lives. Adoption of forward ADAS cameras will therefore accelerate.
Growth of imaging for automotive is also being fueled by the park assist application, and 360° surround view camera volume is skyrocketing. While it’s becoming mandatory in the US to have a rearview camera, that uptake is dwarfed by 360° surround view cameras, which enable a “bird’s eye view” perspective. This trend most benefits companies like OmniVision at the sensor level, and Panasonic and Valeo, which have become the main manufacturers of automotive cameras.
Mirror replacement cameras are currently the big unknown, and take-off will primarily depend on their appeal and on car design regulation. Europe and Japan are at the forefront of this trend, which should become modestly significant by 2021.
Solid-state lidar is much talked about and will start to appear in high-end cars by 2021. Cost reduction will be a key driver as the push for semi-autonomous driving is felt more strongly by car manufacturers. The report analyses the impact of lidar on automotive vision in detail.
Night vision cameras using Long Wave Infrared (LWIR) technology were initially perceived as a status symbol. However, they’re increasingly appreciated for their ability to automatically detect pedestrians and wildlife. LWIR will therefore become integrated into ADAS systems in future.
3D cameras will be limited to in-cabin infotainment and driver monitoring. This technology will be key for luxury cars and therefore is of limited use today.
If any significant semi-autonomous trend picks up, the technology will probably become mandatory, due to safety issues.
More information on that report at http://www.i-micronews.com/reports.html
Fan-Out Packaging: Technologies and Market Trends 2019 Report by Yole Développement
Samsung and PTI, with panel-level packaging, have entered the Fan-Out battlefield.
More information on that report at : https://www.i-micronews.com/report/product/fan-out-packaging-technologies-and-market-trends-2019.htm
Image Signal Processor and Vision Processor Market and Technology Trends 2019 by Yole Développement
Artificial intelligence-powered newcomers are reshuffling the pack.
More information on https://www.i-micronews.com/products/image-signal-processor-and-vision-processor-market-and-technology-trends-2019/
Vertex Perspectives | AI-optimized Chipsets | Part I (Yanai Oron)
Businesses are increasingly adopting AI to create new applications and transform existing operations, while the growth of IoT and 5G networks drives big data and pushes process complexity beyond what human operators can manage. In this new environment, AI will be needed to write algorithms dynamically and automate the entire programming process. Fortunately, deep learning algorithms achieve better performance as data grows, unlike most other machine learning approaches.
Vertex Perspectives | AI-optimized Chipsets | Part I (Vertex Holdings)
Businesses are increasingly adopting AI to create new applications and transform existing operations, while the growth of IoT and 5G networks drives big data and pushes process complexity beyond what human operators can manage. In this new environment, AI will be needed to write algorithms dynamically and automate the entire programming process. Fortunately, deep learning algorithms achieve better performance as data grows, unlike most other machine learning approaches. To date, deep learning has primarily been a software play, and existing processors were not originally designed for these new applications. Hence the need to develop AI-optimized hardware.
This paper proposes smart monitoring of automobiles using IoT, offering the same functionality as a conventional automobile diagnostic scanner. The system consists of a Raspberry Pi, an Arduino Uno board, a web page for the service centre and various sensors. The sensors fitted in the car are connected to the Arduino board, their outputs are passed to the Raspberry Pi, and the readings are uploaded to the server over Ethernet. If a reading deviates from its normal range, the server sends an SMS to the user's mobile phone describing the condition. The user can also check the current status of the vehicle, and a dedicated emergency-request facility lets the user notify the service centre of an accident or sudden breakdown. The system also includes an obstacle sensor that detects obstacles within a set distance, and a dust sensor inside the cabin that monitors dust levels, which can cause health problems for passengers; in either case an SMS is sent to the user. The vehicle will not start if the driver's seat belt is not fastened, and detection of fire or water triggers automatic unlocking of the seat belts.
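A rough sketch of how the Raspberry Pi side of such a system might poll the Arduino's readings, upload them and raise alerts (illustrative only; the endpoint URL, thresholds, message format and the send_sms helper are hypothetical, not taken from the paper):

```python
import json
import time
import urllib.request

import serial  # pyserial: reads the Arduino's sensor frames over USB/UART

SERVER_URL = "http://example.com/api/readings"   # hypothetical service-centre endpoint
DUST_LIMIT = 150          # hypothetical dust threshold
OBSTACLE_LIMIT_CM = 30    # hypothetical obstacle distance threshold

def send_sms(message: str) -> None:
    """Placeholder for the SMS gateway described in the paper (provider-specific)."""
    print("SMS:", message)

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    if not line:
        continue
    reading = json.loads(line)  # assumed frame, e.g. {"dust": 80, "obstacle_cm": 120}

    # Upload the latest readings so the service centre's web page stays current.
    req = urllib.request.Request(SERVER_URL, data=json.dumps(reading).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

    # Alert the user when a reading deviates from its normal range.
    if reading.get("dust", 0) > DUST_LIMIT:
        send_sms("High dust level inside the cabin")
    if reading.get("obstacle_cm", 999) < OBSTACLE_LIMIT_CM:
        send_sms("Obstacle detected close to the vehicle")

    time.sleep(1)
```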
IEI's integrated factory solution improves production efficiency and warehouse management accuracy. To catch the wave of automated assembly, robot systems will play a major role alongside machine vision and motion control solutions. For factory automation control terminals, IEI offers industrial computing solutions with rugged IP65 designs, wide temperature ranges, and flexible add-on card expansion. To raise the efficiency of warehouse management, IEI provides UHF RFID and 1D/2D barcode reader solutions in various form factors.
Xpeng Motors' P7 self-driving roadmap and system design (Junli Gu)
Public presentation for NVIDIA GTC 2019. You can also refer to the recorded video at: https://on-demand.gputechconf.com/gtc/2019/video/_/S91049/
Securing future connected vehicles and infrastructure (Alan Tatourian)
Slides from a keynote I gave at AZ Infragard. Since this was a keynote, I tried to dazzle the audience by talking more about technology and portraying security only as part of the underlying architecture of cognitive autonomous systems.
Intland Software | codeBeamer ALM: What’s in the Pipeline for the Automotive ... (Intland Software GmbH)
This talk was presented by Andreas Pabinger and Benjamin Engele (Intland Software) at Intland Connect: Annual User Conference 2020 on 22 Oct 2020. To learn more, visit: https://intland.com/intland-connect-annual-user-conference-2020/
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-tschudi
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Yohann Tschudi, Technology and Market Analyst at Yole Développement, presents the "AI Is Moving to the Edge—What’s the Impact on the Semiconductor Industry?" tutorial at the May 2019 Embedded Vision Summit.
Artificial intelligence is proliferating into numerous edge applications and disrupting numerous industries. Clearly this represents a huge opportunity for technology suppliers. But it can be difficult to discern exactly what form this opportunity will take. For example, will edge devices perform AI computation locally, or in the cloud? Will edge devices use separate chips for AI, or will AI processing engines be incorporated into the main processor SoCs already used in these devices?
In this talk, Tschudi answers these questions by presenting and explaining his firm's market data and forecasts for AI processors in mobile phones, drones, smart home devices and personal robots. He explains why there is a strong trend towards executing AI computation at the edge, and quantifies the opportunity for separate processor chips and on-chip accelerators that address visual and audio AI tasks.
James Goel, MIPI Technical Steering Group chair, shares a state-of-the-art MASS (MIPI Automotive SerDes Solutions) display architecture that leverages the latest MIPI DSI-2℠ protocols using VDC-M visually lossless compression algorithms to optimize pixel bandwidth within tightly constrained display systems.
Overcoming the AIoT Obstacles through Smart Component Integration (Innodisk Corporation)
Enterprises in every industry are gearing up for AI’s integration with IoT at the edge. Analytics and cloud-based applications are crucial foundations for the AIoT infrastructure. But even more importantly, AIoT requires complete, real-time access to data to fulfill the needs of highly responsive edge computing applications.
In our experience, many customers are facing the same difficulties with regards to cyber level and physical level device integration in the new AI era. As the world's leading industrial storage and memory provider, Innodisk has a solid track record with more than 2000 customers, and expertise built on more than a decade of integration of hardware, firmware and software solutions.
Attend this webinar to learn about:
- Preparing your business for the new Internet of Things (IoT) and AI era
- How do we Overcome the Current Architectural Issues?
- Increasing process efficiency and delivering a better customer experience
- Facilitating new platforms that enable rapid development of next generation intelligent IoT systems
- Trends and technology in AIoT intelligent storage/ data optimization
Freescale i.MX golden presentation for bloggers, July 2011 (Dylan Ko)
Similar to “Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets,” a Presentation from Yole Développement
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
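To make the idea concrete, here is a minimal magnitude-based pruning sketch using PyTorch's built-in pruning utilities; it illustrates generic unstructured pruning, not the NeuroGRS method described in the talk:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example model standing in for a larger vision backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 10),   # assumes 32x32 inputs
)

# Zero out the 30% of weights with the smallest L1 magnitude in each layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"Sparsity after pruning: {zeros / total:.1%}")
```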
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques and then transitions to explaining the basics of machine learning and convolutional neural networks (CNNs), showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
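For readers new to the topic, a minimal PyTorch example of the building blocks Haghighat describes (convolution, nonlinearity, pooling and a fully connected classifier); this is a generic illustration, not code from the talk:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A toy image classifier: two conv blocks followed by a linear head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```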
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
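As one concrete example of the kind of open-source component Patel refers to, a minimal MLflow experiment-tracking snippet (illustrative only; this is not Yummly's actual pipeline):

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 100, "max_depth": 4}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)                          # record hyperparameters
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)                 # record evaluation metric
    mlflow.sklearn.log_model(model, "model")           # version the trained model
```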
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI,” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
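A minimal Python GStreamer pipeline of the kind discussed, using a test source rather than a camera (a sketch only; the hardware-accelerated element names vary by platform and vendor):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Generate video, convert its format, and display it. On an embedded target,
# the software elements would typically be swapped for the vendor's
# hardware-accelerated equivalents.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is reported on the bus.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg and msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print("Pipeline error:", err, debug)

pipeline.set_state(Gst.State.NULL)
```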
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
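As a point of reference for the two-stage family Laskar mentions, a short torchvision snippet running a pretrained Faster R-CNN on a single image (a generic example, not from the talk; the image path is a placeholder):

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Two-stage detector: a region proposal network followed by per-region classification.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("street.jpg"), torch.float)  # placeholder image
with torch.no_grad():
    detections = model([image])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.6:   # keep confident detections only
        print(label.item(), round(score.item(), 2), box.tolist())
```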
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
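To illustrate the pseudo-LiDAR idea Laganière mentions, a short NumPy sketch that back-projects a depth map into a 3D point cloud using pinhole camera intrinsics (a generic illustration; the intrinsics and depth values shown are made up):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into camera-frame XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Example with a synthetic 480x640 depth map and made-up intrinsics.
depth = np.full((480, 640), 10.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```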
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
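As a small, generic example of the kind of joint vision-language embedding such work builds on (using an off-the-shelf CLIP model, not the speaker's own models; the image path is a placeholder), one can score how well candidate task descriptions match an image of the scene:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("workspace.jpg")          # placeholder image of the robot's scene
texts = ["a mug on the table", "an open drawer", "a stacked tower of blocks"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher score = the text better describes the image, per the shared embedding space.
probs = outputs.logits_per_image.softmax(dim=-1)
for text, p in zip(texts, probs[0]):
    print(f"{text}: {p:.2f}")
```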
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input, to player and group activities, to player and team evaluation to planning. Despite major advancements in computer vision and machine learning, today sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges—such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any drift type in images such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
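A simplified sketch of the general idea (per-class probability thresholds applied to a stream of softmax outputs); this illustrates the concept only and is not Tragoudas's exact algorithm:

```python
import numpy as np

def drift_score(probs, predictions, thresholds):
    """Fraction of inputs whose winning-class probability falls below
    that class's calibration threshold. probs: (N, C) softmax outputs."""
    top_prob = probs[np.arange(len(probs)), predictions]
    limits = np.array([thresholds[c] for c in predictions])
    return float(np.mean(top_prob < limits))

# Threshold dictionary, e.g. estimated per class on clean validation data.
thresholds = {0: 0.85, 1: 0.80, 2: 0.90}

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[5, 1, 1], size=1000)   # stand-in for model outputs
preds = probs.argmax(axis=1)

print("drift score:", drift_score(probs, preds, thresholds))
```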
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
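For reference, the metrics mentioned above can be computed directly from confusion-matrix counts (a generic helper, not code from the talk):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard metrics from true/false positive and negative counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Example: 80 true positives, 10 false positives, 20 false negatives, 890 true negatives.
print(classification_metrics(tp=80, fp=10, fn=20, tn=890))
```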
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
He describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
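The summary does not describe SCOUT's pipeline in detail, but the general idea of layering physically motivated noise onto rendered frames can be illustrated with a hypothetical sketch; the noise types and parameters below (shot noise, read noise, hot pixels) are illustrative assumptions, not SCOUT's actual model.

import numpy as np

def add_orbital_sensor_noise(img, exposure_scale=1.0, hot_pixel_frac=1e-4,
                             read_noise_std=2.0, rng=None):
    """Illustrative noise model for synthetic space imagery: Poisson shot noise,
    Gaussian read noise and a sprinkling of saturated 'hot' pixels (e.g. radiation hits)."""
    rng = rng or np.random.default_rng()
    img = img.astype(np.float64) * exposure_scale
    noisy = rng.poisson(np.clip(img, 0, None)).astype(np.float64)   # shot noise
    noisy += rng.normal(0.0, read_noise_std, size=img.shape)         # read noise
    hot = rng.random(img.shape) < hot_pixel_frac                     # hot pixels
    noisy[hot] = 255.0
    return np.clip(noisy, 0, 255).astype(np.uint8)

frame = np.full((64, 64), 30, dtype=np.uint8)   # dim synthetic frame
print(add_orbital_sensor_noise(frame).mean())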
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
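To make the role of a Kalman filter in such a fusion concrete, here is a minimal, generic sketch (not from the talk) of a 1-D constant-velocity filter that fuses two range sensors, e.g. a camera and a radar, by applying their measurement updates sequentially in a centralized architecture; the noise values are arbitrary.

import numpy as np

def kalman_fuse(z_camera, z_radar, r_camera=4.0, r_radar=1.0, dt=0.1):
    """1-D constant-velocity Kalman filter fusing two position sensors."""
    F = np.array([[1, dt], [0, 1]])            # state transition (position, velocity)
    Q = np.diag([0.01, 0.01])                  # process noise
    H = np.array([[1.0, 0.0]])                 # both sensors observe position
    x = np.array([[z_camera[0]], [0.0]])
    P = np.eye(2)
    track = []
    for zc, zr in zip(z_camera, z_radar):
        x, P = F @ x, F @ P @ F.T + Q          # predict
        for z, r in ((zc, r_camera), (zr, r_radar)):   # sequential measurement updates
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + K * (z - (H @ x)[0, 0])
            P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track

truth = np.linspace(10, 20, 50)
rng = np.random.default_rng(1)
est = kalman_fuse(truth + rng.normal(0, 2, 50), truth + rng.normal(0, 1, 50))
print(est[-1])   # close to the true final position of 20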
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
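The talk summary does not specify Samsara's internal interfaces, but the pattern it describes—firmware owns a fixed contract, ML scientists ship models that plug into it—can be sketched with a hypothetical base class; the class and method names below are illustrative only.

from abc import ABC, abstractmethod
import numpy as np

class EdgeModel(ABC):
    """Contract owned by the firmware team; ML scientists provide implementations
    that fit within a fixed input shape and latency budget."""
    input_shape: tuple
    max_latency_ms: float = 50.0

    @abstractmethod
    def preprocess(self, frame: np.ndarray) -> np.ndarray: ...

    @abstractmethod
    def infer(self, tensor: np.ndarray) -> dict: ...

class DriverDistractionModel(EdgeModel):
    """Hypothetical model implementation supplied by an ML scientist."""
    input_shape = (224, 224, 3)

    def preprocess(self, frame):
        return frame.astype(np.float32) / 255.0

    def infer(self, tensor):
        return {"distracted": float(tensor.mean() > 0.5)}   # stand-in for a real network

model = DriverDistractionModel()
print(model.infer(model.preprocess(np.zeros((224, 224, 3), dtype=np.uint8))))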
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/combating-bias-in-production-computer-vision-systems-a-presentation-from-red-cell-partners/
Alex Thaman, Chief Architect at Red Cell Partners, presents the “Combating Bias in Production Computer Vision Systems” tutorial at the May 2023 Embedded Vision Summit.
Bias is a critical challenge in predictive and generative AI that involves images of humans. People have a variety of body shapes, skin tones and other features that can be challenging to represent completely in training data. Without attention to bias risks, ML systems have the potential to treat people unfairly, and even to make humans more likely to do so.
In this talk, Thaman examines the ways in which bias can arise in visual AI systems. He shares techniques for detecting bias and strategies for minimizing it in production AI systems.
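One common first step for detecting the kind of bias Thaman describes (not necessarily his method) is to compare a model's accuracy across demographic subgroups; a minimal sketch:

import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Report accuracy per subgroup (e.g. skin-tone bucket) and the gap between
    the best- and worst-served subgroups — a simple first bias check."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_true[mask] == y_pred[mask]).mean())
    return accs, max(accs.values()) - min(accs.values())

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(subgroup_accuracy_gap(y_true, y_pred, groups))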
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for secure software delivery. Gopi also has a strong connection with customers, leading design and architecture for strategic implementations. He is a frequent speaker and well-known leader in continuous delivery and in integrating security into software delivery.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
• See how to accelerate model training and optimize model performance with active learning
• Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
• Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
2. FROM IMAGE SIGNAL PROCESSOR TO FUSION PLATFORM
[Chart: price per unit, from under $1 to over $1,000, plotted against algorithm complexity]
• Image Signal Processor – set-of-pixels processing with image processing algorithms; lowest price per unit (examples: standalone ISP from Altek; sensing processing unit, an ISP stacked with the CIS).
• Vision processor – frame processing with computer vision and AI algorithms; mid-range price per unit (example: vision processor from Mobileye).
• Fusion platform – frame processing combined with other sensor inputs; highest price per unit (example: fusion platform from NVIDIA).
The amount of data processed, the performance and the power consumption all increase along with algorithm complexity.
3. THE VISION PROCESSOR, IMAGING-DEDICATED HARDWARE FOR AI
Two different architectures exist for the vision processor:
• Standalone chip (e.g., NXP S32V234, automotive): the chip is fully dedicated to processing algorithms for imaging. These algorithms are generally computer vision algorithms; for AI, a dedicated unit for inference acceleration can also be found in the SoC. The ISP can also be embedded as a unit.
• Unit(s) embedded in a SoC (e.g., Qualcomm Snapdragon, smartphone): in a single system-on-chip, multiple units combine to form the vision processor, including the ISP, CPU, memory and even a dedicated unit for inference acceleration. Algorithms for analyzing images run in a dedicated unit and can be assisted by other units that form the SoC (GPU, CPU and memory).
4. COMPUTING HARDWARE FOR AI SOLUTIONS LANDSCAPE
[Chart: power consumption (W, log scale, ~0.01 to 1,000) vs. performance (TOPS, log scale, ~0.01 to 100)]
The landscape splits into four segments: edge computing (battery-powered devices), autonomous machines (ADAS vehicles), high performance (data centers and robotic vehicles) and mobile (smartphones with neural engines).
Devices plotted include: Ambarella CV2, ARM ML, Cadence DNA 100, Cadence Vision P6 DSP, Hailo Hailo-8 DL, Imagination PowerVR AI, Intel Mobileye EyeQ4, Intel Mobileye EyeQ5, Kalray MPPA 3, KortiQ AIScale, NVIDIA Xavier, NXP S32V234, Renesas R-Car H3, Synopsys DesignWare EV, Tesla FSD, Texas Instruments Jacinto TDA3, Toshiba Visconti 4, Xilinx Zynq Ultrascale+ series, Google TPUv2, Intel Nervana, Xilinx Virtex Ultrascale+, Canaan Kendryte K210, CEVA NeuPro, Google Coral Edge TPU, GreenWaves GAP8, Intel Movidius, Lattice iCE40, NVIDIA Jetson Nano, NVIDIA Jetson TX2, Rockchip RK3399Pro, STMicroelectronics STM32 series, Bitmain Sophon series, Gyrfalcon Lightspeeur series, Apple A12, HiSilicon Kirin 980, MediaTek Helio P65, Qualcomm Snapdragon 855 and Samsung Exynos 9820.
Specific players target specific segments. It is complicated for one player to propose a product for each segment, since performance and consumption requirements are very different.
5. IMAGING AI ON THE EDGE: MAIN HARDWARE PLAYERS
[Map: main hardware players grouped by region – USA, Europe, China, Japan and Taiwan. Non-exhaustive list.]
6. Automotive
8. MARKET BREAKDOWN – ORDERS OF MAGNITUDE
2018 – main automotive imaging applications. Total automotive imaging revenue is ~$4.1B.
• Rear-view camera (viewing, ~$30 per system), ~42 million systems: 1x $22.5 viewing camera + 1x $7.5 ISP board → $945M cameras, $315M processing.
• Surround view (viewing, ~$140 per system), ~10 million systems: 4x $22.5 "for display" cameras + 1x $30 ISP board (or 4x $7.5 ISPs) → $900M cameras, $300M processing.
• Forward ADAS (sensing, ~$70 per system), ~20 million systems: 1x $30 "ADAS" camera + 1x $40 vision processing → $600M cameras, $800M processing.
• Forward ADAS, multi-camera (sensing, ~$130 per system), ~1 million systems: 1x $30 + 2x $15 "ADAS" cameras + 1x $70 vision processing → $60M cameras, $70M processing.
TOTAL automotive imaging: ~$4B, of which ~$2,500M for camera modules and ~$1,500M for vision processing.
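As a sanity check on the orders of magnitude above, the per-application revenue can be recomputed from system volumes and per-system content (a reader's reconstruction using the slide's rounded figures):

# revenue ($M) = systems (M units) x content per system ($)
apps = {
    "rear view":          dict(systems_m=42, cameras=22.5,        processing=7.5),
    "surround view":      dict(systems_m=10, cameras=4 * 22.5,    processing=30),
    "forward ADAS":       dict(systems_m=20, cameras=30,          processing=40),
    "forward ADAS multi": dict(systems_m=1,  cameras=30 + 2 * 15, processing=70),
}
cam_total = sum(a["systems_m"] * a["cameras"] for a in apps.values())
proc_total = sum(a["systems_m"] * a["processing"] for a in apps.values())
print(cam_total, proc_total, cam_total + proc_total)   # ~2505, ~1485, ~3990 ($M)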
9. SENSOR MODULE ASP FOR EACH AUTOMATION LEVEL
A Level 2+ car will have $500 worth of embedded sensors for autonomous driving.
[Chart: sensor bill of materials ($) and sensor module count by automation level]
• Level 1: ~$260 (9 sensor modules)
• Level 2: ~$405 (16 sensor modules)
• Level 2+: ~$500 (18 sensor modules)
• Level 3 and Level 4/5: roughly $1,750–$1,900 (22 and 28 sensor modules)
Sensor modules counted range from today's configurations to tomorrow's: ultrasonic (x6–x8), backup camera (x1), surround cameras (x4), forward camera (x1–x3), long-range radar (x1), short-range radar (x1–x4), in-cabin/driver camera (x1), dead reckoning (x1), LiDAR (x1–x4), microbolometer (x1) and event-based camera (x1).
10. VISION PROCESSORS IN AUTOMOTIVE
[Diagram: sensor inputs (ultrasonic, radar, forward camera, surround camera, driver camera, LiDAR) feeding vision processing and fusion hardware (MCU, FPGA, VP, fusion platform), with functionality and technology penetration by automation level]
Functionalities by level: Level 1 – ACC; Level 2 – ACC, TJA, PA, LKA; Level 2+/3 – ACC, TJA, PA, LKA, AEB, DM; Level 4 – adds HP; Level 5 – adds AP.
Abbreviations: ACC: Automatic Cruise Control; AEB: Advanced Emergency Braking; CTA: Cross Traffic Alert; TJA: Traffic Jam Assist; PA: Park Assist; LKA: Lane Keeping Assist; DM: Driver Monitoring; HP: Highway Pilot; AP: Auto Pilot; MCU: Microcontroller; FPGA: Field-Programmable Gate Array; VP: Vision Processor Unit; CPU: Central Processing Unit.
• Fusion of camera inputs is handled by a VP (Mobileye EyeQ3), an FPGA (Xilinx solutions) or a fusion platform (Renesas R-Car H3).
• For Level 2+ and Level 3, fusion of different input types is handled by a VP (Mobileye EyeQ4/5), Renesas next-generation devices or an NVIDIA platform.
• Fusion of camera, radar and LiDAR inputs is handled by a fusion platform (such as NVIDIA's solutions) with FPGA support for preprocessing.
11. EXAMPLE OF A FAMOUS VISION PROCESSOR: MOBILEYE EYEQ4
Description of the units of the Mobileye EyeQ4 (source: Mobileye EyeQ4 Processor Family – System Plus Consulting). 28nm CMOS, 2.5 TOPS @ 3W.
• The EyeQ4-High and EyeQ4-Mid processors are found in the ZF S-Cam4 tri-cam and mono-cam cameras.
• They integrate multi-threaded microprocessor cores from MIPS.
• These cores are coupled with the new generation of Mobileye's Vector Microcode Processor (VMP), Multithreaded Processing Cluster (MPC) and Programmable Macro Array (PMA) cores.
• The chip can manage up to three cameras at the same time.
12. AUTOMOTIVE ADAS PROCESSING PLATFORM – LEVEL 3?
[Photo: Audi zFAS board of the Audi A8, courtesy of Audi, combining ASIC/SoC, GPU, FPGA and CPU]
AD computing platforms are using the full spectrum of computing architectures.
14. COMPUTING HARDWARE FOR AUTONOMOUS DRIVING
[Chart: power dissipation (W, log scale, ~1 to 1,000) vs. performance (TOPS, log scale, ~0.1 to 1,000), with efficiency bands at ~0.1, ~1, ~10 and ~100 TOPS/W and groupings for Level 1-2, Level 2+, Level 2++, Level 3, Level 4-5 and robotic vehicles]
Devices plotted include: Ambarella CV2 and CV22, Hailo Hailo-8 DL, Intel Mobileye EyeQ3/EyeQ4/EyeQ5, Kalray Coolidge, NVIDIA Drive PX 2, Drive PX Xavier, Drive PX Pegasus, Drive PX Orin and Drive PX Orin x2, NXP S32V234, Qualcomm Snapdragon Ride (also with x2 accelerators), Renesas R-Car H3, Tesla FSD, TI Jacinto TDA3, Toshiba Visconti 4 and Xilinx Zynq Ultrascale+ EV.
• Robotic vehicles use chips in the >100W range (approaching 1 PetaFLOP); ADAS computing uses chips in the 2W to 20W range.
• The use of accelerators in the SoC, or as coprocessors in a SiP, allows performance to increase faster than consumption.
• Each step up in efficiency (~0.1 to ~1 to ~10 to ~100 TOPS/W) has taken roughly five years.
• The next battleground for the ADAS industry: the ADAS computing race is for higher performance at minimum consumption.
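The efficiency bands on this chart are simply performance divided by power dissipation; using the EyeQ4 figures quoted on the earlier slide (2.5 TOPS at 3 W) as a worked example:

def tops_per_watt(tops, watts):
    """Efficiency metric used on this slide: performance divided by power dissipation."""
    return tops / watts

# Mobileye EyeQ4, per the earlier slide: 2.5 TOPS at 3 W
print(f"EyeQ4: {tops_per_watt(2.5, 3.0):.2f} TOPS/W")   # ~0.83, near the ~1 TOPS/W band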
15. ECOSYSTEMS FOR AUTONOMOUS DRIVING
Key points:
• Because the technologies are different, the ecosystems and supply chains for ADAS and robotic cars are different.
• In both of these ecosystems, the supply chains are still organizing themselves.
• ADAS ecosystems are built around historical automotive OEMs, though classical supply chains go less and less through Tier-1s.
• Robotic vehicle ecosystems are built around full-stack solution partnerships such as those proposed by NVIDIA or Apollo, and these ecosystems are not exclusive of each other.
• Because the path to full autonomy through robotic cars is tough, many companies have chosen to be part of these shared and open ecosystems for software (AI, simulation, mapping, etc.) and hardware (sensors, computing, shuttles/robotaxis).
Hardware for ADAS is led by Mobileye, but competition is tough. On the robotic side there are two main ecosystems, with NVIDIA leading the computing hardware ecosystem thanks to its product quality and open software stacks. The Apollo ecosystem is huge and very promising, with a clear and precise roadmap that nevertheless seems a bit optimistic.
17. EVOLUTION OF SMARTPHONES TOWARDS AI
[Timeline: from phones built with several distinct components, to progressive SoCs, to SoCs with embedded AI applications, with process nodes shrinking from 180nm to 7nm]
• 2000 – Nokia 3310 (Texas Instruments MDA2WDI): basic tasks and graphic applications (Snake); 180nm.
• 2000 – Sharp J-SH04: first photo phone (one of the first photos taken by a phone).
• 2001 – Siemens SL45: first MP3 phone.
• 2007 – iPhone (Samsung ARM1176JZ(F)-S V1.0): first touchscreen phone, with the touchscreen at the heart of the user experience; CPU, GPU and memory in a single chip; 90nm.
• 2008 – HTC Tattoo (Qualcomm Snapdragon S1 MSM7225): notification display; 65nm.
• 2013 – Galaxy S4 (Samsung Exynos 5): high-resolution games; more and more functions integrated into the SoC (DSP, connectivity, VPU, ISP); 28nm.
• 2017 – iPhone X (Apple A11 Bionic): integration of AI for facial ID and biometry; the "first APU", as we call it today, with a neural engine; 10nm.
• 2018 – iPhone XS (Apple A12 Bionic) and Huawei Mate 20 Pro (HiSilicon Kirin 980): embedded AI applications, AR/VR; 7nm.
• 2019 – Apple A13 Bionic: photography.
18. IMAGE SIGNAL PROCESSOR IN MOBILE
The dream of embedding the ISP with the sensor:
• Since the advent of application processors for mobile, the ISP has been embedded as a dedicated unit to treat data from the camera.
• Some players, like Sony, want to or are trying to stack the CMOS sensor with the ISP. However, this does not add much value, and as cameras become more numerous and generate more data to handle, it is easier to embed the ISP in the APU.
[Die photos: Samsung Exynos 9 (courtesy of Samsung), Qualcomm Snapdragon 845 (courtesy of Qualcomm), Apple A12 (courtesy of Apple)]
19. IMAGE SIGNAL PROCESSOR IN MOBILE – STACKED WITH CMOS
Cost and average selling price assumptions: the ISP cost is around $1.50. Sony's technology is advanced, and we will assume that the ASP of the ISP for smartphones is equivalent to this cost. (Example device: Sony Xperia Z.)
20. SMARTPHONE APPLICATION PROCESSORS
Why develop a dedicated unit to compute AI applications on the edge? Processing AI on the edge makes data handling easier:
• Processing AI applications with a dedicated unit is faster and consumes less energy.
• No dependence on an internet connection: applications run anywhere, anytime.
• Improved privacy: data stay on the device and are not sent to the cloud.
• Personal data can be used to adapt to the user's habits.
• Less latency for critical applications like authentication.
AI's huge requirements (high computational need, real time, always-on, huge neural networks) must fit within mobile environment constraints (thermal efficiency, low consumption for long battery life, memory limitations), hence an AI-accelerator dedicated unit embedded in the AP.
21. CENTRALIZATION AND SPECIALIZATION – APPLE AP EVOLUTION
[Annotated die photos: Apple A4 (source: MuAnalysis), A6 (source: Chipworks) and A12 (source: TechInsights)]
Apple keeps adding more and more elements inside the same chip while introducing specialized computing units.
22. APPLICATION PROCESSOR WITH AI-DEDICATED UNIT – A FIVE-YEAR FORECAST
Application processors with an AI-dedicated unit: volume shipments (million units) and penetration rate.

                               2017     2018     2019e    2020e    2021e    2022e    2023e    2024e
Total smartphone shipments    1466.7   1428.9   1363.1   1331.7   1384.4   1414.0   1429.4   1423.9
Smartphones with AI            166.3    299.8    475.1    599.3    761.4    862.5    929.1    996.7
Penetration rate               11.3%    21.0%    34.9%    45.0%    55.0%    61.0%    65.0%    70.0%

• AI penetration in smartphones is getting very high, with a 50% rate expected by mid-2020.
• There is a clear risk for some competitors to be kicked out of the AP market by not integrating AI, following the first wave (Apple and Huawei) and the second wave (Samsung, Qualcomm). One way to catch up is to focus on audio AI by integrating a dedicated unit in the AP.
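The penetration rate in the table is just AI-enabled shipments divided by total shipments; a quick reader's check against the slide's figures:

# cross-check: penetration rate = smartphones with AI / total smartphone shipments
years = ["2017", "2018", "2019e", "2020e", "2021e", "2022e", "2023e", "2024e"]
total = [1466.7, 1428.9, 1363.1, 1331.7, 1384.4, 1414.0, 1429.4, 1423.9]
with_ai = [166.3, 299.8, 475.1, 599.3, 761.4, 862.5, 929.1, 996.7]
for y, t, a in zip(years, total, with_ai):
    print(f"{y}: {a / t:.1%}")   # reproduces 11.3% ... 70.0%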
23. APPLICATION PROCESSOR REVENUE GROWS TO $46B IN 2024
[Chart: APU designers' revenue ($B) by vendor – Apple, Samsung, Qualcomm, HiSilicon, MediaTek, Spreadtrum and others – growing from $32B in 2019 to $46B in 2024 (forecast)]
• Revenue is expected to grow modestly as the cost to manufacture APUs grows, while foundries and designers set prices to maintain margins.
• APUs containing embedded AI-accelerating hardware skew towards the higher range of ASPs, so expect more than $32B of 2024 revenue to be associated with AI-capable hardware.
"Revenue" here means APU designers' revenue.
25. THE VALUE CHAIN FOLLOWS THE DATA FLOW
• Sense – sensor hardware, $0.1–$1.
• Process – hardware, $1–$10. The output of the Process step is of the same type as the input; processing value is measured by how well it facilitates the Compute step.
• Compute – hardware, $10–$100, plus IP licenses and royalties. On top of the image or sound, information is provided (e.g., "Skiing, 99%"). The quality and precision of this information as a function of the computing power defines the value of the Compute step.
• Analyze – hardware, >$1,000. The maximum level of value is reached here, with dedicated information used to understand habits, centers of interest and so on, for targeted ads.
26. KEY TAKEAWAYS
What does the future hold?
• Computing hardware for AI organizes around power consumption and performance requirements. Edge devices occupy the lower bands, with automotive, HPC and robotics pushing the upper limits of power and performance.
• Imaging and AI in automotive: autonomy level correlates with computational requirements.
  - The march toward higher levels of autonomy continues, but is organized around different approaches: ADAS solutions incrementally automate more driving sub-tasks while living within the traditional automotive ecosystem, whereas robotic vehicles integrate the full stack as a market-disrupting approach.
  - AI-related hardware generates ~$1B of revenue in 2020, expected to exceed $13B in 2028, led by robotic vehicles.
  - The next battleground for AD computing should see solutions delivering 10–50 TOPS at ~1 TOPS/W.
• Imaging and AI in smartphones: improved AI and vision processing is making its mark in silicon. Roughly half of today's smartphone application processors contain an embedded unit dedicated to AI, growing to more than 70% in 2024 and representing more than $32B in APU designer revenue.