For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-tschudi
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Yohann Tschudi, Technology and Market Analyst at Yole Développement, presents the "AI Is Moving to the Edge—What’s the Impact on the Semiconductor Industry?" tutorial at the May 2019 Embedded Vision Summit.
Artificial intelligence is proliferating across edge applications and disrupting numerous industries. Clearly this represents a huge opportunity for technology suppliers. But it can be difficult to discern exactly what form this opportunity will take. For example, will edge devices perform AI computation locally, or in the cloud? Will edge devices use separate chips for AI, or will AI processing engines be incorporated into the main processor SoCs already used in these devices?
In this talk, Tschudi answers these questions by presenting and explaining his firm's market data and forecasts for AI processors in mobile phones, drones, smart home devices and personal robots. He explains why there is a strong trend towards executing AI computation at the edge, and quantifies the opportunity for separate processor chips and on-chip accelerators that address visual and audio AI tasks.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/03/market-analysis-on-socs-for-imaging-vision-and-deep-learning-in-automotive-and-mobile-markets-a-presentation-from-yole-developpement/
For more information about edge AI and vision, please visit:
http://www.edge-ai-vision.com
John Lorenz, Market and Technology Analyst for Computing and Software at Yole Développement, delivers the presentation “Market Analysis on SoCs for Imaging, Vision and Deep Learning in Automotive and Mobile Markets” at the Edge AI and Vision Alliance’s March 2020 Vision Industry and Technology Forum. Lorenz presents Yole Développement’s latest analysis on the evolution of SoCs for imaging, vision and deep learning.
Image Signal Processor and Vision Processor Market and Technology Trends 2019, by Yole Développement
Artificial intelligence-powered newcomers are reshuffling the pack.
More information on https://www.i-micronews.com/products/image-signal-processor-and-vision-processor-market-and-technology-trends-2019/
Artificial Intelligence Computing for Automotive 2019, Report by Yole Développement
Artificial Intelligence for automotive: why you should care.
More information on that report at: https://www.i-micronews.com/produit/artificial-intelligence-computing-for-automotive-2019/
The purpose of this project is to control a robot using a Raspberry Pi interface board, sensors and software that meet real-time requirements.
DC motors, various sensors and a camera are interfaced with the Raspberry Pi through its GPIO pins.
The system provides live streaming, lets users command the robot easily, and sends data from the different sensors, which can operate automatically or be controlled from anywhere at any time.
The robot's website and control page are designed using Java tools and HTML. The system is built on the IoT concept.
This will enable the Raspberry Pi to be used for more robotic applications and cut down the cost of building an IoT robot.
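As a sketch of the GPIO motor-control piece described above: the H-bridge driver, pin numbers and PWM frequency below are assumptions for illustration, not details from the original project.

```python
# Sketch of H-bridge DC motor control logic for a Raspberry Pi robot.
# The driver wiring (IN1/IN2/EN) and pin numbers are hypothetical.

def motor_signals(command, speed):
    """Map a drive command to H-bridge inputs (IN1, IN2) and a PWM duty cycle.

    speed is a percentage (0-100) applied to the driver's enable pin.
    """
    if not 0 <= speed <= 100:
        raise ValueError("speed must be 0-100")
    if command == "forward":
        return True, False, speed
    if command == "reverse":
        return False, True, speed
    if command == "stop":
        return False, False, 0
    raise ValueError("unknown command: " + command)

# On the Pi itself this logic would drive the GPIO, e.g. (hypothetical pins):
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup([17, 27, 22], GPIO.OUT)   # IN1, IN2, EN
#   pwm = GPIO.PWM(22, 1000)             # 1 kHz PWM on the enable pin
#   in1, in2, duty = motor_signals("forward", 60)
#   GPIO.output(17, in1); GPIO.output(27, in2); pwm.start(duty)
```

A web control page would translate button presses into these commands over the network, keeping the hardware-facing logic in one place.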
Wi-Fi (Wireless Fidelity) is a generic term owned by the Wi-Fi Alliance that refers to any wireless local area network (WLAN) based on the IEEE 802.11 standard.
This presentation is prepared as a reference for "E-Commerce Infrastructure" for BBA 6th semester students of Prime College. The document includes a general introduction to Wi-Fi technology, Wi-Fi specifications, the advantages of Wi-Fi and so on. Resources from various portals and slides from other authors have been used as references.
Qualcomm Webinar: Solving Unsolvable Combinatorial Problems with AIQualcomm Research
How do you find the best solution when faced with many choices? Combinatorial optimization is a field of mathematics that seeks to find optimal solutions for complex problems involving multiple variables. There are numerous business verticals that can benefit from combinatorial optimization, whether transport, supply chain, or the mobile industry.
More recently, we’ve seen gains from AI for combinatorial optimization, leading to scalability of the method, as well as significant reductions in cost. This method replaces the manual tuning of traditional heuristic approaches with an AI agent that provides a fast metric estimation.
In this presentation you will find out:
• Why AI is crucial in combinatorial optimization
• How it can be applied to two use cases: improving chip design and hardware-specific compilers
• The state-of-the-art results achieved by Qualcomm AI Research
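As a concrete illustration of why exhaustive combinatorial search does not scale, and why fast learned estimates are attractive, here is a minimal brute-force knapsack solver; the values, weights and capacity are illustrative, not from the webinar.

```python
from itertools import combinations

def best_subset(values, weights, capacity):
    """Exhaustive combinatorial search: best-value item subset under a weight cap.

    Enumerates all 2**n subsets, which is exactly the explosion that
    heuristics and AI-guided metric estimation try to avoid.
    """
    n = len(values)
    best_val, best_items = 0, ()
    for r in range(n + 1):
        for items in combinations(range(n), r):
            w = sum(weights[i] for i in items)
            if w <= capacity:
                v = sum(values[i] for i in items)
                if v > best_val:
                    best_val, best_items = v, items
    return best_val, best_items

# Three items: the optimum picks items 1 and 2 (weight 5, value 22).
print(best_subset([6, 10, 12], [1, 2, 3], 5))
```

With only a few dozen items this enumeration already becomes infeasible, which is where a learned estimator replacing hand-tuned heuristics pays off.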
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/09/3d-sensing-market-and-industry-update-a-presentation-from-the-yole-group/
Florian Domengie, Senior Technology and Market Analyst at Yole Intelligence (part of the Yole Group), presents the “3D Sensing: Market and Industry Update” tutorial at the May 2023 Embedded Vision Summit.
While the adoption of mobile 3D sensing has slowed in Android phones, the market has still been growing fast, thanks to Apple. Apple continues to adopt 3D cameras for iPhones on both the front and rear. Along the way, Apple has updated Face ID and has simplified and shrunk 3D camera optical structures. Meanwhile, because Android phone OEMs have mostly chosen not to incorporate 3D cameras, sensor suppliers and integrators have had to work hard to open up other consumer markets.
In addition to consumer markets, the use of 3D sensing has been blossoming in markets such as the industrial market and the nascent automotive market, where 3D sensing is increasingly used for advanced driver assistance systems and driver monitoring systems. In this talk, Domengie provides an overview of the main application, market, industry and technology trends of the 3D sensing industry.
Is wearable technology really the next big thing? Is it the next category of tech that will change the way we live? Or are smartwatches, fitness trackers, military gadgets and those snazzy Google Glass headsets just things we're supposed to accept as The Future?
Students will be able to comprehend the ideas of the Internet of Things and to develop basic IoT applications:
1. Explain the Internet of Things (IoT), the need for it and how it functions.
2. Identify the reasons that contributed to the development of IoT technology.
3. Use real IoT protocols for communication.
4. Describe the challenges in IoT.
5. Identify the different areas of IoT applications.
6. Develop basic IoT applications.
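As a small illustration of using real IoT protocols for communication, the sketch below hand-builds a minimal MQTT 3.1.1 PUBLISH packet; the topic and payload are hypothetical, and a real application would normally use a client library such as paho-mqtt rather than raw bytes.

```python
def mqtt_publish_packet(topic, payload):
    """Build a minimal MQTT 3.1.1 PUBLISH packet (QoS 0, no flags)."""
    t = topic.encode()
    p = payload.encode()
    body = len(t).to_bytes(2, "big") + t + p   # topic length + topic + payload
    # The remaining-length field uses MQTT's variable-length encoding:
    # 7 bits of length per byte, high bit set while more bytes follow.
    rem, n = b"", len(body)
    while True:
        n, d = divmod(n, 128)
        rem += bytes([d | (0x80 if n else 0)])
        if n == 0:
            break
    return bytes([0x30]) + rem + body          # 0x30 = PUBLISH, QoS 0

# Hypothetical sensor topic and reading:
pkt = mqtt_publish_packet("sensors/temp", "21.5")
```

Seeing the wire format once makes the protocol's design for constrained devices obvious: a 2-byte fixed header is all the overhead a small sensor pays per message.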
Trends in the Semiconductor Industry 2020, by JedeSmith
Semiconductor Review is a semiconductor technology print magazine that features technology news and CIO/CXO articles and lists the top semiconductor technology solution providers.
Want similar presentation ideas? Interact and follow me on Quora: https://www.quora.com/profile/Liju-Thomas-13 or
connect with me through Facebook: http://www.facebook.com/lijuthomas24
Researchers have long tried to build a device capable of seeing people through walls. However, previous efforts to develop such a system have involved expensive and bulky radar technology that uses a part of the electromagnetic spectrum available only to the military. Now a system being developed by Dina Katabi and Fadel Adib could give all of us the ability to spot people in different rooms using low-cost Wi-Fi technology. The device is low-power, portable and simple enough for anyone to use, giving people the ability to see through walls and closed doors. The system, called "Wi-Vi" (for "Wi-Fi" and "vision"), is based on a concept similar to radar and sonar imaging. But in contrast to radar and sonar, it transmits a low-power Wi-Fi signal and uses its reflections to track moving humans. It can do so even if the humans are in closed rooms or hiding behind a wall.
Put simply, when a Wi-Fi signal is transmitted at a wall, a portion of the signal penetrates through it and reflects off any humans on the other side. However, only a tiny fraction of the signal makes it through to the other room, with the rest being reflected by the wall or by other objects. Wi-Vi cancels out all these other reflections and keeps only those from the moving human body. Previous work demonstrated that the subtle reflections of wireless internet signals bouncing off a human could be used to track that person's movements, but those earlier experiments required that a wireless router already be in the room of the person being tracked. Wi-Vi instead uses Wi-Fi signals and recent advances in MIMO communications to build a device that can capture the motion of humans behind a wall and in closed rooms. Law enforcement personnel can use the device to avoid walking into an ambush and to minimize casualties in standoffs and hostage situations. Emergency responders can use it to see through rubble and collapsed structures. Ordinary users can leverage the device for gaming, intrusion detection, privacy-enhanced monitoring of children and the elderly, or personal security when stepping into dark alleys and unknown places.
The concept underlying seeing through opaque obstacles is similar to radar and sonar imaging. Specifically, when faced with a non-metallic wall, a fraction of the RF signal would traverse the wall, reflect off objects and humans, and come back imprinted with a signature of what is inside a closed room. By capturing these reflections, we can image objects behind a wall.
Wi-Vi is a see-through-wall technology that is low-bandwidth, low-power and compact, is accessible to non-military entities, and employs Wi-Fi signals in the 2.4 GHz ISM band.
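Wi-Vi's actual clutter rejection uses nulling across its MIMO transmit antennas; as a simplified stand-in for that idea, the sketch below removes static reflections by subtracting each antenna's time-averaged complex channel response, leaving only the time-varying component produced by moving bodies.

```python
import numpy as np

def remove_static_clutter(samples):
    """Suppress reflections from static objects (walls, furniture).

    samples: complex array of shape (time, antennas). A static reflector
    contributes a constant term per antenna, so subtracting the time
    average keeps only the time-varying part, i.e. moving bodies.

    This mean-subtraction is a pedagogical simplification of Wi-Vi's
    MIMO nulling, not the system's actual algorithm.
    """
    return samples - samples.mean(axis=0, keepdims=True)

# Toy demo: a constant "wall" reflection plus a small moving component.
t = np.arange(16)
wall = 5.0                                  # static reflection (constant)
mover = 0.2 * np.exp(1j * t)[:, None]       # phase rotating over time
cleaned = remove_static_clutter(wall + mover)
```

After subtraction, the large constant wall term is gone and only the rotating-phase component of the moving target remains, which is what a tracker would then process.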
LPWAN Technologies for Internet of Things (IoT) and M2M ScenariosPeter R. Egli
Rapid technological advances have made possible the miniaturization of network devices to meet the cost and power consumption requirements of IoT and M2M scenarios. What is missing from this picture is a radio technology with both long-range capability and a very low cost footprint. Existing radio technologies such as 3G/4G or short-range radio do not aptly meet the requirements of IoT scenarios because they are either too expensive or unable to provide the required range. Other wireless technologies are geared towards high bandwidth, which in most cases is not a requirement for IoT.
Emerging LPWAN technologies such as ETSI LTN or LoRaWAN are poised to fill the gap by providing long-range (up to 40 km), low-power connectivity. These technologies enable low-cost radio devices and operation, making it possible to scale up IoT applications.
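The range claims above can be sanity-checked with the standard free-space path-loss formula; the 868 MHz band and 40 km distance below are illustrative assumptions, and real-world range is far shorter than the free-space bound because of obstacles and fading.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz).

    FSPL = 20*log10(d) + 20*log10(f) + 32.44
    """
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def max_range_km(link_budget_db, freq_mhz):
    """Distance at which free-space loss exhausts the available link budget."""
    return 10 ** ((link_budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# At 868 MHz (a common LPWAN band in Europe), a 40 km hop costs only
# about 123 dB in free space, comfortably inside LPWAN link budgets.
print(round(fspl_db(40, 868), 1))
```

The formula also shows why LPWAN favors sub-GHz bands: halving the frequency buys 6 dB, i.e. double the free-space range, at the cost of bandwidth that IoT traffic rarely needs anyway.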
How will AI impact the semiconductor market through consumer applications?
More information on that report at : https://www.i-micronews.com/report/product/hardware-and-software-for-ai-2018-consumer-focus.html
Computing and AI Technologies for Mobile and Consumer Applications 2021 - Sample, by Yole Développement
As AI penetrates everyday products, the market for AI technologies for the consumer market will reach $5.6B in 2026.
More information : https://www.i-micronews.com/products/computing-and-ai-technologies-for-mobile-and-consumer-applications-2021/
Status of the CMOS Image Sensor Industry 2016: New Dynamics in Market and Technology, by Yole Développement
New functions are pushing change in CMOS image sensors, boosting the market toward $18.8B in 2021 at 10.4% CAGR
Beyond $10B: The CMOS image sensor industry keeps growing at high pace
Driven by renewed mobile and automotive applications, the CMOS image sensor (CIS) industry is expected to expand at a compound annual growth rate (CAGR) of 10.4% from 2015 to 2021, reaching US$18.8B market value by 2021.
Yole Développement expects sustained growth of the CMOS image sensor industry for the next five years. Increasing camera content in smartphones will more than offset slower smartphone volume growth. The trend toward dual and 3D cameras will have a major impact on CIS volumes. While it is too early to fully describe the strategies of the main actors, some products are already on the market. The 2016 report comprehensively covers the key market and technology choices.
One big story this year is the consumer market, which is recovering from the total collapse of digital photography. While action cameras seem to have reached a ceiling, new applications such as drones, robots, virtual reality and augmented reality are ready to rejuvenate this emblematic market. The automotive camera market has established itself as a key growth market for CIS. The advanced driver assistance system (ADAS) trend is further increasing pressure on vendors to provide sensors beyond their current technical capabilities. Image analysis is the new frontier, and early usage of artificial intelligence is capturing people's imagination. We are therefore in the middle of an explosive growth pattern that will not slow down before 2021. An exceptionally high 23% CAGR is predicted in automotive for the 2015-2021 period.
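The report's growth figures can be reproduced with the standard CAGR formula; the implied 2015 base of roughly $10.4B is derived here from the quoted numbers, not taken from the report directly.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value, rate, years):
    """Project a value forward at a constant annual growth rate."""
    return start_value * (1 + rate) ** years

# Working backwards: an $18.8B market in 2021 at 10.4% CAGR over the
# six years 2015-2021 implies a 2015 base of about $10.4B, which is
# consistent with the "Beyond $10B" figure above.
implied_2015 = 18.8 / 1.104 ** 6
print(round(implied_2015, 1))
```

The same two functions reproduce the 23% automotive CAGR claim or any of the other per-segment growth figures quoted in these reports.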
For the full video of this presentation, please visit:
https://www.embedded-vision.com/industry-analysis/video-interviews-demos/2d-and-3d-sensing-markets-applications-and-technologies-pre
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Guillaume Girardin, Photonics, Sensing and Display Division Director at Yole Développement, delivers the presentation "2D and 3D Sensing: Markets, Applications, and Technologies" at the Embedded Vision Alliance's September 2019 Vision Industry and Technology Forum. Girardin details the optical depth sensor market and application trends.
Proliferation of cameras for imaging and sensing is driving CMOS image sensor (CIS) growth.
More information on that report at : https://www.i-micronews.com/report/product/status-of-the-cis-industry-2018.html
Camera Module Industry 2017, Report by Yole Développement
New technologies and applications have restructured the Compact Camera Module industry
AT 12.2% CAGR FOR THE NEXT FIVE YEARS, THE COMPACT CAMERA MODULE INDUSTRY (CCM) IS A GROWTH POWERHOUSE WITH NUMEROUS LARGE COMPANIES THRIVING IN A DYNAMIC MARKET
In 2015, Yole Développement published its first report on the camera module industry and noted the immaturity of the ecosystem, with numerous small players, especially in module assembly. Now the dust has settled and giant camera module players such as LG Innotek, Semco, Foxconn Sharp, O-Film and Sunny Optical have emerged. This 2017 edition gives you insights into the trajectory of the industry and of more than 30 players serving mobile and other applications such as automotive and security.
Historically, one could differentiate the fate of the camera module market from that of its sub-parts, such as the image sensor, the lens and the autofocus or optical image stabilization system (voice coil motors, or VCMs). That differentiated growth has now ended, and every sub-segment is enjoying almost equal benefit from the rising market tide. This convergence is in part due to the end of Sony's quasi-monopoly in the image sensor sub-segment, where it has now been joined by Samsung and OmniVision. The story is very similar for Largan Precision in the lens set sub-segment, which is now facing renewed competition from Sunny Optical, Kantatsu and Genius Optical.
The last sub-domain of interest in this report is VCMs. The growth of VCM companies has been undercut by painful restructuring efforts. We had previously noted the inability of VCM makers to serve demand in the mobile market. Price pressure has changed the face of competition: competitors such as Mitsumi and Shicoh were forced out, while new players such as New Shicoh and Jahwa have taken center stage.
More information on that report at http://www.i-micronews.com/reports.html
Industrial, consumer, and automotive applications are driving the adoption of neuromorphic computing and sensing technologies. The first products are now hitting the market.
More information: https://www.i-micronews.com/products/neuromorphic-computing-and-sensing-2021/
Status of the CMOS Image Sensor Industry 2017 - Report by Yole Développement
New applications are transforming the market and technology playing field for CMOS image sensors
It’s ten years since the original Apple iPhone launched the smartphone era. Since then, CMOS imaging has benefited from huge market demand and a technology-driven environment, resulting in an $11.6B industry in 2016. Photography and video remain the main application, one that is being totally transformed by new use cases, new devices and new technologies.
The mobile market is key for the CMOS image sensor (CIS) industry. Despite saturation in the number of handsets, the CIS market has been able to maintain a 10.5% compound annual growth rate (CAGR) for the 2016-2022 period due to the introduction of dual and 3D cameras. These additional cameras are changing the industry’s drivers from form factor and image quality to interactivity.
Penetration into higher added value markets such as automotive, security and medical shows that CIS products are transforming use cases across the board. CIS technology adoption allows greater automation levels at low cost, while using newly available computing architectures such as deep learning. The CMOS image sensor industry is currently in a virtuous circle where a new technology is providing true customer value.
Presentation on the IoT market report: internet industry growth, overview, size, share, opportunities, company profiling, trends & forecast 2015-2021.
Video content analysis and video analytics are other terms for intelligent video. It analyses video surveillance feeds automatically and extracts vital data, such as the detection of an intruder in images. Intelligent video is commonly used for video motion detection, video pattern matching and auto-tracking.
Surveillance cameras are increasingly being used by security organizations to keep a close eye on the surrounding environment around the clock, seven days a week. IP technology enables the creation of an open, trustworthy and scalable surveillance system. While the amount of video data available grows, a person can only watch a limited amount of it. People are notorious for losing concentration quickly, and suspicious movements on screen are frequently missed. Intelligent video monitors continuously and improves monitoring accuracy and efficacy.
Intelligent video has another application: it transforms video data into a gold mine for business requirements. The camera captures customer behavior and provides critical data for marketing, retail operations, building layout design, traffic patterns and other activities. Going through hours of video from a dozen cameras used to be a difficult, labor-intensive and time-consuming operation; intelligent video quickly analyses large amounts of video data. Intelligent video is undoubtedly useful for monitoring and a variety of corporate tasks, but it is costly and difficult to implement because it necessitates high-performance computers and specialized software.
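Video motion detection, the most common intelligent-video function, can be approximated by simple frame differencing; the threshold and pixel-count values below are arbitrary illustrations, and production systems use far more robust background models.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Flag pixels whose intensity changed by more than `threshold`.

    Frame differencing is the simplest form of video motion detection:
    the static background cancels out, moving objects remain.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def has_motion(prev_frame, frame, threshold=25, min_pixels=50):
    """Raise an alert when enough pixels changed between two frames."""
    return int(motion_mask(prev_frame, frame, threshold).sum()) >= min_pixels

# Toy frames: a dark scene, then the same scene with a bright object.
background = np.zeros((100, 100), dtype=np.uint8)
with_object = background.copy()
with_object[10:30, 10:30] = 200      # a 20x20 "intruder"
```

The `min_pixels` gate is what separates a real event from sensor noise; tuning it per camera is exactly the kind of manual effort that learned detectors aim to replace.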
Kevin Yee, chair of MIPI Marketing Steering Group, and Ian Smith, MIPI technical content manager and author of the MIPI Alliance IoT White Paper, explain the advantages of using MIPI specifications within IoT devices and provide an overview of the MIPI specifications that are most relevant to the IoT market.
Being an innovator in the mobile app services industry, we have delivered numerous apps that interact with external hardware devices through BLE technology and sensors.
Next-Generation Human Machine Interaction in Displays 2019, report by Yole Développement
Sensors directly integrated in displays: still a long way to wow!
More information on https://www.i-micronews.com/products/next-generation-human-machine-interaction-in-displays-2019/
More information on that report at http://www.i-micronews.com/reports.html
MEMS value proposition in mobile devices
And 3D imaging is supposed to be the next big thing…
• High SNR
• Noise cancellation
• Voice recognition/activation
• Waterproofing
• Haptic feedback
• Gesture recognition
• Add dimensions to the interface
• 3D changing interface (microfluidic)
• High resolution imaging
• Liveness detection
• All environment detection (dry, wet, dirt)
• Anti-spoofing
• Mobile payment
• Multiple bandwidth handling (Worldphone)
• Low power consumption
• Low loss
• Accurate timing
• Accurate indoor positioning
• Accurate motion tracking
• Healthier life (sport, walking orientation)
• Danger and damage prevention
• Weather forecast/probe
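Several of these value propositions (sensor fusion, accurate motion tracking) rest on combining MEMS sensors. A classic low-cost approach is the complementary filter, sketched below with an illustrative blend factor; the specific alpha and rates are assumptions, not from the slide.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer tilt estimate.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is drift-free but noisy. Blending the two (alpha weights the gyro
    path) yields a stable, responsive angle estimate.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With a stationary device (zero gyro rate, accelerometer reading 10
# degrees of tilt), the estimate converges toward the true 10 degrees.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
```

This one-line filter is why sensor-fusion features like step counting and screen rotation can run on a tiny always-on microcontroller rather than the main SoC.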
(Slide graphic: a roadmap pairing MEMS sensors with emerging use cases: pressure sensors for smart buildings and automotive; fingerprint sensors for mobile payment; sensor fusion for activity monitoring by 2020; gyroscopes; 3D cameras for enhanced communication, gaming and 3D avatars; microphones for virtual personal assistance and always-on virtual personal assistance; gas sensors for gas detection. Two industries controlled by giant companies with ~$200B in revenue; OIS, microphone and dead-reckoning sensors could drive the demand. Players shown: Apple, Facebook, Google, Samsung and autonomous vehicle makers.)
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2021/09/the-transformation-from-imaging-to-sensing-driving-a-market-revolution-a-presentation-from-yole-developpement/
Pierre Cambou, Principal Analyst at Yole Développement, presents the “Transformation from Imaging to Sensing: Driving a Market Revolution” tutorial at the May 2021 Embedded Vision Summit.
Over the past 20 years, digital imaging has grown to become a huge industry with a focus on producing images for human consumption. More recently, the emphasis has begun shifting to using images as sensory inputs to machines. In this talk, Cambou explores how this shift is transforming the imaging industry.
Cambou examines market dynamics in the mobile, consumer, computing, automotive, medical, security, industrial, and aerospace and defense segments. He explains how image sensor sales are being affected by this shift, using the example of 3D face recognition in mobile. He also discusses how image-related computing is being impacted. For example, while in the past most devices had to incorporate some kind of ISP, now the VPU is becoming the new imperative.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/deploying-large-models-on-the-edge-success-stories-and-challenges-a-presentation-from-qualcomm/
Vinesh Sukumar, Senior Director of Product Management at Qualcomm Technologies, presents the “Deploying Large Models on the Edge: Success Stories and Challenges” tutorial at the May 2024 Embedded Vision Summit.
In this talk, Dr. Sukumar explains and demonstrates how Qualcomm has been successful in deploying large generative AI and multimodal models on the edge for a variety of use cases in consumer and enterprise markets. He examines key challenges that must be overcome before large models at the edge can reach their full commercial potential. He also highlights how Qualcomm is addressing these challenges through upgraded processor hardware, improved developer tools and a comprehensive library of fully optimized AI models in the Qualcomm AI Hub.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/scaling-vision-based-edge-ai-solutions-from-prototype-to-global-deployment-a-presentation-from-network-optix/
Maurits Kaptein, Chief Data Scientist at Network Optix and Professor at the University of Eindhoven, presents the “Scaling Vision-based Edge AI Solutions: From Prototype to Global Deployment” tutorial at the May 2024 Embedded Vision Summit.
The Embedded Vision Summit brings together innovators in silicon, devices, software and applications and empowers them to bring computer vision and perceptual AI into reliable and scalable products. However, integrating recent hardware, software and algorithm innovations into prime-time-ready products is quite challenging. Scaling from a proof of concept—for example, a novel neural network architecture performing a valuable task efficiently on a new piece of silicon—to an AI vision system installed in hundreds of sites requires surmounting myriad hurdles.
First, building on Network Optix’s 14 years of experience, Professor Kaptein details how to overcome the networking, fleet management, visualization and monetization challenges that come with scaling a global vision solution. Second, Kaptein discusses the complexities of making vision AI solutions device-agnostic and remotely manageable, proposing an open standard for AI model deployment to edge devices. The proposed standard aims to simplify market entry for silicon manufacturers and enhance scalability for solution developers. Kaptein outlines the standard’s core components and invites collaborative contributions to drive market expansion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/whats-next-in-on-device-generative-ai-a-presentation-from-qualcomm/
Jilei Hou, Vice President of Engineering and Head of AI Research at Qualcomm Technologies, presents the “What’s Next in On-device Generative AI” tutorial at the May 2024 Embedded Vision Summit.
The generative AI era has begun! Large multimodal models are bringing the power of language understanding to machine perception, and transformer models are expanding to allow machines to understand using multiple types of sensors. This new wave of approaches is poised to revolutionize user experiences, disrupt industries and enable powerful new capabilities. For generative AI to reach its full potential, however, we must deploy it on edge devices, providing improved latency, pervasive interaction and enhanced privacy.
In this talk, Hou shares Qualcomm’s vision of the compelling opportunities enabled by efficient generative AI at the edge. He also identifies the key challenges that the industry must overcome to realize the massive potential of these technologies. And he highlights research and product development work that Qualcomm is doing to lead the way via an end-to-end system approach—including techniques for efficient on-device execution of LLMs, LVMs and LMMs, methods for orchestration of large models at the edge and approaches for adaptation and personalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/learning-compact-dnn-models-for-embedded-vision-a-presentation-from-the-university-of-maryland-at-college-park/
Shuvra Bhattacharyya, Professor at the University of Maryland at College Park, presents the “Learning Compact DNN Models for Embedded Vision” tutorial at the May 2023 Embedded Vision Summit.
In this talk, Bhattacharyya explores methods to transform large deep neural network (DNN) models into effective compact models. The transformation process that he focuses on—from large to compact DNN form—is referred to as pruning. Pruning involves the removal of neurons or parameters from a neural network. When performed strategically, pruning can lead to significant reductions in computational complexity without significant degradation in accuracy. It is sometimes even possible to increase accuracy through pruning.
Pruning provides a general approach for facilitating real-time inference in resource-constrained embedded computer vision systems. Bhattacharyya provides an overview of important aspects to consider when applying or developing a DNN pruning method and presents details on a recently introduced pruning method called NeuroGRS. NeuroGRS considers structures and trained weights jointly throughout the pruning process and can result in significantly more compact models compared to other pruning methods.
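NeuroGRS itself considers structures and trained weights jointly; as a much simpler illustration of the basic structural-pruning operation it builds on, the sketch below removes low-importance hidden neurons by weight magnitude (the function name and keep ratio are illustrative, not taken from the talk):

```python
import numpy as np

def prune_neurons(W_in, W_out, keep_ratio=0.5):
    """Drop the lowest-L2-norm hidden neurons of a two-layer MLP.

    W_in:  (hidden, inputs)  weights into the hidden layer
    W_out: (outputs, hidden) weights out of the hidden layer
    """
    norms = np.linalg.norm(W_in, axis=1)         # one importance score per neuron
    n_keep = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # indices of the strongest neurons
    return W_in[keep, :], W_out[:, keep]

rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
W_in_p, W_out_p = prune_neurons(W_in, W_out)
print(W_in_p.shape, W_out_p.shape)  # (4, 4) (3, 4)
```

After such a structural cut, the compact model is typically fine-tuned to recover any lost accuracy.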
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/introduction-to-computer-vision-with-cnns-a-presentation-from-mohammad-haghighat/
Independent consultant Mohammad Haghighat presents the “Introduction to Computer Vision with Convolutional Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
This presentation covers the basics of computer vision using convolutional neural networks. Haghighat begins by introducing some important conventional computer vision techniques, then transitions to the basics of machine learning and convolutional neural networks (CNNs), showing how CNNs are used in visual perception.
Haghighat illustrates the building blocks and computational elements of neural networks through examples. This session provides an overview of how modern computer vision algorithms are designed, trained and used in real-world applications.
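The central computational element of the networks Haghighat describes is the convolution. A minimal NumPy sketch of one valid-mode filter application follows (the test image and Sobel-style kernel are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly at the step edge in this image.
img = np.zeros((5, 5))
img[:, 2:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = conv2d(img, sobel_x)
print(edges)  # peak response of 4.0 along the edge, 0.0 in flat regions
```

A trained CNN learns many such kernels per layer, rather than using hand-designed ones.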
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/selecting-tools-for-developing-monitoring-and-maintaining-ml-models-a-presentation-from-yummly/
Parshad Patel, Data Scientist at Yummly, presents the “Selecting Tools for Developing, Monitoring and Maintaining ML Models” tutorial at the May 2023 Embedded Vision Summit.
With the boom in tools for developing, monitoring and maintaining ML models, data science teams have many options to choose from. Proprietary tools provided by cloud service providers are enticing, but teams may fear being locked in—and may worry that these tools are too costly or missing important features when compared with alternatives from specialized providers.
Fortunately, most proprietary, fee-based tools have an open-source component that can be integrated into a home-grown solution at low cost. This can be a good starting point, enabling teams to get started with modern tools without making big investments and leaving the door open to evolve tool selection over time. In this talk, Patel presents a step-by-step process for creating an MLOps tool set that enables you to deliver maximum value as a data scientist. He shares how Yummly built pipelines for model development and put them into production using open-source projects.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/building-accelerated-gstreamer-applications-for-video-and-audio-ai-a-presentation-from-wave-spectrum/
Abdo Babukr, Accelerated Computing Consultant at Wave Spectrum, presents the “Building Accelerated GStreamer Applications for Video and Audio AI” tutorial at the May 2023 Embedded Vision Summit.
GStreamer is a popular open-source framework for creating streaming media applications. Developers often use GStreamer to streamline the development of computer vision and audio perception applications. Since perceptual algorithms are often quite demanding in terms of processing performance, in many cases developers need to find ways to accelerate key GStreamer building blocks, taking advantage of specialized features of their target processor or co-processor.
In this talk, Babukr introduces GStreamer and shows how to use it to build computer vision and audio perception applications. He also shows how to create efficient, high-performance GStreamer applications that utilize specialized hardware features.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/understanding-selecting-and-optimizing-object-detectors-for-edge-applications-a-presentation-from-walmart-global-tech/
Md Nasir Uddin Laskar, Staff Machine Learning Engineer at Walmart Global Tech, presents the “Understanding, Selecting and Optimizing Object Detectors for Edge Applications” tutorial at the May 2023 Embedded Vision Summit.
Object detectors count objects in a scene and determine their precise locations, while also labeling them. Object detection plays a crucial role in many vision applications, from autonomous driving to smart appliances. In many of these applications, it’s necessary or desirable to implement object detection at the edge.
In this presentation, Laskar explores the evolution of object detection algorithms, from traditional approaches to deep learning-based methods and transformer-based architectures. He delves into widely used approaches for object detection, such as two-stage R-CNNs and one-stage YOLO algorithms, and examines their strengths and weaknesses. And he provides guidance on how to evaluate and select an object detector for an edge application.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/introduction-to-modern-lidar-for-machine-perception-a-presentation-from-the-university-of-ottawa/
Robert Laganière, Professor at the University of Ottawa and CEO of Sensor Cortek, presents the “Introduction to Modern LiDAR for Machine Perception” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Laganière provides an introduction to light detection and ranging (LiDAR) technology. He explains how LiDAR sensors work and their main advantages and disadvantages. He also introduces different approaches to LiDAR, including scanning and flash LiDAR.
Laganière explores the types of data produced by LiDAR sensors and explains how this data can be processed using deep neural networks. He also examines the synergy between LiDAR and cameras, and the concept of pseudo-LiDAR for detection.
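One common way to hand LiDAR point-cloud data to a deep neural network, sketched here only as a hedged illustration (cell size and extent are arbitrary), is to rasterize the cloud into a bird's-eye-view occupancy grid:

```python
import numpy as np

def to_bev_grid(points, cell=0.5, extent=10.0):
    """Rasterize an (N, 3) point cloud (x, y, z in meters) into a
    bird's-eye-view occupancy grid covering [-extent, extent) per axis."""
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)
    ix = ((points[:, 0] + extent) / cell).astype(int)
    iy = ((points[:, 1] + extent) / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)  # drop out-of-range returns
    grid[iy[ok], ix[ok]] = 1
    return grid

pts = np.array([[0.0, 0.0, 0.2], [3.2, -1.1, 0.0], [50.0, 0.0, 0.0]])
g = to_bev_grid(pts)
print(g.shape, int(g.sum()))  # (40, 40) 2; the 50 m return falls outside the grid
```

Detection networks then treat the grid (often with extra channels such as height or intensity) like an image.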
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/11/vision-language-representations-for-robotics-a-presentation-from-the-university-of-pennsylvania/
Dinesh Jayaraman, Assistant Professor at the University of Pennsylvania, presents the “Vision-language Representations for Robotics” tutorial at the May 2023 Embedded Vision Summit.
In what format can an AI system best present what it “sees” in a visual scene to help robots accomplish tasks? This question has been a long-standing challenge for computer scientists and robotics engineers. In this presentation, Jayaraman provides insights into cutting-edge techniques being used to help robots better understand their surroundings, learn new skills with minimal guidance and become more capable of performing complex tasks.
Jayaraman discusses recent advances in unsupervised representation learning and explains how these approaches can be used to build visual representations that are appropriate for a controller that decides how the robot should act. In particular, he presents insights from his research group’s recent work on how to represent the constituent objects and entities in a visual scene, and how to combine vision and language in a way that permits effectively translating language-based task descriptions into images depicting the robot’s goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/adas-and-av-sensors-whats-winning-and-why-a-presentation-from-techinsights/
Ian Riches, Vice President of the Global Automotive Practice at TechInsights, presents the “ADAS and AV Sensors: What’s Winning and Why?” tutorial at the May 2023 Embedded Vision Summit.
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Riches explores likely future demand for automotive radars, cameras and LiDARs.
Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/computer-vision-in-sports-scalable-solutions-for-downmarkets-a-presentation-from-sportlogiq/
Mehrsan Javan, Co-founder and CTO of Sportlogiq, presents the “Computer Vision in Sports: Scalable Solutions for Downmarket Leagues” tutorial at the May 2023 Embedded Vision Summit.
Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this requires a fully automated, robust end-to-end pipeline, spanning from visual input, to player and group activities, to player and team evaluation, to planning. Despite major advancements in computer vision and machine learning, today's sports analytics solutions are limited to top leagues and are not widely available for downmarket leagues and youth sports.
In this talk, Javan explains how his company has developed scalable and robust computer vision solutions to democratize sports analytics and offer pro-league-level insights to leagues with modest resources, including youth leagues. He highlights key challenges—such as the requirement for low-cost, low-latency processing and the need for robustness despite variations in venues. He discusses the approaches Sportlogiq tried and how it ultimately overcame these challenges, including the use of transformers and fusion of multiple types of data streams to maximize accuracy.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/detecting-data-drift-in-image-classification-neural-networks-a-presentation-from-southern-illinois-university/
Spyros Tragoudas, Professor and School Director at Southern Illinois University Carbondale, presents the “Detecting Data Drift in Image Classification Neural Networks” tutorial at the May 2023 Embedded Vision Summit.
An unforeseen change in the input data is called “drift,” and may impact the accuracy of machine learning models. In this talk, Tragoudas presents a novel scheme for diagnosing data drift in the input streams of image classification neural networks. His proposed method for drift detection and quantification uses a threshold dictionary for the prediction probabilities of each class in the neural network model.
The method is applicable to any drift type in images such as noise and weather effects, among others. Tragoudas shares experimental results on various data sets, drift types and neural network models to show that his proposed method estimates the drift magnitude with high accuracy, especially when the level of drift significantly impacts the model’s performance.
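As a greatly simplified sketch of the threshold-dictionary idea (the quantile choice and function names are assumptions, not Tragoudas's exact algorithm), one can learn a per-class confidence floor on in-distribution validation data and flag inputs that fall below it:

```python
import numpy as np

def fit_thresholds(probs, labels, quantile=0.0):
    """Per-class lower thresholds on the winning-class probability,
    learned from in-distribution validation predictions."""
    return {int(c): float(np.quantile(probs[labels == c], quantile))
            for c in np.unique(labels)}

def drift_rate(pred_classes, pred_probs, thresholds):
    """Fraction of inputs whose confidence falls below its class threshold."""
    flags = [p < thresholds[c] for c, p in zip(pred_classes, pred_probs)]
    return sum(flags) / len(flags)

val_probs = np.array([0.95, 0.90, 0.92, 0.88, 0.91, 0.90])
val_labels = np.array([0, 0, 0, 1, 1, 1])
th = fit_thresholds(val_probs, val_labels)  # class minima: {0: 0.90, 1: 0.88}
rate = drift_rate([0, 1, 0, 1], [0.97, 0.50, 0.93, 0.60], th)
print(rate)  # 0.5: half of the stream looks out of distribution
```

A rising flag rate over a window of inputs then signals drift without requiring ground-truth labels at deployment time.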
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/deep-neural-network-training-diagnosing-problems-and-implementing-solutions-a-presentation-from-sensor-cortek/
Fahed Hassanat, Chief Operating Officer and Head of Engineering at Sensor Cortek, presents the “Deep Neural Network Training: Diagnosing Problems and Implementing Solutions” tutorial at the May 2023 Embedded Vision Summit.
In this presentation, Hassanat delves into some of the most common problems that arise when training deep neural networks. He provides a brief overview of essential training metrics, including accuracy, precision, false positives, false negatives and F1 score.
Hassanat then explores training challenges that arise from problems with hyperparameters, inappropriately sized models, inadequate models, poor-quality datasets, imbalances within training datasets and mismatches between training and testing datasets. To help detect and diagnose training problems, he also covers techniques such as understanding performance curves, recognizing overfitting and underfitting, analyzing confusion matrices and identifying class interaction issues.
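The metrics listed above follow directly from the confusion counts; a minimal binary-classification sketch (function name and example labels are illustrative):

```python
def prf1(y_true, y_pred):
    """Binary precision, recall and F1 from parallel label lists."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))       # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred)) # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred)) # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = prf1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r, f1)  # 0.666... for all three: 2 TP, 1 FP, 1 FN
```

Watching these per class, rather than overall accuracy alone, is what exposes the class-imbalance and class-interaction issues the talk covers.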
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/ai-start-ups-the-perils-of-fishing-for-whales-war-stories-from-the-entrepreneurial-front-lines-a-presentation-from-seechange-technologies/
Tim Hartley, Vice President of Product for SeeChange Technologies, presents the “AI Start-ups: The Perils of Fishing for Whales (War Stories from the Entrepreneurial Front Lines)” tutorial at the May 2023 Embedded Vision Summit.
You have a killer idea that will change the world. You’ve thought through product-market fit and differentiation. You have seed funding and a world-beating team. Best of all, you’ve caught the attention of major players in your industry. You’ve reached peak “start-up”—that point of limitless possibility—when you go to bed with the same level of energy and enthusiasm you had when you woke. And then the first proof of concept starts…
In this talk, Hartley lays out some of the pitfalls that await those building the next big thing. Using real examples, he shares some of the dos and don’ts, particularly when dealing with that big potential first customer. Hartley discusses the importance of end-to-end design, ensuring your product solves real-world problems. He explores how far the big companies will tell you to jump—and then jump again—for free. And, most importantly, how to build long-term partnerships with major corporations without relying on over-promising sales pitches.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/a-computer-vision-system-for-autonomous-satellite-maneuvering-a-presentation-from-scout-space/
Andrew Harris, Spacecraft Systems Engineer at SCOUT Space, presents the “Developing a Computer Vision System for Autonomous Satellite Maneuvering” tutorial at the May 2023 Embedded Vision Summit.
Computer vision systems for mobile autonomous machines experience a wide variety of real-world conditions and inputs that can be challenging to capture accurately in training datasets. Few autonomous systems experience more challenging conditions than those in orbit. In this talk, Harris describes how SCOUT Space has designed and trained satellite vision systems using dynamic and physically informed synthetic image datasets.
Harris describes how his company generates synthetic data for this challenging environment and how it leverages new real-world data to improve its datasets. In particular, he explains how these synthetic datasets account for and can replicate real sources of noise and error in the orbital environment, and how his company supplements them with in-space data from the first SCOUT-Vision system, which has been in orbit since 2021.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/bias-in-computer-vision-its-bigger-than-facial-recognition-a-presentation-from-santa-clara-university/
Susan Kennedy, Assistant Professor of Philosophy at Santa Clara University, presents the “Bias in Computer Vision—It’s Bigger Than Facial Recognition!” tutorial at the May 2023 Embedded Vision Summit.
As AI is increasingly integrated into various industries, concerns about its potential to reproduce or exacerbate bias have become widespread. While the use of AI holds the promise of reducing bias, it can also have unintended consequences, particularly in high-stakes computer vision applications such as facial recognition. However, even seemingly low-stakes computer vision applications such as identifying potholes and damaged roads can also present ethical challenges related to bias.
This talk explores how bias in computer vision often poses an ethical challenge, regardless of the stakes involved. Kennedy discusses the limitations of technical solutions aimed at mitigating bias, and why “bias-free” AI may not be achievable. Instead, she focuses on the importance of adopting a “bias-aware” approach to responsible AI design and explores strategies that can be employed to achieve this.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/sensor-fusion-techniques-for-accurate-perception-of-objects-in-the-environment-a-presentation-from-sanborn-map-company/
Baharak Soltanian, Vice President of Research and Development for the Sanborn Map Company, presents the “Sensor Fusion Techniques for Accurate Perception of Objects in the Environment” tutorial at the May 2023 Embedded Vision Summit.
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds and trajectories. In demanding applications, this is often best done using a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, Soltanian introduces techniques for combining data from multiple sensors to obtain accurate information about objects in the environment.
Soltanian briefly introduces the roles played by Kalman filters, particle filters, Bayesian networks and neural networks in this type of fusion. She then examines alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs associated with different approaches to sensor fusion as used to enhance the ability of machines to understand their environment.
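As a hedged illustration of the Kalman-filter role in such fusion (the sensor readings and variances below are invented), a scalar measurement update weights each source by its uncertainty:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse a prior estimate (mean x,
    variance P) with a measurement z of variance R."""
    K = P / (P + R)                      # gain: how much to trust the measurement
    return x + K * (z - x), (1 - K) * P  # fused mean and reduced variance

# Fuse a radar range (10.0 m, variance 4.0) with a LiDAR range (12.0 m, variance 1.0).
x, P = kalman_update(10.0, 4.0, 12.0, 1.0)
print(x, P)  # fused estimate pulled toward the lower-variance LiDAR: ~11.6 m
```

The fused variance (0.8) is lower than either sensor's alone, which is the quantitative payoff of combining them.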
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/updating-the-edge-ml-development-process-a-presentation-from-samsara/
Jim Steele, Vice President of Embedded Software at Samsara, presents the “Updating the Edge ML Development Process” tutorial at the May 2023 Embedded Vision Summit.
Samsara (NYSE:IOT) is focused on digitizing the world of operations. The company helps customers across many industries—including food and beverage, utilities and energy, field services and government—get information about their physical operations into the cloud, so they can operate more safely, efficiently and sustainably. Samsara’s sensors collect billions of data points per day and on-device processing is instrumental to its success. The company is constantly developing, improving and deploying ML models at the edge.
Samsara has found that the traditional development process—where ML scientists create models and hand them off to firmware engineers for embedded implementation—is slow and often produces difficult-to-resolve differences between the original model and the embedded implementation. In this talk, Steele presents an alternative development process that his company has adopted with good results. In this process, firmware engineers develop a general framework that ML scientists use to design, develop and deploy their models. This enables quick iterations and fewer confounding bugs.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/10/combating-bias-in-production-computer-vision-systems-a-presentation-from-red-cell-partners/
Alex Thaman, Chief Architect at Red Cell Partners, presents the “Combating Bias in Production Computer Vision Systems” tutorial at the May 2023 Embedded Vision Summit.
Bias is a critical challenge in predictive and generative AI that involves images of humans. People have a variety of body shapes, skin tones and other features that can be challenging to represent completely in training data. Without attention to bias risks, ML systems have the potential to treat people unfairly, and even to make humans more likely to do so.
In this talk, Thaman examines the ways in which bias can arise in visual AI systems. He shares techniques for detecting bias and strategies for minimizing it in production AI systems.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.