Virtual reality will provide the ultimate level of immersion, creating a sense of physical presence in real or imagined worlds.
This Qualcomm presentation describes:
• The unprecedented experiences and unlimited possibilities that VR enables.
• Why VR is happening now and how mobile technologies are accelerating VR adoption.
• The extreme requirements for immersive VR in terms of visual quality, sound quality, and intuitive interactions.
• Why Qualcomm Technologies is uniquely positioned to meet these extreme requirements and deliver superior mobile VR experiences.
Learn more at: https://www.qualcomm.com/VR
Download the presentation at: https://www.qualcomm.com/documents/making-immersive-virtual-reality-possible-mobile
Sign up for our mobile computing newsletter at: https://www.qualcomm.com/invention/technologies/mobile-computing/signup
Extended reality (XR)—which includes AR, VR, and everything in between—is already providing revolutionary experiences today. Taking XR to the next level of immersion within the power and thermal constraints of a sleek mobile device is a critical challenge. What if we combine the power-efficient, latency-sensitive on-device rendering and tracking of the XR headset with the partial rendering capabilities of the edge cloud over a low-latency 5G link? We get boundless mobile XR experiences with photorealistic visuals. Making this vision a reality will require the entire XR and 5G ecosystem coming together.
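The split-rendering trade-off described above ultimately comes down to a latency budget: on-device work, the 5G round trip, and edge rendering must together fit within the motion-to-photon target. The sketch below illustrates that arithmetic; all numbers are assumptions chosen for the example, not measured figures.

```python
# Illustrative motion-to-photon budget check for split rendering over 5G.
# All timing values are assumptions for this sketch, not measurements.

def fits_budget(on_device_ms, air_rtt_ms, edge_render_ms, budget_ms=20.0):
    """Return (fits, total) for an end-to-end split-rendering pipeline."""
    total = on_device_ms + air_rtt_ms + edge_render_ms
    return total <= budget_ms, total

ok, total = fits_budget(on_device_ms=5.0, air_rtt_ms=4.0, edge_render_ms=8.0)
print(ok, total)  # with these assumed numbers: True 17.0
```

Shrinking any one term (for example, the air-interface round trip via 5G) frees budget for richer edge-side rendering.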
The path to a personalized, on-device virtual assistant - Qualcomm Research
Machine learning has ignited the voice UI and virtual assistant revolution as machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And fusing multiple on-device sensor inputs, such as cameras and accelerometers, in addition to microphones adds a level of personalization that will take us closer to a true personal assistant.
Augmented Reality (AR) will be the next mobile computing platform, seamlessly merging the real world with virtual objects to support realistic, intelligent, and personalized experiences. Making this vision possible requires the next level of immersion, artificial intelligence, and connectivity within the thermal and power envelope of wearable glasses.
Learn more at: https://www.qualcomm.com/invention/cognitive-technologies/immersive-experiences/augmented-reality
Extended reality (XR) is already providing revolutionary experiences today, and it’s important for developers to know all the latest advances. Taking your XR development to the next level of immersion within the power and thermal constraints of a mobile device is a critical challenge. A new era in distributed computing powered by 5G, on-device processing, and edge cloud processing could offer a solution. What if we combine the power-efficient, latency-sensitive on-device rendering and tracking of the XR headset with the partial rendering capabilities of the edge cloud over a 5G link with low latency and high quality-of-service? We get boundless mobile XR experiences with photorealistic visuals. Making this vision a reality for developers will require the entire XR and 5G ecosystem coming together.
Artificial intelligence (AI) is reshaping our lives, and it’s a fantastic time and opportunity for developers working in this area. One example is the voice UI and virtual assistant revolution, where machine speech recognition approaches the accuracy of humans. The AI powering key voice UI components, such as automatic speech recognition and natural language processing, has traditionally run in the cloud due to computing, storage, and power constraints. However, on-device processing of voice UI provides unique benefits, such as instant response, reliability, and privacy. And developers can take advantage of on-device processing by fusing multiple on-device sensor inputs, such as cameras and accelerometers, in addition to microphones, to add a level of personalization to their AI development projects.
XR viewers are a new category of AR or VR devices that allow for lighter and smaller designs since they are connected to smartphones or other computer accessories. For more details, check out this great webinar, which has been adapted from a presentation we gave at AWE 2019.
Turning Augmented/Virtual Reality Hype Into Actual Reality - Amitabh Kumar
Finally getting around to publishing my presentation from the Linley conference on hardware challenges and possible solutions to turn augmented/virtual reality hype into actual reality. Hardware is hard.
If you want to hear my talk: https://www.youtube.com/watch?v=0VNV8BzB408
I welcome your comments and feedback.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/industry-analysis/video-interviews-demos/embedded-vision-augmented-reality-trends-and-opportunities-
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Jon Peddie of Jon Peddie Research delivers the presentation "Embedded Vision in Augmented Reality: Trends and Opportunities" at the February 2017 Embedded Vision Alliance Member Meeting. Peddie presents highlights of his firm’s recent research on opportunities and challenges for embedded vision in augmented reality.
Perception of reality can now be shaped by combining many areas of technology. In particular, the use of Virtual Reality (VR) and Augmented Reality (AR) is gaining traction, and it will have an increasing impact on entertainment, education, and business in the coming years. It is therefore essential to understand the underlying technologies and explore the software development options to harness the unfolding opportunities. The talk attempts to demystify the topic and highlight some of its important aspects.
Augmented reality is a technology that uses computer-vision-based recognition algorithms to augment sound, video, graphics, and other sensor-based inputs on real-world objects using your device's camera.
Project Soli, a new, robust, high-resolution, low-power, miniature gesture sensing technology for human-computer interaction based on millimeter-wave radar
Demonstrates the usage of all available iOS sensors with source code. Example use-cases are a compass, air level, navigation, acceleration and audio recording and playback.
Augmented reality is a type of virtual reality that aims to duplicate the world’s environment in a computer. An augmented reality system generates a composite view for the user that is the combination of the real scene viewed by the user and a virtual scene generated by the computer.
Augmented reality, often confused with virtual reality, is a completely different concept and is implemented extensively in the R&D departments of various leading companies to experiment with design and performance characteristics.
This case study showcases Mistral's capability in providing a ZigBee technology based solution for a Universal Remote Controller. The product is now going into large volume production by the customer.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.[1] It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.
A lecture on Mobile Augmented Reality given by Mark Billinghurst at the University of Canterbury on Friday, September 13th, 2013, as part of the COSC 426 graduate course on Augmented Reality.
Full immersion is achieved by simultaneously focusing on the broader dimensions of visual quality, sound quality, and intuitive interactions. This presentation discusses how:
- Technology improvements continue to drive more immersive experiences, especially for VR and AR
- High Dynamic Range (HDR) will enhance the visual quality on all our screens
- Scene-based audio is a new paradigm for 3D audio
- Natural user interfaces like voice, gestures, and eye tracking are making interactions more intuitive
Building the Matrix: Your First VR App (SVCC 2016) - Liv Erickson
The slides from my talk, Building The Matrix: Your First VR App at Silicon Valley Code Camp, Oct. 2016. Development, design, and sample projects for virtual reality applications.
This presentation covers the key aspects of creating virtual environments and also gives a short tutorial on how to create AR apps that build custom synthetic environments.
A highly versatile piece of technology, the Nikon D4 digital camera has helped to revolutionize twenty-first century picture-taking.
This presentation briefly provides an overview of the digital camera--its advantages and costs to the consumer.
Virtual reality has emerged as a popular and effective marketing tool for brands looking to create an intimate, enjoyable experience for their target audience. Learn a little bit more about this innovative new technology!
Virtual reality has emerged as an effective and popular marketing opportunity for brands eager to form an intimate connection with their audience. Learn a little bit more about the innovative new technology.
Generative AI models, such as ChatGPT and Stable Diffusion, can create new and original content like text, images, video, audio, or other data from simple prompts, as well as handle complex dialogs and reason about problems with or without images. These models are disrupting traditional technologies, from search and content creation to automation and problem solving, and are fundamentally shaping the future user interface to computing devices. Generative AI can apply broadly across industries, providing significant enhancements for utility, productivity, and entertainment. As generative AI adoption grows at record-setting speeds and computing demands increase, on-device and hybrid processing are more important than ever. Just like traditional computing evolved from mainframes to today’s mix of cloud and edge devices, AI processing will be distributed between them for AI to scale and reach its full potential.
In this presentation you’ll learn about:
- Why on-device AI is key
- Full-stack AI optimizations to make on-device AI possible and efficient
- Advanced techniques like quantization, distillation, and speculative decoding
- How generative AI models can be run on device and examples of some running now
- Qualcomm Technologies’ role in scaling on-device generative AI
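Of the optimization techniques listed above, quantization is the most self-contained to illustrate. The sketch below is a generic symmetric INT8 post-training quantization example, not Qualcomm's AIMET implementation; all values are made up for the illustration.

```python
# Minimal symmetric INT8 post-training quantization sketch (pure Python).
# The scale maps the largest weight magnitude onto the INT8 range [-127, 127].
def quantize_int8(xs):
    scale = max(abs(v) for v in xs) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in xs]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

x = [0.1, -0.5, 0.25, 1.0]      # toy "weights"
q, scale = quantize_int8(x)     # 8-bit integers plus one float scale
x_hat = dequantize(q, scale)    # reconstruction used at inference time
max_err = max(abs(a - b) for a, b in zip(x, x_hat))
print(q, max_err)               # error stays within half a quantization step
```

Storing 8-bit integers instead of 32-bit floats cuts memory traffic by 4x, which is where much of the power saving comes from.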
As generative AI adoption grows at record-setting speeds and computing demands increase, hybrid processing is more important than ever. But just like traditional computing evolved from mainframes and thin clients to today’s mix of cloud and edge devices, AI processing must be distributed between the cloud and devices for AI to scale and reach its full potential. In this talk you’ll learn:
• Why on-device AI is key
• Which generative AI models can run on device
• Why the future of AI is hybrid
• Qualcomm Technologies’ role in making hybrid AI a reality
Qualcomm Webinar: Solving Unsolvable Combinatorial Problems with AI - Qualcomm Research
How do you find the best solution when faced with many choices? Combinatorial optimization is a field of mathematics that seeks to find the most optimal solutions for complex problems involving multiple variables. There are numerous business verticals that can benefit from combinatorial optimization, whether transport, supply chain, or the mobile industry.
More recently, we’ve seen gains from AI for combinatorial optimization, leading to scalability of the method, as well as significant reductions in cost. This method replaces the manual tuning of traditional heuristic approaches with an AI agent that provides a fast metric estimation.
In this presentation you will find out:
• Why AI is crucial in combinatorial optimization
• How it can be applied to two use cases: improving chip design and hardware-specific compilers
• The state-of-the-art results achieved by Qualcomm AI Research
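To make the heuristic-versus-exact trade-off concrete, here is a toy knapsack instance: exhaustive search finds the optimum but scales exponentially, while a fast greedy heuristic (the kind of hand-tuned rule an AI agent might replace) can miss it. This is a generic illustration, not the method from the webinar.

```python
from itertools import combinations

# Tiny knapsack: exact exhaustive search vs. a fast greedy heuristic.
items = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
capacity = 50

def exact(items, capacity):
    """Try every subset; optimal but exponential in the number of items."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

def greedy(items, capacity):
    """Take items by value/weight ratio; fast but can be suboptimal."""
    total = 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

print(exact(items, capacity), greedy(items, capacity))  # 220 160
```

On this instance the greedy rule leaves value on the table (160 vs. 220), which is exactly the gap that learned estimators aim to close at scale.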
- There is a rich roadmap of 5G technologies coming in the second half of the 5G decade with the 5G Advanced evolution
- 6G will be the future innovation platform for 2030 and beyond building on the 5G Advanced foundation
- 6G will be more than just a new radio design, expanding the role of AI, sensing and others in the connected intelligent edge
- Qualcomm is leading cutting-edge wireless research across six key technology vectors on the path to 6G
3D perception is crucial for understanding the real world. It offers many benefits and new capabilities over 2D across diverse applications, from XR and autonomous driving to IOT, camera, and mobile. 3D perception with machine learning is creating the new state of the art (SOTA) in areas, such as depth estimation, object detection, and neural scene representation. Making these SOTA neural networks feasible for real-world deployment on mobile devices constrained by power, thermal, and performance has been a challenge. Qualcomm AI Research has developed not only novel AI techniques for 3D perception but also full-stack AI optimizations to enable real-world deployments and energy-efficient solutions. This presentation explores the latest research that is enabling efficient 3D perception while maintaining neural network model accuracy. You’ll learn about:
- The advantages of 3D perception over 2D and the need for 3D perception across applications
- Advancements in 3D perception research by Qualcomm AI Research
- Our future 3D perception research directions
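One of the classic relations underlying the depth estimation mentioned above is the pinhole stereo equation, depth = focal length x baseline / disparity. The sketch below uses illustrative camera parameters, not values from any Qualcomm system.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal pixel shift between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 20 px disparity.
print(depth_from_disparity(700.0, 0.12, 20.0))  # 4.2 (metres)
```

Neural depth estimators effectively learn to predict the disparity term; the geometry that converts it to metric depth stays the same.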
5G is going mainstream across the globe, and this is an exciting time to harness the low latency and high capacity of 5G to enable the metaverse. A distributed-compute architecture across device and cloud can enable rich extended reality (XR) user experiences. Virtual reality (VR) and mixed reality (MR) are ready for deployment in private networks, while augmented reality (AR) for wide-area networks can be enabled in the near term with Wi-Fi-powered AR glasses paired with a 5G-enabled phone. Device APIs enabling application adaptation are critical for a good user experience. 5G standards are evolving to support the deployment of AR glasses at large scale and are setting the stage for the 6G era, with the merging of the physical, digital, and virtual worlds. Techniques like perception-enhanced wireless offer significant potential to improve user experience. Qualcomm Technologies is enabling the XR industry with platforms, developer SDKs, and reference designs.
Check out this webinar to learn:
• How 5G and distributed-compute architectures enable the metaverse
• The latest results from our boundless XR 5G/6G testbed, including device APIs and perception-enhanced wireless
• 5G standards evolution for enhancing XR applications and the road to 6G
• How Qualcomm Technologies is enabling the industry with platforms, SDKs, and reference designs
AI model efficiency is crucial for making AI ubiquitous, leading to smarter devices and enhanced lives. Besides the performance benefit, quantized neural networks also increase power efficiency for two reasons: reduced memory access costs and increased compute efficiency.
The quantization work done by the Qualcomm AI Research team is crucial in implementing machine learning algorithms on low-power edge devices. In network quantization, we focus both on pushing the state of the art (SOTA) in compression and on making quantized inference as easy to access as possible. One example is our SOTA work on oscillations in quantization-aware training, which pushes the boundaries of what is possible with INT4 quantization. Furthermore, for ease of deployment, integer formats such as INT16 and INT8 give accuracy comparable to floating-point formats, i.e., FP16 and FP8, but with significantly better performance per watt. Researchers and developers can use this quantization research to optimize and deploy their models across devices with open-sourced tools like the AI Model Efficiency Toolkit (AIMET).
Presenters: Tijmen Blankevoort and Chirag Patel
Bringing AI research to wireless communication and sensing - Qualcomm Research
AI for wireless is already here, with applications in areas such as mobility management, sensing and localization, smart signaling and interference management. Recently, Qualcomm Technologies has prototyped the AI-enabled air interface and launched the Qualcomm 5G AI Suite. These developments are possible thanks to expertise in both wireless and machine learning from over a decade of foundational research in these complementing fields.
Our approach brings together the modeling flexibility and computational efficiency of machine learning and the out-of-domain generalization and interpretability of wireless domain expertise.
In this webinar, Qualcomm AI Research presents an overview of state-of-the-art research at the intersection of the two fields and offers a glimpse into the future of the wireless industry.
Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
Speakers:
Arash Behboodi, Machine Learning Research Scientist (Senior Staff Engineer/Manager), Qualcomm AI Research
Daniel Dijkman, Machine Learning Research Scientist (Principal Engineer), Qualcomm AI Research
How will sidelink bring a new level of 5G versatility? - Qualcomm Research
Today, the 5G system mainly operates on a network-to-device communication model, exemplified by enhanced mobile broadband use cases where all data transmissions are between the network (i.e., base station) and devices (e.g., smartphone). However, to fully deliver on the original 5G vision of supporting diverse devices, services, and deployment scenarios, we need to expand the 5G topology further to reach new levels of performance and efficiency.
That is why sidelink communication was introduced in 3GPP standards, designed to facilitate direct communication between devices, independent of connectivity via the cellular infrastructure. Beyond automotive communication, it also benefits many other 5G use cases such as IoT, mobile broadband, and public safety.
5G is designed to serve an unprecedented range of capabilities with a single global standard. With enhanced mobile broadband (eMBB), massive IoT (mIoT), and mission-critical IoT, the three pillars of 5G represent extremes in performance and associated complexity. For IoT services, NB-IoT and eMTC devices prioritize low power consumption and the lowest complexity for wide-area deployments (LPWA), while enhanced ultra-reliable, low-latency communication (eURLLC), along with time-sensitive networking (TSN), delivers the most stringent use case requirements. But there exists an opportunity to more efficiently address a broad range of mid-tier applications with capabilities ranging between these extremes.
In 5G NR Release 17, 3GPP introduced a new tier of reduced capability (RedCap) devices, also known as NR-Light. It is a new device platform that bridges the capability and complexity gap between the extremes in 5G today with an optimized design for mid-tier use cases. With the recent standards completion, NR-Light is set to efficiently expand the 5G universe to connect new frontiers.
Download this presentation to learn:
• What NR-Light is and why it can herald the next wave of 5G expansion
• How NR-Light is accelerating the growth of the connected intelligent edge
• Why NR-Light is a suitable 5G migration path for mid-tier LTE devices
Realizing mission-critical industrial automation with 5G - Qualcomm Research
Manufacturers seeking better operational efficiencies, with reduced downtime and higher yield, are at the leading edge of the Industry 4.0 transformation. With mobile system components and reliable wireless connectivity between them, flexible manufacturing systems can be reconfigured quickly for new tasks, to troubleshoot issues, or in response to shifts in supply and demand.
There is a long history of R&D collaboration between Bosch Rexroth and Qualcomm Technologies for the effective application of these 5G capabilities to industrial automation use cases. At the Robert Bosch Elektronik GmbH factory in Salzgitter, Germany, this collaboration has reached new heights.
Download this deck to learn how:
• Qualcomm Technologies and Bosch Rexroth are collaborating to accelerate the Industry 4.0 transformation
• 5G technologies deliver key capabilities for mission-critical industrial automation
• Distributed control solutions can work effectively across 5G TSN networks
• A single 5G technology platform solves connectivity and positioning needs for flexible manufacturing
3GPP Release 17: Completing the first phase of 5G evolution - Qualcomm Research
This presentation summarizes the 5G NR Release 17 projects that were completed in March 2022. Release 17 further enhances the 5G foundation and expands 5G into new devices, use cases, and verticals.
AI firsts: Leading from research to proof-of-concept - Qualcomm Research
AI has made tremendous progress over the past decade, with many advancements coming from fundamental research from many decades ago. Accelerating the pipeline from research to commercialization has been daunting since scaling technologies in the real world faces many challenges beyond the theoretical work done in the lab. Qualcomm AI Research has taken on the task of not only generating novel AI research but also being first to demonstrate proof-of-concepts on commercial devices, enabling technology to scale in the real world. This presentation covers:
- The challenges of deploying cutting-edge research on real-world mobile devices
- How Qualcomm AI Research is solving system and feasibility challenges with full-stack optimizations to quickly move from research to commercialization
- Examples where Qualcomm AI Research has had industrial or academic firsts
Setting off the 5G Advanced evolution with 3GPP Release 18 - Qualcomm Research
In December 2021, 3GPP reached consensus on the scope of 5G NR Release 18. This is a significant milestone marking the beginning of 5G Advanced, the second wave of wireless innovations that will fulfill the 5G vision. Release 18 builds on the solid foundation set by Releases 15, 16, and 17, and it sets the longer-term evolution direction of 5G and beyond. This release will encompass a wide range of new and enhancement projects, ranging from improved MIMO and the application of an AI/ML-enabled air interface to extended reality optimizations and broader IoT support.
Cellular networks have facilitated positioning in addition to voice or data communications from the beginning, since 2G, and we’ve since grown to rely on positioning technology to make our lives safer, simpler, more productive, and even fun. Cellular positioning complements other technologies to operate indoors and outdoors, including dense urban environments where tall buildings interfere with satellite positioning. It works whether we’re standing still, walking, or in a moving vehicle. With 5G, cellular positioning breaks new ground to bring robust precise positioning indoors and outdoors, to meet even the most demanding Industry 4.0 needs.
As we look to the future, the Connected Intelligent Edge will bring a new dimension of positional insight to a broad range of devices, improving wireless use cases still under development. We’re already charting the course to 5G Advanced and beyond by working on the evolution of cellular positioning technology to include RF sensing for situational awareness.
Download the deck to learn more.
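The core of the cellular positioning described above is multilateration: recovering a position from distances to known anchor points (e.g., base stations). Here is a minimal noise-free 2D sketch; real systems use many more measurements plus robust estimation, and the anchor coordinates here are invented for the example.

```python
import math

def trilaterate_2d(anchors, ranges):
    """2D position from three anchors and measured distances (noise-free sketch).

    Subtracting the first range equation from the others linearizes the
    system, leaving a 2x2 linear solve for (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # invented base-station sites
true_pos = (3.0, 4.0)
ranges = [math.dist(a, true_pos) for a in anchors]  # ideal range measurements
pos = trilaterate_2d(anchors, ranges)
print(pos)  # recovers (3.0, 4.0) up to floating-point error
```

In practice, range errors from multipath and clock offsets are what make the problem hard, which is where the 5G positioning enhancements come in.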
The need for intelligent, personalized experiences powered by AI is ever-growing. Our devices are producing more and more data that could help improve our AI experiences. How do we learn and efficiently process all this data from edge devices while maintaining privacy? On-device learning rather than cloud training can address these challenges. In this presentation, we’ll discuss:
- Why on-device learning is crucial for providing intelligent, personalized experiences without sacrificing privacy
- Our latest research in on-device learning, including few-shot learning, continuous learning, and federated learning
- How we are solving system and feasibility challenges to move from research to commercialization
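Of the directions listed above, federated learning is easy to sketch end to end: clients train on private data and share only model parameters, which a server averages. This is a generic FedAvg-style toy with a one-parameter model, not Qualcomm's implementation.

```python
# Minimal federated averaging (FedAvg) sketch: clients train locally and
# share only parameters (never raw data); the server averages them.
def local_update(w, data, lr=0.1):
    # Toy "training": one gradient step of squared error for the model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    return sum(client_weights) / len(client_weights)

global_w = 0.0
clients = [[(1.0, 2.0)], [(2.0, 4.0)]]  # each client's private (x, y) samples; true w = 2
for _ in range(50):
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates)
print(global_w)  # converges toward 2.0
```

The raw samples never leave the clients, which is the privacy property the presentation highlights.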
This presentation outlines the synergistic nature of 5G and AI, two disruptive areas of innovation that can change the world. It illustrates the benefits of adopting AI for the advancement of 5G and showcases the latest progress made by Qualcomm Technologies, Inc.
Data compression has increased by leaps and bounds over the years due to technical innovation, enabling the proliferation of streamed digital multimedia and voice over IP. For example, a regular cadence of technical advancement in video codecs has led to massive reduction in file size – in fact, up to a 1000x reduction in file size when comparing a raw video file to a VVC encoded file. However, with the rise of machine learning techniques and diverse data types to compress, AI may be a compelling tool for next-generation compression, offering a variety of benefits over traditional techniques. In this presentation we discuss:
- Why the demand for improved data compression is growing
- Why AI is a compelling tool for compression in general
- Qualcomm AI Research’s latest AI voice and video codec research
- Our future AI codec research work and challenges
3. Virtual reality will provide the ultimate level of immersion
Offering unprecedented experiences and unlimited possibilities
4. Immersive experiences
• Draw you in…
• Take you to another place…
• Keep you present in the moment…
Experiences worth having, remembering, and reliving
6. VR will provide the ultimate level of immersion
Creating physical presence in real or imagined worlds
• Visuals: so vibrant that they are eventually indistinguishable from the real world
• Sounds: so accurate that they are true to life
• Interactions: so intuitive that they become second nature
7. Experiences in VR
VR will be the new paradigm for how we interact with the world, offering unprecedented experiences and unlimited possibilities
• Play: immersive movies and shows; live concerts, sports, and other events; interactive gaming and entertainment
• Learn: immersive education; training and demos; 3D design and art
• Communicate: social interactions; shared personal moments; empathetic storytelling
11. Virtual reality is not augmented reality
Similar underlying technologies but distinct experiences
• Virtual reality: simulates physical presence in real or imagined worlds, and enables the user to interact in that world
• Augmented reality: superimposes content over the real world such that the content appears to a viewer to be part of the real-world scene
13. The time is right for VR
Technologies and ecosystem are now aligning
• Technology advancements: multimedia & AI technologies; display and sensor technologies; power and thermal efficiency
• Ecosystem drivers: device availability; software infrastructure; content creation and deployment
14. VR headsets are becoming available
Mobile VR headsets will drive mass adoption and provide the freedom to enjoy VR anywhere
A continuum of VR experiences:
• Mobile VR headsets (smartphone powered): the smartphone plugs into or connects to the headset; a mobile SoC powers the VR experience
• Mobile VR headsets (standalone): a dedicated headset optimized for VR; a mobile SoC powers the VR experience
• Tethered VR headsets (PC or game console controlled): the headset connects by wire to a PC or game console; a desktop-class CPU and GPU power the VR experience
15. The software infrastructure and tools are ready
A solid foundation exists and momentum is building
Software stack optimized for VR:
• Operating system: OS optimizations to better manage device resources (hardware, software, and peripherals)
• Middleware: optimized middleware such as gaming engines (Unity and Unreal Engine), audio engines and libraries, and 360° video players
• Tools and SDKs: tools and SDKs to generate, debug, and optimize content, such as the Google Cardboard SDK, Oculus Mobile SDK, Qualcomm® Snapdragon™ VR SDK, and 360° video processing tools
• Drivers: optimized low-level drivers for VR requirements, including system-level latency reduction, peripheral tuning, and API acceleration
Qualcomm Snapdragon is a product of Qualcomm Technologies, Inc.
16. Content is being generated and deployed
Content developers are experimenting with VR and see its potential as a new medium
Content generation:
• Games and apps: finding the killer apps through experimentation; a variety of compelling experiences already exist, from first-person shooters to virtual chat rooms, education, and 3D sculpting
• Video: cinematic VR, such as the life of a refugee or a concert; broadcast TV, such as the presidential debate, sports events, and comedy shows; user-generated content; premium streaming video providers, such as Netflix and Hulu
Content distribution:
• App stores: app aggregation and distribution through stores such as the Google Play Store with Google Cardboard apps, the Oculus Store and Oculus Share, and the HTC Viveport app store
• Video distribution: upload and stream video from places such as YouTube 360 and 360 video on Facebook
17. Exponential technology advancements are making VR possible
• Multimedia technologies: graphics, audio, and video processing
• Display and sensor technologies: displays with increased pixel density, power efficiency, and visual quality; smaller, lower power, and lower cost sensors without sacrificing accuracy
• Power and thermal efficiency: architecture innovations, such as heterogeneous computing; optimized algorithms; integration efficiency, including better transistors
18. The mobile industry is accelerating VR adoption
• Scale: innovation at scale and cost advantage
• Rapid design cycles: fast adoption of cutting-edge technologies
• Mass adoption: broad appeal for mainstream consumers
20. Immersive virtual reality has extreme requirements
Achieving full immersion at low power to enable a comfortable, sleek form factor
Visual quality:
• Extreme pixel quantity and quality: the screen is very close to the eyes
• Spherical view: look anywhere with a full 360° spherical view
• Stereoscopic display: humans see in 3D
Sound quality:
• High resolution audio: up to human hearing capabilities
• 3D audio: realistic 3D, positional, surround audio that is accurate to the real world
Intuitive interactions:
• Precise motion tracking: accurate on-device motion tracking
• Minimal latency: minimized system latency to remove perceptible lag
• Natural user interfaces: seamlessly interact with VR using natural movements, free from wires
22. Extreme pixel quantity and quality are required
The screen is very close to the eyes and a 360° spherical view is necessary
Screen-door effect:
• As the device is brought closer to your eyes, the screen takes up more of your field of view (FOV)
• Biconvex lenses magnify the screen further and make the virtual world your entire FOV
• As the screen takes up more of your FOV, pixel density must increase; otherwise, you will see individual pixels, known as the screen-door effect
Field of view:
• For immersive VR, our entire FOV needs to be the virtual world
• Each human eye has ~145° horizontal FOV
• The fovea of the eye can see ~60 pixels per degree (PPD) but comprises less than 1% of the retina
• To look anywhere in the virtual world, VR needs to provide a full 360° spherical view
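The acuity and field-of-view figures above translate into a daunting pixel budget. A back-of-envelope sketch: the 60 PPD figure comes from the slide, while the 100° per-eye headset FOV is an illustrative assumption.

```python
# Back-of-envelope display resolution needed to match foveal acuity.
# Assumption (not from the slide): 60 pixels/degree everywhere and a
# full 360° x 180° spherical canvas.
PPD = 60                               # peak foveal acuity, pixels per degree
sphere_px = (360 * PPD, 180 * PPD)     # full spherical canvas
print(sphere_px)                       # (21600, 10800) -- far beyond today's panels

# Per-eye view for a hypothetical 100° x 100° headset FOV at the same acuity:
fov_px = (100 * PPD, 100 * PPD)
print(fov_px)                          # (6000, 6000) pixels per eye
```

Even the cropped per-eye view exceeds current mobile panel resolutions, which is why the next slide's foveated rendering matters.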
23. Foveated rendering significantly reduces pixel processing
The human eye can only see high resolution where the fovea is focused
• Rather than rendering at high resolution throughout an image, render at high resolution only where the eye is fixated
• The GPU renders a small rectangle at a high resolution and the rest of the FOV at a lower resolution
• Foveated rendering helps minimize power while improving performance and visual quality
(Figure: the same scene rendered with high resolution everywhere versus foveated rendering, with high resolution only where the eyes are fixated on the paraglider)
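A minimal CPU sketch of the idea, purely for illustration: real foveated rendering happens on the GPU with multi-resolution render targets, and the scene function, window size, and downscale factor below are all made up.

```python
import numpy as np

def scene(y, x):
    """Stand-in for the renderer: pixel intensity at the given coords."""
    return np.sin(0.05 * x) * np.cos(0.05 * y)

def foveated_render(h, w, gaze, inset=64, step=4):
    """Toy foveated rendering: sample the scene coarsely everywhere,
    densely only in a small window around the gaze point."""
    frame = np.empty((h, w))
    ys, xs = np.arange(0, h, step), np.arange(0, w, step)
    coarse = scene(ys[:, None], xs[None, :])                 # low-res pass
    frame[:] = np.repeat(np.repeat(coarse, step, 0), step, 1)[:h, :w]
    gy, gx = gaze
    y0, x0 = max(0, gy - inset // 2), max(0, gx - inset // 2)
    yy, xx = np.mgrid[y0:y0 + inset, x0:x0 + inset]
    frame[y0:y0 + inset, x0:x0 + inset] = scene(yy, xx)      # high-res inset
    cost = coarse.size + inset * inset   # shaded samples actually computed
    return frame, cost

frame, cost = foveated_render(256, 256, gaze=(128, 128))
print(cost / (256 * 256))  # 0.125: roughly 8x fewer shaded samples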
24. Lens correction for improved visual quality
Fixing lens distortion and chromatic aberration
Lens distortion:
• Problem: a wide-angle biconvex lens creates a pincushion distortion
• Solution: a barrel warp applied to the rendered image compensates for the pincushion distortion
Chromatic aberration:
• Problem: after passing through the lens, colors are focused at different positions in the focal plane, leaving the rendered image out of focus
• Solution: image processing compensates for chromatic aberration; the GPU parameters are determined through lens characterization
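The barrel warp can be sketched with the common polynomial radial-distortion model. The coefficients below are illustrative stand-ins for the per-lens characterization the slide mentions, not real headset parameters.

```python
def barrel_warp(x, y, k1=-0.22, k2=0.24):
    """Pre-warp a normalized render coordinate (origin at the lens
    center) so the lens's pincushion distortion cancels out. Uses the
    common radial model r' = r * (1 + k1*r^2 + k2*r^4); k1 and k2 are
    illustrative and would come from lens characterization in practice."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def chromatic_warp(x, y):
    """Chromatic aberration correction sketch: a slightly different
    radial scale per color channel, since the lens focuses each
    wavelength at a different radius (coefficients are made up)."""
    return {
        "r": barrel_warp(x, y, k1=-0.220, k2=0.240),
        "g": barrel_warp(x, y, k1=-0.222, k2=0.241),
        "b": barrel_warp(x, y, k1=-0.226, k2=0.243),
    }

# At the lens center there is no distortion; off-axis points are scaled.
print(barrel_warp(0.0, 0.0))  # (0.0, 0.0)
```

Applying slightly different coefficients per channel is what separates the color planes enough to cancel the fringing.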
26. Generating and consuming 360° spherical video
VR headsets need to support multiple 360° spherical video formats
Generating video:
• Simultaneously capture video with multiple cameras from different views to generate 360° spherical video; stereoscopic video doubles the number of cameras
• Undistort, stitch together, and map the discrete images to an equirectangular or cube map format
• Encode the video
Playing back video:
• Decode the video
• Based on the format, apply an equirectangular or cube map UV projection
• Determine the pose and show the appropriate view of the 360° spherical video
(Figure: discrete unstitched camera images for the 360° spherical view, the equirectangular and cube map images, and the resulting left-eye VR headset view)
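The equirectangular UV projection mentioned above maps a view direction to longitude/latitude coordinates in the stitched image. A minimal sketch of the standard projection math; the function and variable names are ours, not from any particular SDK.

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view-direction vector to pixel coordinates in an
    equirectangular 360° image: longitude maps to x, latitude to y.
    Convention (an assumption): -z is straight ahead, +y is up."""
    lon = math.atan2(dx, -dz)                  # -pi..pi around the viewer
    lat = math.asin(max(-1.0, min(1.0, dy)))   # -pi/2..pi/2 up/down
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Looking straight ahead lands at the image center:
print(direction_to_equirect(0.0, 0.0, -1.0, 4096, 2048))  # (2048.0, 1024.0)
```

At playback time the renderer evaluates this mapping per fragment (or via a pre-built UV mesh) to sample the decoded frame for the current head pose.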
27. Stereoscopic display to see the world in 3D
Binocular vision helps the brain determine depth
• Each eye rotates and focuses to see an object clearly, resulting in slightly different viewpoints
• Based on the different viewpoints, and by knowing the interpupillary distance, the brain determines depth
• This stereoscopic effect makes the VR experience more immersive
• For VR, we need to generate the appropriate view for each eye
(Figure: the image shift between the left-eye and right-eye views, governed by the interpupillary distance)
28. Accurate and efficient stereoscopy for realistic visuals
Graphics:
• OpenGL ES multiview extension support
• A single draw call generates triangles for both eyes
• Driver and app overhead is reduced
Video:
• For stereoscopic video, support for the multiview extension of the HEVC codec
• Approximately 2x the decode work, since there is a video stream per eye
• For monoscopic video, the same image is shown to both eyes, shifted for binocular disparity
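The monoscopic case above can be sketched in a few lines: the same frame is shifted horizontally in opposite directions for each eye. The shift amount is an illustrative placeholder, and `np.roll` wraps at the edges where a real renderer would crop or pad.

```python
import numpy as np

def mono_to_stereo(frame, shift_px=8):
    """For monoscopic video, slide the same image horizontally in
    opposite directions per eye to create binocular disparity.
    shift_px is illustrative; a real value depends on the display
    geometry and interpupillary distance. np.roll wraps at the edges."""
    left = np.roll(frame, shift_px, axis=1)
    right = np.roll(frame, -shift_px, axis=1)
    return left, right

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)
left, right = mono_to_stereo(frame, shift_px=1)
```

The constant disparity places the whole flat image at one apparent depth, which is exactly why true stereoscopic capture, with a stream per eye, is worth its 2x decode cost.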
30. 3D positional audio for realistic sound
Accurate 3D surround sound based on your head’s position relative to various sound sources
• Sound arrives at each ear at the accurate time and with the correct intensity
• The HRTF (head-related transfer function) takes into account typical human facial and body characteristics, such as the location, shape, and size of the ears, and is a function of frequency and three spatial variables
• Sound adjusts appropriately and dynamically as your head and the sound sources move
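One ingredient an HRTF captures is the interaural time difference: sound reaches the far ear slightly later than the near one. Woodworth's spherical-head approximation gives the flavor; this toy formula ignores frequency dependence, elevation, and individual anatomy, all of which a measured HRTF includes.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, a commonly used average (illustrative constant)

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference (ITD). Azimuth 0 deg = straight ahead, 90 deg = directly
    to one side. A toy model, far simpler than a measured HRTF."""
    a = math.radians(azimuth_deg)
    return HEAD_RADIUS * (a + math.sin(a)) / SPEED_OF_SOUND

print(round(interaural_time_difference(90) * 1e6))  # ~656 microseconds
```

Sub-millisecond arrival differences like this, recomputed as the head moves, are what make a source feel anchored in the world rather than inside the headphones.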
31. Reverberation for realistic sound
Sound reflections spread and interact with the environment appropriately
• Reverberation is a function of sound frequency, material absorption, room volume, and room surface area
• Different rooms reflect and absorb sound differently, such as a hallway or cave versus an open space
• Accurate reverberation makes the experience more immersive
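The dependence on room volume and surface absorption can be made concrete with Sabine's classic RT60 estimate. This sketch uses a single broadband absorption coefficient, where a VR audio engine would model materials per frequency band; the room numbers are made up.

```python
def sabine_rt60(volume_m3, surface_m2, absorption_coeff):
    """Sabine's reverberation-time estimate: RT60 (seconds for sound to
    decay by 60 dB) = 0.161 * V / (S * a). A single average absorption
    coefficient `a` here; real engines model it per material and per
    frequency band."""
    return 0.161 * volume_m3 / (surface_m2 * absorption_coeff)

# Same room geometry, hard reflective walls vs. absorptive surfaces:
hallway = sabine_rt60(volume_m3=150, surface_m2=190, absorption_coeff=0.05)
open_room = sabine_rt60(volume_m3=150, surface_m2=190, absorption_coeff=0.4)
print(round(hallway, 2), round(open_room, 2))  # hard walls ring far longer
```

The roughly 8x spread between the two results is why a virtual hallway must audibly ring longer than a furnished room for the scene to feel believable.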
32. The Qualcomm® Snapdragon™ 845 processor provides realistic sound quality for VR
Processing performance at low power and low latency
• Qualcomm® Hexagon™ DSP: high performance at low power, low latency, CPU offload, and custom algorithms
• Noise filtering: Fluence™ noise filtering and active noise cancellation
• 3D positional audio: support for next-gen codecs, like MPEG-H 3D Audio and Dolby Atmos, plus HRTF support
• High fidelity audio: 24-bit at 192 kHz, real-time convolutional reverb, and 18 ms playback latency
Qualcomm Snapdragon, Qualcomm Hexagon, and Fluence are products of Qualcomm Technologies, Inc.
34. Precise motion tracking of head movements
For accurate and intuitive interactions with the virtual world
• 3 degrees of freedom (3-DOF): “In which direction am I looking?” Detects rotational movement (pitch, yaw, and roll around the X, Y, and Z axes). Main benefit: look around the virtual world from a fixed point
• 6 degrees of freedom (6-DOF): “Where am I, and in which direction am I looking?” Detects rotational and translational movement. Main benefit: move freely in the virtual world and look around corners
35. Achieving precise head motion tracking on the device
Visual inertial odometry (VIO) for a rapid and accurate 6-DOF pose
• Monocular camera data: captured from the tracking camera’s image sensor at ~30 fps
• Accelerometer and gyroscope data: sampled from external sensors at 800/1000 Hz
• Snapdragon VIO subsystem: Hexagon DSP algorithms fuse the camera and inertial sensor data for continuous localization and accurate, high-rate 6-DOF pose (position and orientation) generation and prediction
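Production VIO is far more involved (typically an extended Kalman filter over a full 6-DOF state with camera features), but a one-axis complementary filter shows the core idea of the fusion: combine a fast but drifting sensor with a slow but absolute one. Everything below is a simplified stand-in, not the Snapdragon algorithm.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Minimal 1-axis sensor fusion: integrate the fast-but-drifting
    gyro and continuously correct it with the noisy-but-absolute tilt
    angle derived from the accelerometer. alpha near 1 trusts the gyro
    short-term while the accelerometer anchors the long term."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Stationary device, gyro reporting a constant bias of 0.01 rad/s:
angle = 0.0
for _ in range(1000):                    # 1 s of samples at 1 kHz
    angle = complementary_filter(angle, gyro_rate=0.01,
                                 accel_angle=0.0, dt=0.001)
print(round(angle, 4))  # ~0.0005 rad, vs 0.01 rad from pure gyro integration
```

Pure integration of the biased gyro would drift without bound; the accelerometer correction pins the estimate, which is the same role the camera features play against IMU drift in full VIO.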
36. Minimizing motion-to-photon latency is crucial for immersion
Lag prevents immersion and can cause discomfort
(Figure: a low-latency view shows no lag; a noticeably latent view shows significant lag)
37. An end-to-end approach is required to minimize latency
Many workloads must run efficiently for an immersive VR experience
From “motion” to “photon” (the new pixels’ light emitted from the screen):
• Sensor sampling and fusion
• Head pose generation
• Motion detection
• Visual processing
• View generation
• Render / decode
• Adjustment to latest pose (time warp)
• Quality enhancement and display
The total time (motion-to-photon latency) for these steps must be less than 20 milliseconds
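The budget above comes down to simple arithmetic over the pipeline stages. The per-stage timings below are hypothetical placeholders; only the 20 ms total comes from the slide.

```python
# Hypothetical per-stage timings (ms) for a motion-to-photon pipeline.
# The individual numbers are illustrative; only the 20 ms budget is given.
stages_ms = {
    "sensor sampling and fusion": 1.0,
    "head pose generation": 1.0,
    "render / decode": 11.0,
    "time warp to latest pose": 2.0,
    "quality enhancement and display scan-out": 4.0,
}
total = sum(stages_ms.values())
assert total < 20, f"{total} ms exceeds the motion-to-photon budget"
print(f"motion-to-photon: {total} ms (budget 20 ms)")
```

Framing the requirement this way makes the design pressure obvious: every stage that can be shortened (e.g. warping to the latest pose just before scan-out) buys headroom for the expensive render step.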
38. VR content requires an enhanced wireless connection
High bandwidth connectivity to share and consume VR content
• Non-VR: fixed view, monoscopic, up to 4K
• VR: 360° spherical, stereoscopic, higher resolution, HDR
A higher bandwidth wireless connection is required
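A rough pixel-rate comparison shows why: even before any codec, a stereoscopic sphere at foveal density dwarfs a fixed 4K view. The frame rate and sphere resolution below are illustrative assumptions (60 PPD over 360° x 180°), not figures from the slide.

```python
# Raw (pre-codec) pixel-rate comparison of a fixed-view 4K stream vs. a
# stereoscopic 360° sphere at foveal-quality density. All numbers are
# illustrative assumptions.
fps = 60
fixed_4k = 3840 * 2160 * fps        # one flat monoscopic viewport
sphere = 21600 * 10800 * fps * 2    # 60 PPD full sphere, one stream per eye
print(round(sphere / fixed_4k))     # ~56x the raw pixel rate
```

Compression and viewport-dependent streaming claw back much of that gap in practice, but the raw ratio explains the slide's call for higher-bandwidth links.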
39. Great connectivity is the foundation of mobile experiences
The Qualcomm® Snapdragon™ 845 processor provides connectivity at high bandwidth and low latency
• Advanced 4G LTE: up to 1.2 Gbps downlink, up to 150 Mbps uplink, and support for LAA
• Advanced Wi-Fi: 11ac MU-MIMO, 11ad Wi-Fi, and seamless access across bands
• Advanced LTE/Wi-Fi convergence: LTE + Wi-Fi aggregation, antenna sharing, and advanced antenna design
Qualcomm Snapdragon is a product of Qualcomm Technologies, Inc.
40. Taking VR experiences to the next level with 5G
Continued 4G LTE advancements on the path to a more capable 5G platform
• Enjoy VR experiences everywhere: at home, at work, at school, in the car, at the airport, …
• Share real-time, interactive experiences: events, meetings, telepresence, …
• Extreme throughput: multi-gigabits per second
• Ultra-low latency: down to 1 ms
• Uniform experience: with much more capacity
All while supporting new levels of cost and energy efficiency
Learn more about our vision for the future of mobile networks: www.qualcomm.com/5G
41. Power and thermal efficiency for VR tasks is essential
The VR headset needs to be comfortable to wear for extended periods
• Constrained mobile wearable environment: sleek and ultra-light, long battery life, and thermal efficiency
• VR workloads: compute intensive, with diverse characteristics
42. A heterogeneous computing approach is needed for VR
Snapdragon 845 utilizes specialized engines across the SoC for efficient processing
Virtual reality workloads span computer vision, image processing, sensor processing, graphics, video processing, location, and cloud interaction, so the entire SoC is used at high utilization:
• Qualcomm Spectra™ 280 ISP
• Qualcomm® Hexagon™ 682 DSP
• Qualcomm® Adreno™ 630 GPU
• Qualcomm® Kryo™ 385 CPU
• Qualcomm® Snapdragon™ X20 LTE Modem
• Video Processor (VPU)
• Display Processor (DPU)
• LPDDR4X Memory
Qualcomm Spectra, Qualcomm Snapdragon, Qualcomm Adreno, Qualcomm Hexagon, and Qualcomm Kryo are products of Qualcomm Technologies, Inc.
43. The Qualcomm® Snapdragon™ 845 processor is ideal for mobile VR
Designed to meet the VR processing demands within the thermal and power constraints
Visual quality:
• Smooth, 3D stereoscopic, foveated rendering, and support for the latest GPU APIs
• Low power 360° 4K HEVC video decoding and display
• Qualcomm® TruPalette™ display gamut mapping, color enhancement, etc.
• Qualcomm Low-Power Picture Enhancement compression, variable refresh, etc.
Sound quality:
• Positional audio and 3D surround sound
• Fluence™ noise filtering and active noise cancellation
• Low-level DSP access and tools for custom audio development
Intuitive interactions:
• Integrated dual-camera ISP + DSP for low power 3D reconstruction and predictive 6-DOF motion tracking
• Ultra-fast sensing for minimal motion-to-photon latency
Qualcomm® Adreno™ Visual Processing | Qualcomm Spectra™ ISP | Qualcomm® Hexagon™ DSP | Qualcomm Artificial Intelligence Engine | Qualcomm Aqstic™ audio | Qualcomm Snapdragon VR SDK | Snapdragon tools
Qualcomm Snapdragon, Qualcomm TruPalette, Qualcomm Low-Power Picture Enhancement, Fluence, Qualcomm Adreno, Qualcomm Spectra, Qualcomm Hexagon, Qualcomm Artificial Intelligence, Qualcomm Aqstic, and Qualcomm Snapdragon VR SDK are products of Qualcomm Technologies, Inc.
44. Qualcomm® Snapdragon™ VR SDK
Access to advanced VR features to optimize applications and simplify development
APIs optimized for VR:
• Stereoscopic rendering: generate the left and right eye views
• DSP sensor fusion: access to the latest and predicted head pose
• Asynchronous time warp: warp the image based on the latest head pose just prior to scan-out
• Single buffer rendering: render directly to the display buffer for immediate display scan-out
• Lens distortion correction: barrel warp the image based on lens characteristics
• Chromatic aberration correction: correct color distortion based on lens characteristics
• VR layering: generate UI menus and text so that they render correctly in a virtual world
• Power & thermal management: Qualcomm® Symphony System Manager provides CPU, GPU, and DSP power, thermal, and performance management
Benefits: simplified development | optimized VR performance | power and thermal efficiency
Qualcomm Symphony System Manager and Qualcomm Snapdragon are products of Qualcomm Technologies, Inc.
45. Offering superior VR development and optimization tools
Enabling content creation and tuned devices
Content creation tools:
• Specialized solutions for VR development: Qualcomm® Snapdragon™ VR SDK
• Other relevant solutions: Qualcomm® Adreno™ SDK (graphics/compute), Qualcomm® Hexagon™ SDK (DSP), and Qualcomm® Symphony System Manager SDK (heterogeneous compute)
• Optimization and tuning: Snapdragon Profiler
• Optimized third-party middleware engines: Unity and Unreal Engine
Device optimization tools:
• Calibration and tuning: Qualcomm® Display Color Management and Qualcomm® Audio Calibration Tool
• Analysis and debugging: Qualcomm® Commercial Analysis Toolkit and Qualcomm® eXtensible Diagnostic Monitor
Other ecosystem enablement:
• Development and commercial devices
• Customer engineering support
Qualcomm Snapdragon, Qualcomm Adreno, Qualcomm Hexagon, Qualcomm Symphony System Manager, Qualcomm Display Color Management, Qualcomm Audio Calibration Tool, Qualcomm Commercial Analysis Toolkit, and Qualcomm eXtensible Diagnostic Monitor are products of Qualcomm Technologies, Inc.
46.
QTI is uniquely positioned to support superior VR experiences
Custom-designed SoCs and investments in the core VR technologies
47.
Mobile VR evolution
Devices will become sleeker, lighter, and more fashionable
Google Cardboard HMD → Slot-in HMD → Sleek HMD → Imperceptible device?
Continued improvements in…
• Pixel density & quality
• Power efficiency
• Cost efficiency
• Intuitive interactions
• Sound quality
48.
Within device constraints
QTI is uniquely positioned to support superior VR experiences
Providing efficient, comprehensive solutions
Snapdragon is a product of Qualcomm Technologies, Inc.
Immersive VR experiences
Visual quality
• Consistent, accurate color
• High resolution and frame rate
• Stereoscopic and spherical display
Sound quality
• Positional audio
• 3D surround sound
• Noise filtering
Intuitive interactions
• Minimized system latency
• Precise motion tracking
• Intelligent, contextual interactions

Commercialization
• Development time
• Sleek form factor
• Power and thermal efficiency
• Cost

Via Snapdragon™ solutions
• Efficient heterogeneous computing architecture
• Custom-designed processing engines
• Comprehensive solutions across tiers

Via ecosystem enablement
• Snapdragon development platforms
• App developer tools
• Ecosystem collaboration
49.
VR is here today
The mobile industry is accelerating VR adoption
Qualcomm® Snapdragon™ 845 processor is ideal for immersive mobile VR
Qualcomm Technologies will continue to drive VR technologies
Start developing: https://developer.qualcomm.com
Learn more: https://www.qualcomm.com/VR
Contact us: https://developer.qualcomm.com/contact