This document discusses using light field video to provide true presence for 6-DOF VR playback. It explains that while current 360 video provides 360 degree immersion and stereo vision, light field video adds motion parallax, convergence and accommodation by capturing the full light field volume. This allows for a photorealistic experience with true parallax as the viewer moves their head. The document outlines the light field video pipeline from capture to playback and discusses challenges of data size, compression and real-time view synthesis performance. It concludes that light field volumetric video brings us much closer to achieving a true holodeck-like presence in VR.
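To make the real-time view synthesis challenge concrete, here is a minimal Python sketch (not from the document; the camera layout and positions are hypothetical) of the core blending step: weight the captured views nearest the tracked head position by inverse distance.

```python
import math

def synthesis_weights(head_pos, camera_positions):
    """Inverse-distance weights for blending captured views around a
    tracked head position -- a simplified stand-in for the view
    synthesis step in a light field playback pipeline."""
    dists = [math.dist(head_pos, c) for c in camera_positions]
    # If the head coincides with a capture point, use that view alone.
    for i, d in enumerate(dists):
        if d == 0:
            return [1.0 if j == i else 0.0 for j in range(len(dists))]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

# Hypothetical rig: four cameras on a line, head midway between the first two.
cams = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0)]
weights = synthesis_weights((0.05, 0.0), cams)
```

A real renderer blends per-ray rather than per-view, but the same proximity weighting underlies both.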
The document discusses learning frontend development for Android. It notes that with over 2 billion web users and 4 billion mobile users, combining web and mobile has huge potential. Android is highlighted as a good option, with over 81% of the mobile market share. Key facts about Android are provided, such as it being open source, having over 900 million activated devices, and receiving OTA updates. The document recommends learning Android through online tutorials, blogs, and Android's developer website for documentation and training.
This presentation is part of a series of hints and tips on budget 360-degree video making. Part 1 covers the reasons for choosing the Ricoh Theta S 360 camera and includes some illustrations of how this camera works.
A talk from the Consumer Track at AWE USA 2017, the largest conference for AR+VR, held in Santa Clara, California, May 31 to June 2, 2017.
Joe Hill (Lucid VR): 3D VR Storytelling: When to Use 180° and 360° Content for VR Production
3D VR Storytelling is becoming more and more compelling, whether for commercial or creative uses. As demand for 3D VR content grows, there's a lot of interest in how best to create and scale it. At Lucid VR, we've found that leveraging both 180° and 360° content can be strategic in crafting the story you're trying to tell. In this session, we'll look at case studies where 180-degree and 360-degree content is best used.
http://AugmentedWorldExpo.com
This document discusses various types of cinematography equipment used in schools, including tripods, gorilla pods, dollies, sliders, jib arms, and different camera types. It also explains how ISO settings can be adjusted to brighten or darken video and images, with higher ISO numbers producing brighter footage. Key equipment mentioned includes body-mounted jibs for steady shots while the camera is mounted, dollies for steady moving shots, and gorilla pods as versatile handheld tripods.
This document discusses different types of cinematography equipment available at a school including tripods, sliders, jibs, drones, cameras, microphones, and lenses. It explains that ISO settings control how bright or dark video and images appear, with higher ISO numbers producing brighter footage. Gorilla pods are more versatile handheld tripods than normal tripods, jibs provide steady shots while the camera is mounted, and dollies allow for steady moving shots.
A 360 camera can record the entire sphere at the same time.
A 360 camera, or omnidirectional camera, is a camera (or a combination of cameras) with a 360-degree field of view in the horizontal plane, or with a visual field that covers approximately the entire sphere. Omnidirectional cameras are important in areas where large visual field coverage is needed, such as panoramic photography and robotics.
A 360 camera captures light arriving at the focal point from all directions, covering a full sphere.
Each camera in the rig captures a different view of the scene at the same instant.
Each video is recorded at 48 frames per second.
The individual videos are arranged according to a specific template pattern.
Using stitching, the individual videos are combined to form a single 360-degree video.
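As a sketch of what stitching ultimately produces, the following Python function (the resolution values are hypothetical) maps a unit viewing direction to pixel coordinates in an equirectangular panorama, the standard layout for 360-degree video:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit-sphere viewing direction to pixel coordinates in an
    equirectangular panorama (longitude -> column, latitude -> row)."""
    lon = math.atan2(x, -z)                    # -pi..pi, 0 = forward
    lat = math.asin(max(-1.0, min(1.0, y)))    # -pi/2..pi/2
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v

# Straight ahead lands in the center of the panorama...
u, v = direction_to_equirect(0.0, 0.0, -1.0, 4096, 2048)
# ...and straight up maps to the top row.
u2, v2 = direction_to_equirect(0.0, 1.0, 0.0, 4096, 2048)
```

Stitching software evaluates the inverse of this mapping for every output pixel to decide which source camera's pixels to sample and blend.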
This document discusses techniques for capturing panoramic and stereoscopic panoramic images for virtual reality applications. It describes panoramas with monoscopic views that allow for head rotation but no stereo vision, stereoscopic movies that provide stereo vision but no head rotation, and stereo panoramas that enable both stereo vision and head rotation. Various camera rigs and techniques are presented for capturing omni-directional stereo panoramas, including the use of a single moving camera and approaches that approximate stereo from a single central camera position.
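The "stereo from a single central position" idea can be made concrete with omni-directional stereo (ODS): each panorama column is assigned a pair of rays tangent to a small viewing circle whose radius is half the interpupillary distance. A Python sketch, with an assumed 64 mm IPD and a simplified 2D (top-down) geometry:

```python
import math

IPD = 0.064  # assumed interpupillary distance, in metres

def ods_rays(theta, ipd=IPD):
    """For a panorama column at azimuth theta, return (origin, direction)
    for the left and right eyes in omni-directional stereo: both rays share
    one direction but originate on opposite sides of a viewing circle of
    radius ipd/2, each tangent to the ray direction."""
    r = ipd / 2
    d = (math.sin(theta), math.cos(theta))               # viewing direction
    right = (r * math.cos(theta), -r * math.sin(theta))  # 90 deg from d
    left = (-right[0], -right[1])
    return (left, d), (right, d)

(left_o, _), (right_o, _) = ods_rays(0.0)
```

Capturing one such tangent ray per column for each eye is what lets a single rotating camera, or a fixed multi-camera rig, approximate stereo in every viewing direction.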
360 video allows viewers to pan and rotate their perspective in all directions within a spherical video. Footage is captured using an omnidirectional camera or multiple cameras and stitched together to create a panoramic view. This immersive experience allows viewers to explore and interact with spaces, becoming part of the story. When combined with VR, 360 video can simulate real-life environments. Examples include a tour of the Large Hadron Collider, a 360 cockpit view, and a virtual tour of a cancer research lab.
Learn more about VR in The Complete VR Game Development Course: https://academy.zenva.com/product/the-complete-virtual-reality-game-development-course/
In this talk we covered the creation of immersive VR experiences in Unity. This included 360 photo and 360 video. We implemented gaze interaction using Unity VR Standard Assets (Reticle, VR Eye Raycaster).
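Unity's VREyeRaycaster does the heavy lifting in that setup; the underlying test reduces to checking the angle between the gaze direction and the direction to a target. A Python sketch of that check (the threshold, positions, and function name are hypothetical):

```python
import math

def gaze_hit(gaze_dir, eye_pos, target_pos, max_angle_deg=2.0):
    """Rough gaze test: does the gaze ray point within max_angle_deg of
    the target? A dot-product stand-in for an eye raycaster."""
    to_target = [t - e for t, e in zip(target_pos, eye_pos)]
    norm = math.sqrt(sum(c * c for c in to_target))
    if norm == 0:
        return True  # target is at the eye itself
    to_target = [c / norm for c in to_target]
    g_norm = math.sqrt(sum(c * c for c in gaze_dir))
    cos_angle = sum(g * t for g, t in zip(gaze_dir, to_target)) / g_norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg

hit = gaze_hit((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
miss = gaze_hit((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (5.0, 0.0, 5.0))
```

A production raycaster also tests occlusion against scene geometry; this sketch covers only the angular part.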
I was invited to teach a workshop on 360 and VR180 video techniques and technology, explaining the equipment and demonstrating how it is used to improve AR/VR applications in health, science, and medicine.
The 3rd Annual VR and Healthcare Symposium took place in Tucson, Arizona, March 7-8, 2019. It is the most extensive global gathering exploring the research, applications, and opportunities in virtual, augmented, and mixed reality in healthcare.
How to Pick a VR Platform | Edward McNeill (Jessica Tams)
The document provides guidance on choosing a VR platform to develop for. It analyzes the landscape of major VR platforms including Gear VR, Daydream, Oculus Rift, PlayStation VR, and HTC Vive. Each platform is evaluated based on performance, input options, positional tracking abilities, and developer experience. Business considerations are also discussed such as sales data for games on different platforms and launch details/expectations for the major headsets. The document aims to help developers understand the tradeoffs of each platform to determine the best fit for their VR game.
This document discusses 360-degree photography and video technologies. It begins with an agenda that covers understanding 360 cameras, live 360 video streaming, best practices for creating 360 tours, and emerging trends. Examples of 360 cameras are provided. Formats for 360 video and the stitching process needed to create panoramic images are explained. Tools for viewing 360 content on smartphones and creating virtual reality experiences are presented. The document concludes by introducing 3D photography technologies like light field and volumetric video.
Virtual reality and journalism via Google Cardboard (Detlef La Grand)
This document outlines the history and development of virtual reality technology from early stereophotography experiments in the 1860s to modern VR headsets and platforms. It then describes how VRapp.co allows users to create, upload, and share virtual reality environments from 360 degree photos, videos, or CAD images for viewing on smartphones within affordable branded VR goggles. The goal is to use VR to (re)experience places like holidays, events, homes, or museums from a first-person perspective or confront fears without actual movement.
This document discusses using StorySpheres and PhotoSpheres in education. StorySpheres allow users to add sound, dialogue and music to 360-degree photos. PhotoSpheres capture panoramic images using smartphone apps or cameras. The document provides examples of how StorySpheres and PhotoSpheres could be used in biology and geology classes, such as simulating fieldwork, documenting difficult to access areas, and comparing riverbeds. Teachers are encouraged to integrate these virtual reality tools to enhance learning.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communications Mining: why it matters and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
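Whatever model generates the markup, its output should be machine-checked before it enters a workflow. As a minimal sketch (not from the presentation; the sample markup is hypothetical), Python's standard library parser can serve as a first gate ahead of XSD or Schematron validation:

```python
import xml.etree.ElementTree as ET

def check_wellformed(xml_text):
    """Return (True, root tag) if the text parses as well-formed XML,
    else (False, parser error message) -- a first sanity gate for
    AI-generated markup before full schema validation."""
    try:
        root = ET.fromstring(xml_text)
        return True, root.tag
    except ET.ParseError as exc:
        return False, str(exc)

ok, tag = check_wellformed("<article><title>AI and XML</title></article>")
bad, reason = check_wellformed("<article><title>unclosed</article>")
```

Well-formedness is necessary but not sufficient; validity against the target schema still needs a dedicated validator.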
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
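FME provides directory watchers natively; purely as a language-neutral sketch of the trigger concept (the file names are hypothetical), here is a polling pass in Python that reports files added since the last scan:

```python
import os
import tempfile

def detect_new_files(directory, seen):
    """One polling pass of a directory-watcher trigger: return files that
    appeared since the last scan and remember them for the next pass."""
    current = set(os.listdir(directory))
    new = sorted(current - seen)
    seen |= current
    return new

# Simulate two polling passes against a temporary directory.
with tempfile.TemporaryDirectory() as d:
    seen = set()
    detect_new_files(d, seen)                         # first pass: empty
    open(os.path.join(d, "orders.csv"), "w").close()  # a file arrives
    new = detect_new_files(d, seen)                   # second pass sees it
```

Each detected file would then fire the downstream actions of the automation, exactly as an FME directory-watcher trigger hands new files to a workspace.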
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training efforts. Previously she worked on LibreOffice migrations and training courses for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and Geeko she cultivates her curiosity about astronomy (the source of her nickname, deneb_alpha).
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
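Nx-specific tooling aside, the pipeline structure described above (pre-processing, inference on the chosen engine, post-processing) composes naturally as three stages. A generic Python sketch with a stub standing in for the converted model (all names, values, and thresholds are hypothetical):

```python
def preprocess(frame, scale=1.0 / 255):
    """Normalize raw 8-bit pixel values before inference."""
    return [p * scale for p in frame]

def stub_model(inputs):
    """Stand-in for a converted model running on the target inference
    engine; here it just averages its inputs."""
    return sum(inputs) / len(inputs)

def postprocess(score, threshold=0.5):
    """Turn the raw model output into an application-level decision."""
    return {"score": score, "detected": score >= threshold}

def edge_pipeline(frame, model=stub_model):
    """Chain the three stages of a minimal edge AI pipeline."""
    return postprocess(model(preprocess(frame)))

result = edge_pipeline([200, 180, 220, 240])
```

Keeping the stages separated like this is what makes it practical to swap the inference engine per target device without touching the pre- and post-processing code.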
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
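Under the hood, vector search ranks stored embeddings by similarity to a query embedding. Atlas does this at scale with approximate indexes, but the core ranking step can be sketched in plain Python (the 3-dimensional embeddings below are toy values; real ones have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def vector_search(query, index, k=2):
    """Rank stored (doc_id, embedding) pairs by similarity to the query
    and return the top-k document ids -- an exact, brute-force version
    of what an approximate vector index does at scale."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = [("refund policy", [0.9, 0.1, 0.0]),
         ("shipping times", [0.1, 0.9, 0.1]),
         ("return an item", [0.8, 0.2, 0.1])]
top = vector_search([1.0, 0.1, 0.0], index, k=2)
```

Note how the two semantically related documents rank ahead of the unrelated one even though none shares exact keywords with the query vector; that proximity-in-embedding-space effect is what powers semantic search.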
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow, manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
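As a sketch of what capturing a deployment bill of materials might involve (artifact names are hypothetical, and this is independent of the speakers' tooling), each deployed artifact can be recorded with a content digest alongside the environment and timestamp:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_dbom(environment, artifacts):
    """Capture a minimal deployment bill of materials: each artifact's
    name, version, and content digest, plus when and where it was
    deployed."""
    return {
        "environment": environment,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": [
            {"name": name,
             "version": version,
             "sha256": hashlib.sha256(content).hexdigest()}
            for name, version, content in artifacts
        ],
    }

# Hypothetical deployment of a single service image.
dbom = build_dbom("production", [("payments-service", "1.4.2", b"image-bytes")])
record = json.dumps(dbom, indent=2)
```

Recording digests at deploy time is what later lets you answer "is this vulnerable artifact running anywhere?" without re-scanning every environment.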
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to part 5 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
40. Light Field Video Mastering
● Viewing volume and data size
● Compression
41. Light Field Video Playback
● Position tracking
● View synthesis
● Performance
42.
43. Conclusions
● True Presence is more than 360 stereo.
● Light Field volumetric video is a huge step forward that combines motion parallax and photo-realism.
● The Holodeck is near us.
Editor's Notes
I will first attempt to define true presence, then describe the Light Field video approach spearheaded by Lytro, along with its technical challenges.
The true promise of Virtual Reality is Presence: taking you to a completely different time and/or a different space. It does that by recreating all the sensory inputs that match that time and space.
The first thing in presence is 360-degree immersive visual perception.
There are numerous camera rigs on the market that capture 360-degree video, but what they capture is simply a 2D panorama, which means there is no sense of depth.
The second requirement for Presence is stereoscopic vision: giving each eye a slightly different view, thus generating a sense of depth.
The first group of camera rigs captures stereoscopic visual data only for certain directions, not for all 360 directions.
The second group of camera rigs captures video in all directions and uses panoramic stitching techniques to generate stereoscopic vision across all 360 degrees. However, as soon as you look up or down, you gradually lose the stereo vision. That's why I call this horizontal 360 stereo.
Omni-directional stereo, by contrast, gives you stereoscopic vision in all 360×180 directions. For example, this is an anaglyph of a rendered omni-directional stereo image. No camera on the market today can capture omni-directional stereo video yet.
Because we humans like to move, motion parallax is another important depth cue: objects at different distances move at different rates as you move your head.
The way Lytro solves this problem is to capture a Light Field Volume.
This diagram shows omni-directional stereo, with blue light rays entering your left eye and red light rays entering your right eye. As both eyes rotate around the center of perspective, the light rays generate stereoscopic vision for all 360-degree directions. However, as soon as you move away from the center of perspective, you have no captured video information to render for the new perspective.
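The ray geometry in this diagram can be sketched in a few lines. This is an illustrative top-down 2D model, not Lytro's implementation; the 64 mm interpupillary distance is an assumed typical value.

```python
import math

def ods_ray(azimuth_rad, ipd=0.064, eye="left"):
    """Origin and direction of the ODS ray for one panorama column.

    Illustrative 2D (top-down) model: each eye's rays originate on a
    circle of radius IPD/2 around the center of perspective and point
    along the tangent of that circle. The 0.064 m IPD is an assumption.
    """
    r = ipd / 2.0
    # The eye sits a quarter turn from the viewing direction,
    # on opposite sides for the left and right eye.
    sign = -1.0 if eye == "left" else 1.0
    phi = azimuth_rad + sign * math.pi / 2.0
    origin = (r * math.cos(phi), r * math.sin(phi))
    direction = (math.cos(azimuth_rad), math.sin(azimuth_rad))
    return origin, direction
```

For the forward direction (azimuth 0) this places the left-eye origin 32 mm to one side of the center with the ray pointing straight ahead; stepping the azimuth through a full turn sweeps out the viewing circle in the diagram.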
A Light Field volume is a volumetric video that contains all the light rays needed to recreate the perspective at any point inside the spherical volume. This is done by capturing a sampling of all the light rays entering the spherical surface of the volume.
In comparison, you can think of the Light Field volume as enabling Omni-Directional Stereo for any point inside the spherical volume.
This is what we refer to as 6-degrees of freedom inside the Light Field Volume.
This is an example of the Light Field volumetric video produced by Lytro, called Moon. The white sphere you see here is the volume, about 0.6 meters in diameter. As long as you are looking from inside the sphere, you can experience complete motion parallax and view dependent lighting.
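Six degrees of freedom only hold while the viewer stays inside the captured sphere, so a playback client needs a cheap containment test on the tracked head position. A minimal sketch, assuming the roughly 0.6 m diameter quoted for Moon:

```python
def inside_viewing_volume(pos, radius=0.3):
    """True if a tracked head position (x, y, z), in meters relative to
    the sphere center, lies inside the Light Field viewing volume.

    The 0.3 m default radius matches the ~0.6 m diameter quoted for
    the Moon piece; other content would use its own mastered radius.
    """
    x, y, z = pos
    return x * x + y * y + z * z <= radius * radius
```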
Then there is Convergence and Accommodation.
It means that inside a VR headset, both of our eyes always focus on the imaging plane rather than on the object at the correct distance. This causes a sensory conflict, and thus fatigue after a short period of time. I won't go into much detail on this.
Then finally, what you see needs to be photo-real. To me, this means view-dependent lighting, reflections, and shadows. It not only needs to be real, it needs to be photo-real.
Like this rendered video of a 964 Carrera created by director Djordje Ilic. It looks super real because it has all the imperfections in just the right places.
On these two axes of immersion and photo-realism: game engines do real-time graphics rendering based on head-position tracking, which gives you motion parallax, but with only milliseconds of rendering time they cannot deliver high-quality photo-realism. On the other hand, 360-degree video captures a high-quality panorama but fails to provide motion parallax. The Lytro Light Field volumetric video approach provides both motion parallax and a high-quality, photo-real imaging experience.
I will now run through our Light Field volumetric video pipeline and some of the technical challenges.
This is the Stanford plenoptic camera array that was used in their Light Field research in 2004. It is not what we use today, but the Lytro technology originates from the Light Field research done at Stanford during that time. The challenges, though, are still similar: synchronizing cameras, capturing video at high frame rates, and handling huge amounts of data.
Light Field processing is the core of the pipeline, where the captured camera data is calibrated, color-matched, and processed to reconstruct the 3D scene for the viewing volume. This shows an example of a processed depth map from one of our captured scenes.
When your camera rig does not capture a 360-degree FOV all at once, you need to merge Light Field volumes. It's a relatively painless process.
Let’s say your camera rig captures a limited field-of-view and generates a spherical volume for that field-of-view.
Using the same camera rig, you can then capture another area of the scene, with only a slight overlap between this and the previously captured field of view.
You can then merge the two captures into the same scene, resulting in the same viewing volume. A couple of things to be careful with here: first, you need to ensure identical lighting between the two captures; second, you need to calibrate the two camera positions using a calibration workflow. Once you do these, you can even capture action in different directions at different times and have it all show up in the same 360-degree frame.
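The calibration step can be pictured as applying a rigid transform that brings the second capture into the first capture's coordinate frame. A minimal sketch, reducing the transform to a known yaw rotation plus a translation (a real calibration workflow would recover a full 6-DOF pose):

```python
import math

def merge_capture(points_b, yaw_rad, translation=(0.0, 0.0, 0.0)):
    """Map 3D points from the second rig pose into the first rig's frame.

    Illustrative sketch of the merge: rotate about the vertical (y) axis
    by the calibrated yaw, then translate. Real calibration recovers a
    full rotation, not just yaw; the parameters here are assumptions.
    """
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    tx, ty, tz = translation
    merged = []
    for x, y, z in points_b:
        # Standard rotation about the y axis, then translation.
        merged.append((c * x + s * z + tx, y + ty, -s * x + c * z + tz))
    return merged
```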
Post-production for volumetric video is very challenging, in terms of adding visual effects to the 3D geometry and ensuring consistent view-dependent lighting and color across the viewing volume.
For example, the Moon video was captured on the historic Mack Sennett soundstage in LA. As you can see, there is only a small patch of the Moon surface and the ladder of the Moon lander in the scene.
The floor is then blended with a computer-generated Moon surface.
A computer-generated Moon lander was then inserted.
Then the Earth was added to the background.
Then the ceiling was replaced. These all look easy and simple, right? But the challenge is to do this for the entire viewing volume, which requires developing a whole new suite of post-production tools.
It is no surprise that we are dealing with a large volume of data: the volumetric video data size grows with the cube of the viewing-volume diameter. Hence we need tools for propagating visual-effect changes spatially as well as temporally.
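The cubic relationship is easy to make concrete. A sketch of the scaling (ratios only, since the talk does not give absolute sizes):

```python
def relative_data_size(diameter_m, reference_diameter_m=0.6):
    """Data-size ratio under cubic scaling with viewing-volume diameter.

    The 0.6 m reference matches the Moon volume; absolute byte counts
    are not stated in the talk, so this returns a ratio only.
    """
    return (diameter_m / reference_diameter_m) ** 3
```

Doubling the diameter to 1.2 m multiplies the data by eight, which is why mastering a smaller playback volume pays off so quickly.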
One of the things we realized in the process was that the captured volume may not be the same as the Playback volume.
For example, this shows a heat map of the locations inside the viewing volume that are frequently visited for this particular piece of content. Red dots mean highly visited positions, and green dots mean less frequently visited ones. We can take this information and optimize the size and shape of the volume for distribution.
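One way to build such a heat map is to quantize tracked head positions into voxels and count visits. A minimal sketch (the 5 cm cell size is an arbitrary assumption, not a Lytro parameter):

```python
from collections import Counter

def visit_heatmap(positions, cell=0.05):
    """Tally tracked head positions (x, y, z, in meters) into cell-sized
    voxels; the most-visited voxels are the 'red' regions of the map."""
    counts = Counter()
    for x, y, z in positions:
        counts[(round(x / cell), round(y / cell), round(z / cell))] += 1
    return counts
```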
This process is called mastering. Here, the grey sphere represents the captured Light Field volume, and the golden area inside the sphere is the actual volume you package and ship for playback. For instance, you may choose to release a smaller volume to reduce the video size; or, as in the second case, you may choose to release a volume with maximal horizontal motion parallax.
During playback, the Lytro application tracks the precise position of each eye and uses the volumetric video data to render a 360-degree view of the scene from each eye's perspective. This is done in real time, and hence performance is super critical.
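To see why performance matters so much, consider the render budget. A back-of-the-envelope sketch, assuming a 90 Hz headset refresh (a typical rate for PC VR headsets of the period, not a figure from the talk) and sequential rendering of the two eye views:

```python
def frame_budget_ms(refresh_hz=90.0, eyes=2):
    """Per-eye render budget in milliseconds.

    Assumes both eye views are synthesized sequentially within one
    refresh interval; renderers that share work across eyes get more
    slack. The 90 Hz default is an assumed typical headset rate.
    """
    return 1000.0 / refresh_hz / eyes
```

At 90 Hz the whole frame gets about 11.1 ms, so each eye's view synthesis must finish in roughly 5.6 ms.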
In summary, the true presence promised by Virtual Reality is a lot more than just 360-degree panoramic video. Lytro’s Light Field volumetric video approach is a huge step forward that combines motion parallax and high-quality photo-real imaging. There are still a number of challenges ahead of us, but the vision of the Holodeck is already very near. Thank you!