Transforming the way we deal with information - from consumption to interaction.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
IRJET - 3D Hologram Virtual Personal Assistant using Cloud Services: A Survey (IRJET Journal)
This document discusses adding hologram technology to create a 3D holographic virtual personal assistant using cloud services. Key points:
- The assistant would use hologram projection and OLED displays to make it appear that a real person is talking to the user.
- Amazon Web Services and Amazon Sumerian would be used to model the design and ensure secure data transmission.
- The holographic projections would be projected into mid-air without the need for glasses, allowing for a more realistic 3D experience.
- This type of virtual assistant could help manage tasks through voice commands while incorporating cutting-edge hologram technology for an improved user experience.
This document provides an outline on augmented reality (AR). It begins by defining AR as combining real and virtual objects where the virtual objects are interactive and registered in 3D. Next, it discusses motivations for AR including enhancing perception and assisting with tasks. Examples of medical, manufacturing, annotation, and entertainment applications are then described. Key characteristics of AR systems like blending real and virtual objects, and issues of focus, contrast and registration are also covered. The document concludes by noting that while AR is still developing, it has potential for improved interaction with real and virtual worlds.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/qualcomm/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit-mangen
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Michael Mangen, Product Manager for Camera and Computer Vision at Qualcomm, presents the "High-resolution 3D Reconstruction on a Mobile Processor" tutorial at the May 2016 Embedded Vision Summit.
Computer vision has come a long way. Use cases that were previously not possible in mass-market devices are now more accessible thanks to advances in depth sensors and mobile processors. In this presentation, Mangen provides an overview of how we are able to implement high-resolution 3D reconstruction – a capability typically requiring cloud/server processing – on a mobile processor. This is an exciting example of how new sensor technology and advanced mobile processors are bringing computer vision capabilities to broader markets.
Mitchell Reifel (pmdtechnologies ag): pmd Time-of-Flight – the Swiss Army Kni... (AugmentedWorldExpo)
The document discusses 3D imaging technologies and their applications. It provides an overview of different 3D sensing techniques such as stereo vision, structured light, and time-of-flight. Time-of-flight allows for compact solutions and flexible modes. The document outlines the growth of 3D markets from mobile to augmented reality. It argues that time-of-flight will be important for applications like mobile AR as it enables gesture control through depth sensing.
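The distance math behind time-of-flight sensing is simple enough to sketch. The snippet below (function names are my own for illustration, not pmd's API) shows both the pulsed and the continuous-wave (phase-shift) variants:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance_pulsed(round_trip_s: float) -> float:
    """Pulsed time-of-flight: the light travels out and back,
    so the target distance is half the round-trip path."""
    return C * round_trip_s / 2.0

def tof_distance_phase(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Continuous-wave ToF: distance recovered from the phase shift of a
    modulated signal (unambiguous only up to c / (2 * f_mod))."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

A 20 ns round trip, for instance, corresponds to a target roughly 3 m away.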
This document describes a proposed system for implementing 3D video calling using holographic views. The system would use a prism placed on the receiving user's smartphone screen to generate a 3D holographic view of the calling user. It works by taking the 2D video frames from the call, processing them using color masking and perspective filtering techniques as part of a 2D to 3D conversion system, and then displaying modified frames on each side of the prism screen to create the illusion of a 3D hologram when the light rays refract through the prism. The system aims to provide a more realistic communication experience compared to traditional 2D video calls. Future work may include modifying the system to allow two-
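The frame-placement step of such a prism (Pepper's ghost) display can be sketched as follows. The layout function, its face names, and the rotation convention are illustrative assumptions, not the paper's actual code:

```python
def pyramid_layout(frame_w, frame_h, screen_w, screen_h):
    """Place four copies of the processed video frame around the screen
    centre, one per prism face; rotating each copy makes its reflection
    appear upright when viewed through that face of the prism."""
    cx, cy = screen_w // 2, screen_h // 2
    return {
        "top":    {"pos": (cx - frame_w // 2, 0), "rotate_deg": 180},
        "bottom": {"pos": (cx - frame_w // 2, screen_h - frame_h), "rotate_deg": 0},
        "left":   {"pos": (0, cy - frame_h // 2), "rotate_deg": 90},
        "right":  {"pos": (screen_w - frame_w, cy - frame_h // 2), "rotate_deg": 270},
    }
```

Each entry gives the top-left corner and rotation for one face of a phone screen in portrait orientation.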
These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to analyze how light field technology is becoming economically feasible for an increasing number of applications. Light field cameras record the directions of the incoming light rays rather than a single 2D projection of the scene. This capability enables users to change the focus of pictures after they have been taken and to record 3D data more easily. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of micro-electro-mechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality and can enable faster data collection with telescopes.
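Post-capture refocusing is commonly done by shift-and-add over the camera's sub-aperture views. Below is a deliberately minimal pure-Python sketch (grayscale views stored as nested lists, wrap-around shifts, invented function name) of that idea:

```python
def refocus(sub_apertures, alpha):
    """Synthetic refocusing by shift-and-add: each sub-aperture view,
    keyed by its (u, v) lens offset, is shifted in proportion to alpha
    before averaging; varying alpha moves the synthetic focal plane."""
    first = next(iter(sub_apertures.values()))
    h, w = len(first), len(first[0])
    out = [[0.0] * w for _ in range(h)]
    n = len(sub_apertures)
    for (u, v), img in sub_apertures.items():
        du, dv = int(round(alpha * u)), int(round(alpha * v))
        for y in range(h):
            for x in range(w):
                # wrap-around sampling keeps the sketch index-safe
                sy, sx = (y + dv) % h, (x + du) % w
                out[y][x] += img[sy][sx] / n
    return out
```

Objects whose parallax between views matches the applied shift add up coherently and appear sharp; everything else blurs.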
Iirdem screen less displays – the imminent vanguard (Iaetsd Iaetsd)
This document discusses screenless display technology, which involves displaying information without using a physical screen. It describes three main types of screenless displays: visual images like holograms, retinal displays that project images directly onto the retina, and synaptic interfaces that transmit visual information directly to the brain. The document outlines the working principles and potential applications of screenless displays, such as in virtual reality systems. It also discusses the structure and implementation of retinal displays specifically.
This document discusses various applications of 3D technology across different industries including games, movies/TV, animations, medicine, education, architecture and more. It provides examples of how 3D graphics have evolved over time in games, from early 3D games like 3D Monster Maze to modern games with photorealistic graphics. It also discusses the use of CGI in movies/TV to create visual effects that could not be done practically. The document further explains 3D modeling techniques, the graphics pipeline, and 3D development software tools like 3ds Max, Maya and CINEMA 4D.
Invited talk on AR/SLAM and IoT in ILAS Seminar: Introduction to IoT and Security, Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/)
◆Speaker: Tomoyuki Mukasa
Virtual retinal display (VRD) is a novel display technology that scans light directly onto the retina to create high resolution images without the need for a screen. It works by using light-emitting diodes or lasers to project pixelated images in raster patterns onto the retina. Key advantages of VRDs include their small size, high resolution, wide field of view, and potential applications in areas like medicine, aviation, and augmented reality. Future developments may make VRDs more compact and affordable for widespread use.
IRJET - 3D Virtual Dressing Room Application (IRJET Journal)
This document describes a 3D virtual dressing room application that was developed using the Microsoft Kinect sensor. The application aims to address issues with traditional dressing rooms like wasting time trying on clothes, limited variety, and privacy concerns. The proposed approach uses Kinect to extract the user from the video stream, align 3D cloth models to the user's body, and apply skin color detection to handle occlusions. The body joints are used for positioning, scaling, and rotating the cloth models. The models are then overlaid on the user in real-time. The document discusses related work on virtual dressing rooms and 3D alignment of clothes to user models. It also outlines the methodology, including using Kinect's image and depth streams to develop
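The joint-based positioning and scaling step can be sketched roughly as below. The function, its joint arguments, and the reference width are hypothetical, not the paper's actual implementation:

```python
import math

def cloth_transform(l_shoulder, r_shoulder, spine, model_shoulder_width):
    """Derive scale, in-plane rotation and anchor position for a cloth
    model from three tracked joints given in 2D screen coordinates."""
    dx = r_shoulder[0] - l_shoulder[0]
    dy = r_shoulder[1] - l_shoulder[1]
    user_width = math.hypot(dx, dy)          # shoulder span on screen
    scale = user_width / model_shoulder_width
    angle = math.degrees(math.atan2(dy, dx)) # tilt of the shoulder line
    anchor = ((l_shoulder[0] + r_shoulder[0]) / 2, spine[1])
    return scale, angle, anchor
```

Recomputing this per frame is what keeps the overlaid garment tracking the user in real time.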
Augmented Reality and Virtual Reality: Research Advances in Creative Industry (Zi Siang See)
This research seminar presents and discusses recent advances in augmented reality (AR) and virtual reality (VR). Recent developments suggest that AR and VR innovation is becoming more broadly accessible to different expert areas, including the information technology sector, education, the built environment and the creative industries. During the session, participants will be able to experience AR and VR using mobile and head-mounted devices (HMD). This research talk provides an overview of AR and VR interface development, industrial use cases and research directions.
The document introduces two tools for interacting with digital information in 3D space:
1. Beyond allows users to directly manipulate 3D digital objects using physically retractable tools that project into the digital space beyond the screen. This breaks down barriers between physical and digital worlds.
2. SpaceTop extends the desktop interface by adding a transparent display above the keyboard, allowing users to reach hands into this 3D digital workspace to directly grab and manipulate floating windows and files as if they were physical objects. This seamlessly integrates 2D and 3D interactions.
Generation of planar radiographs from 3D anatomical models using the GPU (thyandrecardoso)
This document discusses a project to generate digitally reconstructed radiographs (DRRs) from 3D anatomical models using GPU processing. DRR generation is currently a bottleneck in medical applications like 2D/3D registration for scoliosis evaluation. The project aims to develop fast DRR algorithms by implementing single-pass ray casting on the GPU using CUDA or OpenGL shading language. This could improve performance enough for clinical use. The current solution uses ray casting and depth peeling on the GPU. Work plans to modify this to the reported faster single-pass approach and compare CUDA and GLSL implementations. Faster DRRs are expected to enable new medical applications.
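A DRR pixel is essentially a line integral of attenuation along one ray through the CT volume. The following is a deliberately simple CPU sketch of what a ray caster computes per pixel (nearest-neighbour sampling, invented names, none of the project's CUDA/GLSL code):

```python
import math

def drr_pixel(volume, origin, direction, step=0.5, n_steps=400):
    """One DRR pixel: march a ray through the voxel volume in fixed
    steps, accumulate the attenuation line integral, then map it to
    intensity via Beer-Lambert (more tissue -> darker pixel)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    acc = 0.0
    for i in range(n_steps):
        x = origin[0] + direction[0] * step * i
        y = origin[1] + direction[1] * step * i
        z = origin[2] + direction[2] * step * i
        ix, iy, iz = int(x), int(y), int(z)
        if 0 <= ix < nx and 0 <= iy < ny and 0 <= iz < nz:
            acc += volume[iz][iy][ix] * step  # nearest-neighbour sample
    return math.exp(-acc)
```

A GPU single-pass version runs this same loop in a fragment or CUDA kernel, one thread per pixel, which is where the reported speedups come from.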
This document discusses the history and technology of digital images. It begins by defining pixels and describing the two main types of digital images: black and white and color. It then provides a timeline of key developments in digital photography from the 1950s to the 2000s. These developments include the first digital camera in 1975 and the introduction of megapixel sensors and digital cameras for consumers in the 1990s. The document also describes the two main image sensors, CMOS and CCD, and provides examples of how digital image technology is used in fields like medicine, security, and the military. It focuses on the use of infrared technology in construction for detecting moisture, describing how infrared cameras work and the advantages and disadvantages of this approach.
This document discusses using 3D printing to create custom optical elements, called "Printed Optics", that can be embedded in interactive devices. It presents four categories of Printed Optics fabrication techniques: 1) Light Pipes that can guide light through devices for display purposes, 2) Internal Illumination techniques using hollow spaces, 3) Sensing user input by tracking mechanical movements of printed elements, and 4) Embedding optoelectronic components. The document outlines examples like using light pipes in a mobile projector accessory or sensing touch inputs on a device's surface. Overall, it explores how 3D printing optical elements opens up new possibilities for interaction design.
Virtual Reality: Stereoscopic Imaging for Educational Institutions (Rodrigo Arnaut)
Virtual reality (VR) in education is a markedly present subject at research institutions in many countries. This paper discusses the application of VR techniques, including the use of computer graphics and three-dimensional (3D) video production. Stereoscopy is a key point for the visualization of these applications. The system developed uses a 3D lens, a home camera, common video editing software, two low-cost projectors, polarizing light filters and cheap 3D eyeglasses. During the 3D video production, the aim was to evaluate the entire process involved, from the elaboration of scripts through video capture and projection to the cost of building the system. This is important to demonstrate to educational institutions the advantages of adopting VR resources to improve learning.
Full article at: http://periodicos.ifsc.edu.br/index.php/rtc/article/view/108
Virtual reality (VR) is a technology that allows users to interact with computer-simulated environments, whether real or imagined. Some key developments in VR history include Morton Heilig creating a multi-sensory simulator in 1962, the first computer-generated movie in 1982, and the rise of VR gaming in the late 1990s. VR has applications in fields such as medicine, engineering, education, and entertainment. While VR offers benefits for interaction and visualization, challenges remain regarding usability, side effects, and a lack of standardization.
Virtual reality (VR) allows users to interact with simulated 3D environments through specialized computer hardware and software. This document describes the process of creating VR simulations of the human skull and heart using medical imaging data. CT and MRI scans were converted into 3D models and rendered into stereo image sequences. These sequences could be viewed through VR headsets or monitors to provide an interactive educational experience, allowing students to fly through virtual tours of the human body. While high resolution VR is possible, limitations remain in hardware and software. Researchers continue working to improve VR technology for medical education applications.
Antti Sunnari (Dispelix Ltd): Full-color, single-waveguide near-eye displays ... (AugmentedWorldExpo)
A talk from the XR Enablement Track at AWE USA 2019 - the World's #1 XR Conference & Expo in Santa Clara, California May 29-31, 2019.
Antti Sunnari (Dispelix Ltd): Full-color, single-waveguide near-eye displays for AR glasses and MR headsets
For years, many have tried to develop a see-through, near-eye display technology that combines beautiful design, excellent image quality and scalable mass production with high yields. At Dispelix, we’ve figured it out. Dispelix is helping product companies to create beautiful AR glasses based on single-waveguide full-color displays with superior image quality and mass manufacturability.
https://awexr.com
Ramesh Raskar discusses his research vision for computational photography and cameras of the future. He envisions cameras that can understand scenes at a higher level than humans by producing meaningful abstractions from vast amounts of visual data. His work includes techniques like looking around corners using transient imaging, long distance barcodes, light field displays, and augmented reality. The goal is to advance both imaging hardware and computational algorithms to create an "ultimate camera" beyond the limitations of traditional photography.
From AR to Augmented {City and Citizen} (Woontack Woo)
The document discusses the potential for augmented reality (AR), virtual reality (VR), and mixed reality (XR) technologies to benefit humanity. It describes how XR technologies could be used to augment both cities and citizens by expanding human abilities and improving people's lives. The author argues that XR, when combined with other technologies like artificial intelligence, IoT, 5G networks, and digital twins, could transform cities into "Augmented Cities" and empower citizens to become "Augmented Citizens" or "Augmented Smartizens." However, care must be taken to ensure technology enhances rather than replaces human experiences and relationships. The overall goal is to leverage XR for good by using it to benefit people's bodies, minds, and social
This document summarizes research on compressive light field displays. It begins by introducing the concept of light field displays and their collaborators. It then provides examples of prototypes from layered 3D displays to tensor displays. Finally, it outlines the evolution of display technologies from conventional parallax barriers to the most recent compressive displays that use techniques like nonnegative matrix factorization and computed tomography to achieve compression in time, pixels, and depth for glasses-free 3D viewing.
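The factorization at the heart of such layered displays can be illustrated with a rank-1 multiplicative update in the style of Lee-Seung NMF. This toy sketch (my own illustration, not the tensor-display solver) factors a small nonnegative matrix into two nonnegative vectors, the analogue of two attenuation layers' pixel patterns:

```python
def nmf_rank1(M, iters=200):
    """Rank-1 nonnegative factorisation M ~ w * h^T via multiplicative
    updates; nonnegativity matters because layer transmittances cannot
    be negative. The tiny epsilon guards against division by zero."""
    m, n = len(M), len(M[0])
    w = [1.0] * m
    h = [1.0] * n
    for _ in range(iters):
        ww = sum(x * x for x in w)
        for j in range(n):  # h_j <- h_j * (w^T M)_j / (w^T w * h_j)
            num = sum(w[i] * M[i][j] for i in range(m))
            h[j] *= num / (ww * h[j] + 1e-12)
        hh = sum(x * x for x in h)
        for i in range(m):  # symmetric update for w
            num = sum(M[i][j] * h[j] for j in range(n))
            w[i] *= num / (hh * w[i] + 1e-12)
    return w, h
```

The real displays solve a much larger version of this problem, with one factor per layer (and per time frame when frames are multiplexed).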
IN140703 service support technologies 6.10.2016 (Pirita Ihamäki)
The first part of the 6.10.2016 Service Support Technologies session goes through virtual reality, virtual prototyping, the components of a virtual prototype, the top 5 virtual reality gadgets of the future, and virtual market potential.
The document discusses the history and components of digital cameras. It begins with early pinhole cameras and progresses to modern digital cameras. Key developments include the introduction of film which was more durable than earlier techniques, the advent of single-lens reflex cameras which allowed through-the-lens viewing, and the transition to digital sensors. The core components of a digital camera like the lens, shutter, and sensor are explained. CMOS sensors are highlighted as the current standard, allowing camera functionality to be integrated onto a single chip.
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability (Ping Hsu)
First, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced to a small volume of under 90 mm × 70 mm and a weight of under 50 g when powered by the UAV's battery. The MEMS mirror based LiDAR system allows for on-demand ranging of points or areas within the FoR (field of regard) without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by utilizing the onboard inertial sensors and the camera are additional goals of the next design. Keywords: MEMS mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
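Turning a single mirror-steered return into a 3D point is straightforward geometry. The sketch below is a simplified model (invented names, and it ignores the doubling of the optical angle that a mirror's mechanical tilt produces):

```python
import math

def scan_point(range_m, theta_x_rad, theta_y_rad):
    """Convert one MEMS-mirror LiDAR return (measured range plus the two
    beam deflection angles) into a 3D point in the sensor frame."""
    x = range_m * math.sin(theta_x_rad)
    y = range_m * math.sin(theta_y_rad)
    # remaining component along the boresight axis (clamped for safety)
    z = range_m * math.sqrt(
        max(0.0, 1.0 - math.sin(theta_x_rad) ** 2 - math.sin(theta_y_rad) ** 2)
    )
    return (x, y, z)
```

Sweeping the two angles over the field of regard and calling this per return yields the point cloud, without moving the UAV.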
Supply chain director performance appraisal (deant0017)
This document contains information related to performance evaluation forms and methods for evaluating the performance of a supply chain director. It includes a sample performance evaluation form with sections for reviewing performance factors, employee strengths and accomplishments, performance areas needing improvement, and signatures. It also provides examples of performance review phrases for evaluating different skills and examples of the top 12 methods for performance appraisal, including management by objectives, critical incident method, behaviorally anchored rating scales, and 360 degree feedback. The document aims to provide human resources professionals with resources and templates for formally evaluating the performance of supply chain directors.
This document provides information and resources for evaluating the performance of a data coordinator. It includes a 4-page sample performance evaluation form with rating scales for evaluating an employee on various performance factors. It also lists phrases that can be used in a performance review for areas like attitude, creativity, decision-making, interpersonal skills, and teamwork. Finally, it outlines the top 12 methods that can be used for performance appraisal, such as management by objectives, critical incident method, behaviorally anchored rating scales, and 360-degree feedback. The goal is to help managers formally assess a data coordinator's work quality and provide constructive feedback for improvement.
This document summarizes research on compressive light field displays. It begins by introducing the concept of light field displays and their collaborators. It then provides examples of prototypes from layered 3D displays to tensor displays. Finally, it outlines the evolution of display technologies from conventional parallax barriers to the most recent compressive displays that use techniques like nonnegative matrix factorization and computed tomography to achieve compression in time, pixels, and depth for glasses-free 3D viewing.
IN140703 service support technologies 6.10.2016Pirita Ihamäki
6.10.2016 Service support technologies first part go through Virtual Reality, Virtual Prototyping, Component of a Virtual Prototype, Top 5 Virtual Reality Gadget of the Future, Virtual Market Potential.
The document discusses the history and components of digital cameras. It begins with early pinhole cameras and progresses to modern digital cameras. Key developments include the introduction of film which was more durable than earlier techniques, the advent of single-lens reflex cameras which allowed through-the-lens viewing, and the transition to digital sensors. The core components of a digital camera like the lens, shutter, and sensor are explained. CMOS sensors are highlighted as the current standard, allowing camera functionality to be integrated onto a single chip.
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability Ping Hsu
Firstly, we demonstrated a wirelessly controlled MEMS scan module with imaging and laser tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced down to a small volume of <90mm><70mm><50g when powered by the UAV‟s battery. The MEMS mirror based LiDAR system allows for ondemand ranging of points or areas within the FoR without altering the UAV‟s position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam by utilizing the onboard inertial sensors and the camera are additional goals of the next design. Keywords: MEMS Mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
Supply chain director performance appraisaldeant0017
This document contains information related to performance evaluation forms and methods for evaluating the performance of a supply chain director. It includes a sample performance evaluation form with sections for reviewing performance factors, employee strengths and accomplishments, performance areas needing improvement, and signatures. It also provides examples of performance review phrases for evaluating different skills and examples of the top 12 methods for performance appraisal, including management by objectives, critical incident method, behaviorally anchored rating scales, and 360 degree feedback. The document aims to provide human resources professionals with resources and templates for formally evaluating the performance of supply chain directors.
This document provides information and resources for evaluating the performance of a data coordinator. It includes a 4-page sample performance evaluation form with rating scales for evaluating an employee on various performance factors. It also lists phrases that can be used in a performance review for areas like attitude, creativity, decision-making, interpersonal skills, and teamwork. Finally, it outlines the top 12 methods that can be used for performance appraisal, such as management by objectives, critical incident method, behaviorally anchored rating scales, and 360-degree feedback. The goal is to help managers formally assess a data coordinator's work quality and provide constructive feedback for improvement.
The Internet of Things will radically transform the ways we interact with our world and control our surroundings.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
The document discusses the results of a study on the impact of COVID-19 lockdowns on air pollution. Researchers found that lockdowns led to significant short-term reductions in nitrogen dioxide and fine particulate matter pollution globally as economic activities slowed. However, the impacts on greenhouse gases and long-term air quality improvements remain uncertain without permanent behavior and economic changes.
This document provides information and resources for evaluating the performance of a conference coordinator. It includes a 4-page sample performance evaluation form with rating scales for evaluating an employee on various factors such as administration, communication, teamwork, decision-making, and customer service. It also gives examples of positive and negative phrases that can be used in a performance review for areas like attitude, creativity, problem-solving, and interpersonal skills. Finally, it lists the top 12 methods for performance appraisal that can be used, such as management by objectives, critical incident, behaviorally anchored rating scales, and 360-degree feedback. The resources aim to help managers formally assess and provide feedback to conference coordinators on their work.
High school athletic director performance appraisaldeant0017
High school athletic director job description,High school athletic director goals & objectives,High school athletic director KPIs & KRAs,High school athletic director self appraisal
This document provides information and resources for evaluating the performance of a musical director, including:
1. A 4-page job performance evaluation form for a musical director with ratings in various performance areas like administration, communication, teamwork, and customer service.
2. Examples of positive and negative performance review phrases for a musical director tailored to criteria like attitude, creativity, decision-making, and interpersonal skills.
3. An overview of the top 12 methods for musical director performance appraisal, such as management by objectives, critical incident, behaviorally anchored rating scales, and 360-degree feedback.
I minds insights - Operational OptimizationiMindsinsights
How algorithm-based optimization is helping businesses make better decisions and become more efficient.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
iMinds insights - Flipped Knowledge Transfer ModeliMindsinsights
Giving digital startups access to vital research capacity.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
iMinds insights over gezondheidsempowermentiMindsinsights
Mensen leven langer dan ooit tevoren; chronische ziekten komen steeds vaker voor. Mensen moeten meer controle krijgen over hun gezondheid en levensstijl – een proces dat we ‘empowerment’ noemen; en dat kunnen we doen door hen de juiste tools en inzichten te verschaffen.
iMinds insights is een kwartaaluitgave van iMinds waarin telkens een opkomende technologie wordt toegelicht op basis van interviews met academici en het bedrijfsleven.
iMinds insights on citizen health empowermentiMindsinsights
As more people are living longer than before and with chronic disease on the rise, disease prevention alone is no longer enough. Citizens need to take more control over their health – by giving them greater access to their personal health information and equipping them with tools and insights to better manage their lifestyles.
iMinds insights is a quarterly publication providing you with relevant tech updates based on interviews with academic and industry experts. iMinds is a digital research center and incubator based in Belgium.
3D digital rendering technologies and 3d rendering service.pptx3dteamau
Photorealistic 3D rendering isn't just for fun and recreation. It's very important. Many industries use photorealistic renderings. These include areas such as film and filmmaking, interior design, graphic design, and product design.
This document discusses digital stereoscopic imaging techniques. It presents a system for capturing multiple orientation stereo views of museum objects using a high resolution digital camera. Key aspects covered include the superior image quality of high-end digital cameras compared to film and lower-end digital cameras. It also examines various parameters that must be considered when capturing stereoscopic views, such as camera settings, image capture geometry, and background selection, in order to obtain high quality stereoscopic images.
MEMS Laser Scanning, the platform for next generation of 3D Depth SensorsJari Honkanen
MicroVision's MEMS Laser Beam Scanning based 3D Depth Sensing Technology presentation by Jari Honkanen at MEMS & Sensors Industry Group Conference Asia 2016, Shanghai, China, September 13-14, 2016
Explore the transformative world of point cloud technology, a cutting-edge 3D data visualization tool that's reshaping industries from architecture to gaming. Dive into the blog to uncover what point clouds are, how they work, and their diverse applications. Discover how these digital data points can empower architects, surveyors, environmental scientists, and even gamers, offering precision, realism, and immersive experiences. Whether you're a professional in a relevant field or just curious about the digital frontier, point cloud technology promises an exciting journey into the future of 3D data visualization.
Explore the transformative world of point cloud technology, a cutting-edge 3D data visualization tool that's reshaping industries from architecture to gaming. Dive into the blog to uncover what point clouds are, how they work, and their diverse applications. Discover how these digital data points can empower architects, surveyors, environmental scientists, and even gamers, offering precision, realism, and immersive experiences. Whether you're a professional in a relevant field or just curious about the digital frontier, point cloud technology promises an exciting journey into the future of 3D data visualization.
AN Introduction to Augmented Reality(AR)Jai Sipani
Augmented reality (AR) involves overlaying computer-generated information on top of the real world. This document discusses AR systems, which combine real and virtual data in real-time using displays, tracking systems, and mobile computing. Example AR applications include Wikitude, Google Glass, and Pokemon Go. The document also outlines some key components of AR systems like head-mounted displays, tracking orientation, and challenges like tracking accuracy and limited mobile computing power. Overall, the document provides an overview of AR technology, examples, components, applications, and current limitations.
Track 6. Technological innovations in biomedical training and practice
Authors: Santiago González Izard, Juan A. Juanes Méndez, Pablo Ruisoto Palomera and Francisco J. García-Peñalvo
This document discusses 3D technology and its applications in movies, TVs, and mobile devices. It describes how 3D works by capturing two perspectives to create the illusion of depth, and various techniques for producing and displaying 3D content, including 3D cameras, polarization systems, anaglyph glasses, and more. The document also covers 3D TV technologies like eclipse displays and lenticular screens, as well as 3D broadcasting standards and applications of 3D beyond entertainment like design and medicine.
this presentation help to understand about the basic of digital photogrammetry,, its also help for understand about the concept of digital photography software available now a days , and uses of various software in the field of RS and GIS.
This document discusses automatic view synthesis from stereoscopic 3D video through image domain warping. It begins with an introduction to stereoscopic 3D cinema and television, and the need for multi-view auto stereoscopic displays that allow glasses-free 3D viewing. It then describes image domain warping, which synthesizes new views from 2-view video using sparse disparity features and warping images to enforce the disparities, rather than using depth maps. The document outlines the image warping process and view synthesis algorithm, which extracts sparse disparity features, calculates warps to enforce the disparities for intermediate views, and warps the images to synthesize the output views.
Digital design uses computer skills and creativity to design visuals for electronic technology. It includes fields like web design, digital imaging, and 3D modeling. Digital design creates graphics and designs for the web, TV, print, and portable devices using computers, graphics tablets, and other electronic tools. It is an evolving industry that explores new technologies. Digital design has many applications including web design, 3D modeling for movies, architectural planning, and product design. 3D modeling involves creating mathematical representations of objects, placing them in virtual scenes, and rendering them into images. Popular 3D modeling programs include 3Ds Max, Maya, SketchUp, Rhino, CATIA, and SolidWorks.
This document discusses 3D technology and its uses. It is used in films, television, cameras, computer graphics, and various industries like engineering. It works by creating separate images for the left and right eyes to create the illusion of depth. The document outlines several methods for creating and displaying 3D content and discusses challenges and applications in different fields. It predicts that future 3D technology may not require glasses and could allow interacting with 3D images.
MEMS Laser Scanning, the platform for next generation of 3D Depth SensorsMicroVision
MicroVision's PicoP® scanning technology is a MEMS-based Laser Beam Scanning (LBS) solution for pico projection, heads-up-display, and augmented reality eyewear applications. The same flexible technology can also be applied to exciting new sensing applications, such as 3D depth sensing. Demand for small and low cost 3D depth sensing solutions is growing rapidly, driven by increasing demand for new Natural User Interface, Machine Vision, Robotic Navigation, Metrology, and Advanced Driver Assistance System (ADAS) solutions.
This presentation, prepared by MicroVision's Jari Honkanen and presented at the MEMS & Sensors Industry Group Conference Asia 2016, compares the existing 3D depth sensor solutions based on stereo cameras, structured light and 3D CMOS Cameras.
MicroVision then presents a new MEMS LBS depth sensor platform solution that can enable a new generation of tiny 3D depth sensors with capabilities such as dynamic variable resolution and variable acquisition speed. These dynamic LBS depth sensors are an enabling technology for a completely new set of innovative products and applications.
The document discusses Google Cardboard, a low-cost virtual reality headset developed by Google. It can turn smartphones into virtual reality displays. The cardboard headset contains lenses and magnets that allow users to view VR content on their phone through compatible apps. When placed in the headset, the phone's magnetometer detects button presses via magnet to control the VR experience. The headset allows users to explore various VR environments and experiences through apps like YouTube and Google Earth at a low price point, helping make VR more accessible.
Photorealistic 3D rendering is not just for fun and recreation. It's very important. Many industries use photorealistic renderings. These include areas such as cinema and film-making, interior design, graphic design, and product design.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Building Production Ready Search Pipelines with Spark and Milvus
iMinds insights - 3D Visualization Technologies
BASED ON INTERVIEWS WITH ACADEMIC AND INDUSTRY EXPERTS - WWW.IMINDS.BE/INSIGHTS

3D VISUALIZATION TECHNOLOGIES

Transforming the way we deal with information - from consumption to interaction

"We see a future where 3D sensors completely replace 2D sensors." - Eric Krzeslo
EXECUTIVE SUMMARY

ARE YOU READY FOR YOUR HOLOTABLE?

3D imaging is already part of our visual environment, not only in the form of 3D movies and TV shows but also as the backbone of countless 2D visualization systems. As it becomes more immersive, 3D will enable whole new types of entertainment and professional applications. Yet true holographic 3D visualization—3D that won't require viewers to wear special glasses and stare at a flat screen—lies on the other side of solving some key research challenges. iMinds is taking an end-to-end approach to enabling next-generation 3D visualization, investigating the required techniques and technologies from image capture to processing and presentation.
ELIMINATING THE FRAME

COMPILING THE PICTURE

One way of generating next-generation 3D images is to capture multiple viewing angles with multiple cameras. Yet it's financially and physically impractical to deploy a camera for every possible view. This is why researchers are seeking ways to fill the gaps between captured viewpoints—for example, mining available image data and statistics to fill blanks between recording angles, interpolating and creating backgrounds, matching repeating images and recreating blocked objects. Researchers are also seeking to better understand what underlies our ability to perceive 3D images in the first place, tackling the problem of stereoblindness that makes today's 3D techniques ineffective for some viewers—and in doing so, helping to open up new possibilities for 3D visualization.
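The gap-filling idea above can be illustrated with a toy sketch: given two captured viewpoints, an intermediate view can be approximated by weighting them according to the desired virtual camera position. This is a hypothetical minimal example, not the techniques iMinds uses; production view synthesis warps pixels along disparity maps rather than cross-fading in place.

```python
import numpy as np

def interpolate_view(left, right, t):
    """Naively synthesize an intermediate view by cross-fading two
    captured viewpoints; t=0 returns the left view, t=1 the right.
    Real systems warp pixels along disparity vectors instead of
    blending in place, but the position-based weighting is the same."""
    return (1.0 - t) * left + t * right

# Two tiny 2x2 grayscale "views" standing in for camera images.
left = np.array([[0.0, 100.0], [200.0, 50.0]])
right = np.array([[40.0, 140.0], [160.0, 90.0]])

mid = interpolate_view(left, right, 0.5)
print(mid)  # each pixel is the average of the two views
```

Sliding `t` from 0 to 1 sweeps the virtual camera between the two real ones; real pipelines add occlusion handling and background reconstruction on top of this weighting.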
RENDERING THE PICTURE

"The holographic future may be much closer than the 8 to 10 years currently expected."

With 3D visualization, what was once a single, flat image is now a massive block of data requiring immense processing power to render. Full holographic 3D today would need exascale computing power and a massive supply of energy. Today, researchers are investigating compression approaches to streamline processing demands and relieve network pressure. That's been the focus of iMinds' work with Graphine, a Flemish company whose innovative texture-streaming technology improves graphics quality and load times while reducing memory usage, disk and download size.
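A back-of-the-envelope calculation shows why compression matters here: the raw bandwidth of a multi-view 3D signal scales linearly with the number of views. The figures below are illustrative assumptions of our own, not iMinds numbers.

```python
def raw_rate_gbits(width, height, views, bits_per_pixel=24, fps=60):
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * views * bits_per_pixel * fps / 1e9

# One flat HD stream versus a hypothetical 100-viewpoint 3D stream.
flat_2d = raw_rate_gbits(1920, 1080, views=1)
multi_view = raw_rate_gbits(1920, 1080, views=100)

print(f"2D HD video:       {flat_2d:.1f} Gbit/s")
print(f"100-view 3D video: {multi_view:.1f} Gbit/s")
```

Even this modest multi-view case lands near 300 Gbit/s uncompressed, and a full hologram samples far more densely still, which is why streaming-oriented compression like Graphine's is on the critical path.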
ADVANCING 3D VISUALIZATION TECHNOLOGY

While holography is the most advanced 3D application being pursued, researchers are also working to improve traditional stereoscopy as an interim step—for example, by using autostereoscopic imaging that doesn't require special glasses. iMinds is collaborating with companies like Belgian TV manufacturer TP Vision to advance this technology and more fully explore the potential of autostereoscopic 3D TV.

For holography, the lynchpin technology will be spatial light modulators. These currently cost tens of thousands of dollars apiece, and researchers are working to make them more economical in order to realize the holographic future.
ENVISIONING THE FUTURE

The impact of successful 3D visualization will be immense. iMinds will be working with industry and research partners to further investigate the potential applications of 3D technology. One example is its research with SoftKinetic, a company looking to use 3D sensors and software to revolutionize the way our devices interact with the environment around us. The ultimate aim: bringing the advantages of holographic 3D technology to domains such as culture, health and the economy.

FOR MORE INFORMATION

about iMinds' expertise in the field of 3D visualization, please contact
Peter Schelkens
peter.schelkens@vub.ac.be
+32 2 629 1681
4. HOW HOLOGRAPHIC 3D WILL REVOLUTIONIZE
OURVISUALEXPERIENCE
Picture it: the World Cup final has
gone into extra time and a penalty
kick is about to decide the match.
The striker lines up the shot, shifting
tensely from foot to foot... and
you’re right beside him. Virtually,
that is—thanks to holographic 3D
visualization technology. You and
millions of other fans are there at
field level, each choosing your own
unique vantage point to watch the
critical moment unfold.
It may sound fanciful, but in fact
Japan made delivering that kind
of 3D experience a key part of its
bid to host the 2022 FIFA World
Cup. Over the coming decade, 3D
visualization will advance by leaps
and bounds, enriching sports,
entertainment and gaming as
well as professional, collaborative
applications such as telesurgery
and computer-aided design. In
each case, next-generation 3D
marks both a technological step
forward and, more profoundly, a
shift away from simple, one-way
visual consumption to immersive,
visual interaction.
3D imaging is already a growing
part of our visual environment—
most conspicuously in the form
of 3D movies and TV shows, but
also as the backbone of countless
2D visualization systems. Yet
true holographic 3D visualization
lies on the other side of solving
some key challenges related
to capturing, processing and
presenting 3D images. Research
organizations around the world are
working actively to address those
challenges, including iMinds, where
teams are taking a uniquely end-
to-end approach to enabling next-
generation 3D visualization.
3D: THEN, NOW AND NEXT
3D imaging is hardly a new
phenomenon. The first 3D patents
were filed in the 1890s and the first
3D film, The Power of Love, screened
in 1922. Early 3D technologies
simulated the functioning of human
eyes by using dual cameras to
capture slightly overlapped images.
With considerable technological
refinement, that stereoscopic
approach continues to be what
most viewers would associate with
3D today: walk into a movie theater
during summer blockbuster season
and you’ll find ushers handing
out ‘3D glasses’ to help the brain
interpret the stereoscopic images.
Setups for recording and reconstructing a hologram: for capturing, a laser
beam is split and expanded so that the object beam scattered off the object
interferes with the reference beam at a CCD, recording the hologram; for
rendering, a laser reading beam illuminates the hologram on an SLM to
produce the reconstructed image.
While stereoscopy simulates depth
in a two-dimensional image, it is not
truly 3D and has limitations related
to image quality and usability. To
start with, today’s audiences are
used to very high-definition 2D
images, making the relatively low
image quality of stereoscopic 3D
unsatisfying. On top of that, many
viewers find 3D glasses or headgear
annoying, and a small percentage
reports headaches and nausea
when watching stereoscopic 3D
content.
Initially emerging from physicist
Dennis Gabor’s efforts to
improve the electron microscope,
holography involves recording
the scattering of light around an
object, then recreating the pattern
as a 3D image. The invention of the
laser in the 1960s made holography
more practical, enabling a wide
range of current applications for
everything from data storage to
anti-counterfeiting. Yet laser-based
holography also has its limitations,
especially when it comes to
representing complex, multi-angle
imagery.
The next wave of holography—the
kind required to realize Japan’s
vision for the 2022 FIFA World
Cup, for example—requires the use
of digital technologies in an end-to-
end system that is ultimately capable
of projecting three-dimensional
moving images in space.
In such a scenario, viewers will be
able to interact with the image in
any number of ways—adjusting the
viewing angle, zooming in or out,
rotating objects—to personalize the
viewing experience. Many of the
foundational technologies for this
kind of next-generation 3D solution
have already been developed and
are in use today, often to augment
traditional 2D imagery.
TOWARD HIGH-DEFINITION
HOLOGRAPHIC 3D
Anyone who has used Google’s
Street View application has had
a taste of one of today’s key pre-
3D technologies: free viewpoint
visualization. Free viewpoint
captures data from multiple angles
and synthesizes it into a single
image that can be rendered in
either two or three dimensions.
This kind of capture usually
involves deploying standard or
slightly augmented 2D cameras
in an array, or a single camera
outfitted with plenoptic optics
that capture multiple viewpoints at once.
With these technologies, individual
images are captured in two
dimensions and provide the basis
for a panoramic or 3D model that’s
created, rendered and transmitted
to the viewer. 3D images can also
be created from imagery captured
by a standard 2D camera in motion.
TERMINOLOGY: PLENOPTIC CAMERA
Plenoptic cameras capture
information about a ‘light
field’ versus a conventional
photographic image. The light
field provides 4D information
and requires an array of
microlenses for recording.
Beyond capturing a view of the
world outside the narrow rectangle
of a two-dimensional frame,
3D developers also need tools
for analyzing the movement of
objects with depth and in motion—
consistently and realistically. Video
game creators have done a lot of
pioneering work in this area already,
creating synthetic 3D content
with support from technologies
like time-of-flight cameras that
interpret 3D environments.
Time-of-flight cameras calculate
depth based on the speed of
light: an object is illuminated with
a laser and the time it takes for
the light to return to its source is
measured. Belgium’s SoftKinetic
used time-of-flight technology in
Sony’s PlayStation 4 console to
track human gestures—for use with
games like the latest edition of the
popular Just Dance franchise. The
potential applications of time-
of-flight technology extend far
beyond gaming, however. From
robotics to topography, a vast
number of fields could make use
of its ultra-sensitive depth data.
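The time-of-flight calculation itself is one line: depth is half the round-trip distance of a light pulse. A minimal sketch of that principle (the function name is ours, not SoftKinetic's or Sony's API):

```python
# Speed of light in meters per second.
C = 299_792_458.0

def tof_depth(round_trip_seconds: float) -> float:
    """Depth in meters: the pulse travels to the object and back, so halve it."""
    return C * round_trip_seconds / 2.0

# A round trip of roughly 6.67 nanoseconds corresponds to about 1 m of depth,
# which is why time-of-flight sensors need picosecond-scale timing precision.
print(tof_depth(6.67e-9))
```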
While multi-camera and time-
of-flight technologies provide a
basis for 3D imaging, a number
of research challenges need to
be addressed before they can be
incorporated into an end-to-end
holographic visualization solution—
‘end-to-end’ because it has to
cover the full chain of production
from capture and rendering to re-
creation and presentation.
TACKLING THE RESEARCH CHALLENGES
IMAGE CAPTURE: COMPILING THE
PICTURE
In the multi-camera setup required
for free viewpoint visualization,
image capture is a phased process.
First, visual data is gathered by
the various cameras. Next, those
images have to be calibrated—
in other words, linked together
seamlessly. After calibration, the
content needs to be assembled
in a step developers refer to as
registration.
Once registration is done, the
final set of visual contents will
likely contain some occlusions:
regions that have been recorded
imperfectly, blocked from view
or simply left out of the capture.
The end-to-end visualization process:
image capture, processing and rendering,
visualization and representation.
With today’s multi-camera
approaches, occlusion is more or
less unavoidable. Today’s most
sophisticated screens can support
roughly 200 different viewpoints,
but it would be expensive and
unwieldy to deploy 200 individual
cameras to cover them all.
So the research challenge becomes
one of finding a way to fill the
visual gaps—both textures and
depth cues. Researchers today are
looking at techniques for mining
the image data and statistics that
are available to fill blanks between
recording angles, and at other
methods for interpolating and
creating backgrounds, matching
repeating images and recreating
blocked objects.
The process of producing accurate
3D videos with a mixture of
recorded and artificial imagery
is called inpainting. Its success
depends on it being seamless—
by avoiding ‘artifacts’, obvious
flaws that remind the viewer that
what he or she is watching is not
wholly real. Currently, inpainting is
done in a semi-automated way: the
technology has not yet matured
enough for computers to perform it
automatically in real time.
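As a toy illustration of the underlying idea, not of the semi-automated production tools described above, the sketch below fills occluded pixels by repeatedly averaging whatever known neighbors they have (all names are ours):

```python
import numpy as np

def fill_occlusions(img, mask, iters=50):
    """Fill masked (occluded) pixels by averaging their known 4-neighbors."""
    out = img.astype(float).copy()
    known = ~mask
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        k = np.pad(known, 1, mode="edge")
        # Sum and count of *known* neighbors for every pixel position.
        nsum = (p[:-2, 1:-1] * k[:-2, 1:-1] + p[2:, 1:-1] * k[2:, 1:-1]
                + p[1:-1, :-2] * k[1:-1, :-2] + p[1:-1, 2:] * k[1:-1, 2:])
        ncnt = (k[:-2, 1:-1].astype(int) + k[2:, 1:-1]
                + k[1:-1, :-2] + k[1:-1, 2:])
        fillable = mask & (ncnt > 0)          # occluded pixels we can now fill
        out[fillable] = nsum[fillable] / ncnt[fillable]
        known |= fillable
        mask = mask & ~fillable
        if not mask.any():
            break
    return out
```

Real inpainting also has to reproduce texture and depth cues, which is exactly why the article calls full automation an open research problem.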
PROCESSING AND RENDERING:
BUILDING THE PICTURE
With 3D visualization, what was once
a single, flat image is now a massive
block of data containing a figure,
its surroundings and background
viewed from almost every possible
angle. Today, each additional view
adds about 10 percent more bits
to the overall file size. By the time
200 views are reached, a file can be
massive—and complex.
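Taking that 10 percent figure at face value, and assuming (our reading, not the article's) that it means 10 percent of the single-view size per extra view, the growth is easy to tabulate:

```python
def multiview_size(base_gb: float, views: int) -> float:
    """Total size when each extra view adds ~10% of the single-view size."""
    return base_gb * (1 + 0.10 * (views - 1))

# A hypothetical 1 GB single view grows to roughly 20.9 GB at 200 views,
# before any multi-view compression is applied.
print(multiview_size(1.0, 200))
```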
Rendering such an image requires
immense processing power,
especially if it happens in real time
as the image is being recorded.
Full holographic 3D currently
would require exascale computing
power, roughly 100 times greater
than is available, and about 30 or
40 megawatts of energy. iMinds is
currently involved with developing
standards for MPEG Free View
Television (FTV), looking at ways
multi-view video can be efficiently
compressed.
Fortunately, not all views have to be
rendered at once. Reconstructing
a subset of all possible views
conserves processing power and
bandwidth. iMinds researchers and
others are investigating various
compression approaches that would
allow a system to extract only the
views needed at a given moment.
To better create 3D models and
optimize transmission, researchers
are also exploring a variety of
alternate presentation systems—
such as point clouds, which plot
data using a three-dimensional
coordinate system and are easier
to compress. Moving in parallel to
this research are efforts to advance
compression standards, including
a volumetric extension of the
JPEG 2000 standard as well as the
development of mesh- and voxel-
based encoding algorithms and
texture-coding approaches.
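Point clouds compress well partly because nearby samples are redundant. A hedged sketch of the simplest such reduction, voxel downsampling, with names of our own choosing:

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Collapse a point cloud to one centroid per occupied voxel cell."""
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()                                  # one group id per point
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, 3))
    for d in range(3):                                 # centroid per voxel, per axis
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

Production codecs go much further (octrees, predictive coding), but the principle is the same: exploit the regular 3D coordinate system the article mentions.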
Even when a subset of views is
transmitted, extremely low latency
is essential. For applications like
telesurgery, patient safety depends
on an end-to-end information delay
of no more than 20 milliseconds.
For entertainment applications, the
tolerance might be as much as 150
milliseconds—though no sports fan
wants to hear neighbors through
the wall cheering about a goal that
hasn’t yet been displayed on his or
her own TV.
iMinds has conducted research
into latency requirements and
thresholds, including the iCocoon
project, which looked at low-latency
video conferencing in collaboration
with Alcatel-Lucent.
Much about processing, rendering
and the like remains an open question
today. Researchers continue to
work to understand what’s really
required to reconstruct an image
in holographic 3D. Based on the
outcome of those investigations,
teams will be able to tune their
compression algorithms and
visualization devices for optimal
performance.
TERMINOLOGY: MESH AND VOXEL
Voxel-based encoding is
closest to reality but, because
of that, requires more
data. Its ideal application
would be, for example, a
hospital environment where
verisimilitude is vital. Mesh-
based encoding is leaner and
less realistic, but its lighter
data demands make it ideal
for gaming.
THE IMINDS TELESURGERY PROJECT
While holographic surgery is still far away, iMinds has been researching
ways to create digital operating rooms that allow real-time consultation
by experts all over the world. The research contributed to Barco’s
Nexxis product, an IP-centric video management tool currently used
in more than 100 operating rooms across Europe.
VISUALIZATION AND REPRESENTATION:
AUTOSTEREOSCOPIC TECHNOLOGY
The last (and in some ways most
significant) research challenge
with respect to 3D has to do
with visualization itself. While
holography is the end goal, it is
not fully achievable in the short
term. But researchers are actively
advancing models that will deliver
a marked improvement upon
traditional stereoscopy.
One of the most promising of
these is autostereoscopic imaging,
producing 3D images that can be
viewed without special glasses or
headgear. Autostereoscopy can
be created in one of two ways: by
using microlens arrays or by using
light-field technologies.
The light-field approach currently
supports a maximum of 200
views, resulting in a relatively low
resolution, especially given that
consumers are used to 4K visuals
on their TV screens. (Current light-
field autostereoscopic displays
would be the equivalent of VGA
quality, by comparison.) With
microlenses, the number of views is
unlimited but a problem of ‘angular
resolution’ can arise that makes it
hard for the eye to discern small
details in the image.
Unlike classical stereoscopic
imagery, with autostereoscopic
visualization viewers can
physically rotate the image or
move themselves around it. At
current resolutions, these shifts in
views are noticeable—but work is
underway to increase the available
views to 500, which should yield a
significant improvement in image
quality.
• 2D LCD display: HD-4K-8K resolution; high frame rate; high dynamic range; stereoscopy with glasses; green power.
• Half/full parallax light field display (N projectors): reduced horizontal/vertical resolution (70 Mpel); 200+ viewing angles, 180° field of view (FOV); large set-up, high computing power.
• Pinhole screen: reduced horizontal resolution; 32 viewing angles, limited FOV; compact display; green power.
• Half(/full) parallax multiview LCD display: reduced horizontal/vertical resolution; 1 view per arc-min, 180° FOV; compact set-up; MW power consumption.
• Half/full parallax holographic display: N lasers, N SLMs, light wavefront.
Conventional (2D), light field, multiview
and holographic display technologies.
THE ‘HOLOTABLE’ AND OTHER
HOLOGRAPHIC DISPLAYS
In its 2022 FIFA World Cup bid,
Japan envisioned establishing
holographic viewing centers in
208 countries that would allow
spectators to watch the action in
real time, projected up from the
horizontal surface of a holographic
table.
The lynchpin technology for first-
generation holotables will be
spatial light modulators (SLMs)—
thousands of which are required to
generate the display’s visual field.
Ideally, these SLMs would have the
same pixel size as a wavelength of
light: at two microns currently, they
are roughly five times too large to
produce an optimal image (SLM-
generated holographic images have
a resolution of about 2K).
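The reason pixel size matters follows from the standard diffraction-grating relation (our addition, not a formula from the article): an SLM with pixel pitch p can steer light of wavelength λ only within a half-angle of arcsin(λ/2p), so large pixels mean a narrow viewing cone.

```python
import math

def max_deflection_deg(wavelength_um: float, pixel_pitch_um: float) -> float:
    """Half-angle of the cone an SLM can steer light into (grating equation)."""
    return math.degrees(math.asin(wavelength_um / (2.0 * pixel_pitch_um)))

print(max_deflection_deg(0.5, 2.0))  # today's ~2 micron pixels: about 7.2 degrees
print(max_deflection_deg(0.5, 0.5))  # wavelength-sized pixels: 30 degrees
```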
Moreover, SLMs are time-consuming
to manufacture and cost thousands
of dollars apiece, making mass
production and consumer applications
prohibitively expensive in the
short term. Manufacturing SLMs
from carbon nanotubes is a focus
for several research teams today,
however, and if this reveals a way to
efficiently mass-produce SLMs, the
holographic future may be much
closer than the eight to 10 years
currently expected.
THE WAY FORWARD
Solving the various research
challenges surrounding 3D
visualization demands the dedicated
effort of multidisciplinary research
teams that can collectively address
the end-to-end requirements from
capture to representation.
In addition to a strong history
of conducting this type of
collaborative, multidisciplinary
research, iMinds also has the
infrastructure to host teams in
pursuing visualization-related
research.
3D technologies are already
integrated into the way we
consume information. As they
become more fully immersive—en
route to true holography—3D will
shift that consumption to a new
kind of interaction, enabling whole
new types of entertainment and
professional applications.
MAKING VIRTUAL
WORLDS LOAD FASTER
The gaming industry is pushing the boundaries of 3D graphics, creating larger
and fuller worlds, levels and objects. But the more detailed the graphics, the
more memory and storage are required. Founded by researchers who met through an
earlier iMinds project, Flemish company Graphine has developed an innovative
approach to texture-streaming technology that improves graphics quality and
load times while reducing memory usage, disk and download size. We spoke to
Graphine co-founder ALJOSHA DEMEULEMEESTER to find out how the Granite
SDK technology is revolutionizing 3D gaming graphics.
Q: What is texture mapping, and
how is it used?
Aljosha Demeulemeester: In 3D
gaming, objects are represented
by two types of data: geometric
data and texture data. Geometric
data represents the structure of
an object. Texture data defines its
surface properties: the bark on a
tree, the features of human skin. The
level of detail can be quite complex.
Structurally, the side of a building
can be represented by as few as
four geometric data points, one
for each corner. Texturally, though,
you have bricks, windows and
doors with different properties like
colour, roughness, reflection. These
elements can be very intricate.
Q: That has an impact on a gaming
system’s resources.
Aljosha Demeulemeester: Processing
all that detail can be very
draining in terms of memory and
storage. You can only have as
much detail and texture as you have
memory on your graphics card.
Big worlds mean a lot of textures,
and that creates long load times.
And you have to get the game
to the customer somehow, either
physically or via download, making
storage space another limitation.
Game developers are always hitting
the boundaries of hardware. That’s
why we developed our Granite
technology. It cuts video memory
usage by 50 to 75 percent and
storage requirements by 60 percent
or more. Those savings allow us to
improve graphics quality by a factor
of four and reduce load times by a
factor of five.
Q: How does it work?
Aljosha Demeulemeester: Traditionally,
textures are loaded in
bulk. When you start a new level,
the game will load every texture in
that part of the world. Granite uses
a concept called texture streaming
to dramatically shorten load times:
the textures in the level are broken
up into smaller pieces and load
incrementally.
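A hedged sketch of the idea (class and names are ours, not the Granite SDK API): keep resident only the tiles the virtual camera can currently see, loading new ones and evicting the rest as the camera moves.

```python
class TileStreamer:
    """Toy tile-based texture streamer: resident set follows the camera."""

    def __init__(self, loader):
        self.loader = loader      # function: tile_id -> texture data
        self.resident = {}        # tile_id -> loaded texture

    def update(self, visible_tiles):
        visible = set(visible_tiles)
        for tid in visible - self.resident.keys():
            self.resident[tid] = self.loader(tid)   # load incrementally
        for tid in set(self.resident) - visible:
            del self.resident[tid]                  # evict off-screen tiles

streamer = TileStreamer(loader=lambda tid: f"texture:{tid}")
streamer.update([(0, 0), (0, 1)])
streamer.update([(0, 1), (1, 1)])   # (0, 0) is evicted, (1, 1) is loaded
```

A real engine would also stream mipmap levels and prefetch along the camera's motion, but the memory win comes from exactly this residency discipline.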
Q: Is texture streaming a new
concept?
Aljosha Demeulemeester: No. What
is new is how small we’re able to
make those pieces, which we call
‘tiles’. Other texture-streaming
technologies might have a player
walk up to a building, and the
textures both inside and outside
the building will be loaded even if
the player hasn’t entered yet. With
Granite, we load only the textures in
front of the virtual camera. As the
camera moves, new textures are
loaded. That helps us dramatically
reduce load times, freeing up
memory for even better graphics
and gameplay. You get bigger
worlds, more detailed images and
sharper, higher quality textures.
Think of watching a feature length
HD movie on YouTube. If you had
to load the full movie before you
started playing, you’d have to wait
for quite a while—hours on a slow
connection. But if you load the
movie frame by frame as you watch,
you can start watching instantly.
Q: What games are using this
technology?
Aljosha Demeulemeester: iMinds
helped us connect with Belgium’s
largest game developer, Larian,
during the Language Learning
in an Interactive Environment
project—LLINGO. Larian included
our technology in their Dragon
Commander title in August 2013. It’s
been a very productive partnership.
Normally, a development team
makes technology decisions early
in the process—years before a title
ships. But with Dragon Commander,
they were able to integrate Granite
in the final months of development,
cutting storage size by 60 percent
and texture memory consumption
by 80 percent. We are providing
our tech for other games as well—
including the just-announced sci-fi
title, Get Even, from The Farm 51, the
studio behind the Painkiller series.
Q: What was iMinds’ role in helping
Graphine get started?
Aljosha Demeulemeester: It all
came from iMinds’ LLINGO project,
which was looking at different
ways of using game technology for
education. My section focused on
improving high-quality graphics,
with the idea that the better the
graphics are, the more immersive
and effective the learning
experience will be. That’s where I
met the rest of the Graphine team.
We formed the company using a
lot of the technology and concepts
developed during LLINGO.
Q: How has iMinds remained
involved?
Aljosha Demeulemeester: We took
part in iMinds’ iBoot program,
which is sort of a boot camp in
running a start-up that, at the time,
was delivered through six two-
day workshops. At each stage we
refined our pitch, and even made a
15-minute presentation to potential
investors. Our pitch was selected as
the best, but the most important
part was the opportunity for us to
work together as a team outside of
the lab. We knew we meshed well
with our research, but we needed
to find out how to work as business
partners.
The other iMinds program we were
involved in was iStart. It provided
seed funds for initial research,
helped us test our assumptions,
investigate the market and contact
potential customers. And it gave us
office space to work from, which is
critical for a small company in its
early stages. Through all of this, we
had the support of the iMinds team,
who provided not only technical
expertise but also connected us
to other researchers, potential
investors and customers.
Q: Do you foresee any applications
for Granite outside of gaming?
Aljosha Demeulemeester: Absolutely.
Flight and military simulators,
architectural visualization and
design… we’re looking into all of
these. Anything that uses real-time
3D visualization can benefit from
this type of technology.
Q: What’s next for your research
into 3D visualization?
Aljosha Demeulemeester: Well, one
of the biggest challenges we’re
facing now is storage. Breaking a
huge virtual world up into smaller,
discrete pieces certainly helps with
loading and displaying the graphics,
but the size of the world doesn’t
change—and that requires storage.
If you’ve got a 600 gigabyte world,
you still need to get that to the
customer somehow. So one of our
research paths is focused on storage
and texture compression. We are
already boosting Blu-Ray capacity
by a factor of 2.5. We’re hoping
to increase this even further with
similar or better graphics quality.
A second research area is looking at
bringing our technology to mobile
platforms, where it will have a lot
of utility because of the memory
and storage limitations associated
with mobile devices. We’re working
on that with iMinds. Along with our
texture-streaming technology, we
think this research will give game
developers the tools they need
to create bigger, better and more
expansive worlds—on whatever
platform they choose.
ABOUT
GRAPHINE
Combining academic roots
with industry experience,
Graphine delivers rock-solid
middleware and high-quality
services to the gaming and
3D visualization industries.
Its middleware—Granite
SDK—minimizes memory
usage, storage size and
loading times while allowing
the use of massive amounts
of unique texture data.
Graphine also helps clients
integrate its technology
into their products,
providing guidance and
best practices on how to
deliver high-fidelity, real-
time 3D graphics as well as
optimization of technology
stacks.
BRINGING
THE FUTURE
TO YOUR TV
For television manufacturers like
TP Vision, innovation is all about
the viewer experience. New
features need not only to enrich
people’s enjoyment of their
favorite programs but also do
so in ways the marketplace may
not yet fully expect. RIMMERT
WITTEBROOD, Innovation Lead
for Picture Quality and Displays
at TP Vision, explains how that
mission has led TP Vision to
explore the potential of 3D TV.
Q: What is driving TV companies’
efforts to add 3D as a feature?
Rimmert Wittebrood: If you look at
TV technology as it has evolved—
in terms of picture quality, sound,
etc.—3D is a big experience that
has been missing. Television makers
need to cater to the potential
wishes of consumers, so 3D has
been on our minds for many years.
Of course, consumers haven’t
necessarily been saying, “We would
really like our television to deliver a
3D picture.” But one doesn’t always
know what’s missing if one hasn’t
experienced it in the first place.
Nobody asked for a smartphone,
for example: the telecom industry
started adding smart features to
devices and consumers awakened
to the possibilities. 3D could be the
same.
Q: Given that, what’s involved in
bringing 3D to the market?
Rimmert Wittebrood: It requires the
full chain of supply: you need the
right technology, the right content
and the right cost point. Things
started to come together for 3D
a few years ago, which led to the
introduction of the first 3D-enabled
TVs in 2010. I think, though, that
we need to see big investments by
the content industry and the panel
suppliers—the companies that make
LCD screens—to deliver the best 3D
viewing experience.
Q: How big of a market are you
looking at?
Rimmert Wittebrood: Maybe the
current market estimates for 3D are
over-positive, but it’s hard to project
how any feature will evolve or
mature. You see, we don’t make ‘3D
TVs’: 3D is a feature of televisions,
along with many other features.
From that perspective, 3D has been
implemented in more than half the
TVs made by any given supplier.
And of course there are millions and
millions of TV sets in the world, so
the potential market is enormous.
Q: You mention the need for LCD
panel advancements. What are
some of the other barriers currently
limiting the adoption of 3D in
televisions?
Rimmert Wittebrood: One of the
biggest hurdles is that current
stereoscopic 3D technology
requires glasses, which makes
it cumbersome to use. As well,
crosstalk—interference between
the right and left eyes that causes
strain—can make the experience less
comfortable. Autostereoscopic 3D
does not require glasses, but it has
other drawbacks: cost, quality and
restrictions on where the viewer
can sit in relation to the display. For
those reasons, autostereoscopic
3D is likelier for business-to-
business markets in the near term,
not the consumer TV marketplace.
For commodity televisions today,
stereoscopic is the only option.
Q: You did some research into
autostereoscopic 3D as part of
the iMinds 3D 2.0 project. How did
that project come about?
Rimmert Wittebrood: In 2010,
we were involved with the
JEDI project—Just Exploring
DImensions—an EU-wide ultra-
high definition and 3D initiative
focused on exploring the next
image formats and how we must
prepare the broadcast chain to
support them. It was through the
other Belgian participants in that
project that we got invited to
join iMinds’ 3D 2.0. That project
was multifaceted: there was our
company, there was Alcatel-Lucent
looking at videoconferencing,
there was GrassValley working on
camera systems, and a network
of university partners supporting
it all. We worked very closely
with the universities on our own
applications, including processing
for autostereoscopic displays. This
led to the development of a fully
working prototype, which has been
displayed at various large trade
shows like the IFA in Berlin and the
CES in Las Vegas.
Q: What did that research yield?
Rimmert Wittebrood: We gained
a good understanding of the
challenges and where more
research needs to be done. For
autostereoscopic 3D, you put a lens
on top of an ultra-HD panel to steer
the light of the pixels in various
directions. This effectively reduces
the resolution of the display by the
number of viewing angles, or views,
that you create. So if you have nine
views, your resolution is reduced by
a factor of nine. What this means is
you need to start with a very high-
resolution display. In 2012 we were
working with 4K by 2K. Using that
in combination with our proprietary
lens design, the 3D picture quality
would have been comparable to
High-Definition TV, but that’s no
longer sufficient for a market where
Ultra High-Definition is rapidly being
introduced. There are trends now
toward 8K displays. That might be
the innovation we need.
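The arithmetic behind that resolution penalty is plain division; the helper below (ours, assuming a 3840×2160 “4K” panel) shows why the trend toward 8K matters:

```python
def per_view_megapixels(width: int, height: int, views: int) -> float:
    """Pixels left per view once a lens splits the panel among the views."""
    return width * height / views / 1e6

print(per_view_megapixels(3840, 2160, 9))  # ~0.92 Mpel per view on a 4K panel
print(per_view_megapixels(7680, 4320, 9))  # ~3.69 Mpel per view on an 8K panel
```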
The other issue to resolve relates
to the limited viewing positions.
Because of the way the 3D image
is created, the viewing positions are
divided into a number of cones in
front of the TV. If you’re outside the
cone, the image inverts. This is very
uncomfortable, so we have been
looking into this as well. The most
basic solution is user tracking via a
camera—face detection—so the TV
can direct the cone at the viewer. But
this requires robust face detection,
complex processing and adaptations
in the display itself, which has an
impact on the image quality, the TV
specifications and cost. So today, 3D
TV is necessarily stereoscopic.
Q: But there have been advances
on that front as well. What
differentiates the newest generation
of stereoscopic 3D from the
previous?
Rimmert Wittebrood: There are
two aspects, not so much related
to processing as to the panel
technology. One has been making
the liquid crystal in the displays faster
to react. This increases quality, with
less crosstalk. The other is cost: the
panel industry is focused on driving
costs down, delivering better quality
at a lower price.
ABOUT TP VISION
TP Vision develops, manufactures and markets Philips-branded TV
sets in Europe, Russia, the Middle East, South America and select
countries in Asia-Pacific. A leader in the hospitality industry, it
provides televisions to the majority of the world’s major international
and national hotel groups, as well as individual hotels, hospitals,
cruise ships and other professional facilities.
Q: What will we gain from more
advanced 3D sensors and software?
Eric Krzeslo: At the highest
level, 3D vision can bring a real
sense of human vision to our
computers, making them much
more intelligent. Take our mobile
devices, for example. We call them
smartphones, but really they’re
quite dumb. You always have to
interact with them, tell them what
you want them to do. Instead of
constantly asking for inputs and
information, a device enabled with
true 3D vision will understand the
user and the user’s environment,
anticipating the user’s needs much
better by actually ‘seeing’ what’s
going on around him or her and
responding intelligently.
Q: How far away are we from that?
Eric Krzeslo: We’re getting to
that point pretty quickly. Sensor
technology is getting small
enough and cheap enough to use
in a wide variety of devices, and
there are viable applications for
these types of technologies. There
are improvements to be made in
resolution, the distance our sensors
and cameras can see, how they deal
with lighting… we’re working on all
that. But we’re really not all that far
off.
SEEING A
3D FUTURE
3D sensors and software will revolutionize the way devices interact with
our environment—and how we interact with those devices. Formed by
the merger of two companies spun out of research groups at the Vrije
Universiteit Brussel, with support from the Université Libre de Bruxelles,
SoftKinetic has become a leader in both 3D cameras and 3D gesture-
recognition software. We spoke with co-founder ERIC KRZESLO about
how new 3D vision technologies will affect our daily lives.
Q: SoftKinetic develops sensors
and software. What are the key
applications and use cases you’re
looking at?
Eric Krzeslo: We started with
gesture recognition, which we
call ‘active control’, with the user
controlling a device; providing it
with input. Gaming is a big use
case here, obviously, with motion-
sensing technology like what you’d
find in a PlayStation 4. We’re also
looking at something we call
‘context awareness’, where the
device gathers information about
its environment and analyzes
what’s happening without direct
input from the user. There are
already early applications of this
kind, for instance, in the automotive
industry. You might have a device
that analyzes road conditions,
weather or even the driver—and
then sounds an alert if he looks
like he’s falling asleep or is driving
too fast for the traffic conditions.
The third area we’re looking at
is ‘real-world acquisition’, which
is basically 3D scanning for 3D
printing: a technology that could
be used in a wide variety of
industries, from tailoring custom
clothes to making perfectly fitted
braces for healthcare.
“ONE OF OUR PATENTS LETS US CAPTURE 3D IMAGES MUCH MORE EFFICIENTLY THAN COMPETING TIME-OF-FLIGHT CAMERAS.”
Q: What technologies have you
developed to enable these use
cases?
Eric Krzeslo: Key to our products
and research is our time-of-flight
camera, which is a ‘true’ 3D sensing
technology. While stereoscopic
technologies simply compute 3D
images using 2D data, the time-
of-flight camera actually senses
the 3D by emitting infrared light
that accurately measures depth.
That gives us much more data
and precision to work with. One of
our patents, the Current Assisted
Photonic Demodulation (CAPD),
lets us capture 3D images much
more efficiently than competing
time-of-flight cameras. We use the
depth of the sensor to collect more
photons—and more information—
more efficiently, using less energy
and fewer pixels.
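The time-of-flight principle described here can be sketched in a few lines. This is an illustrative simplification, not SoftKinetic's actual pipeline: a continuous-wave ToF sensor recovers the phase shift between the emitted and returned infrared signal and converts it to distance (the function and parameter names below are invented for illustration).

```python
import math

C = 299_792_458.0  # speed of light, in m/s

def tof_depth(phase_shift_rad: float, modulation_hz: float) -> float:
    """Depth from the phase shift of the returning modulated light:
    d = c * phi / (4 * pi * f). Unambiguous only up to c / (2 * f)."""
    return C * phase_shift_rad / (4 * math.pi * modulation_hz)

# A quarter-cycle shift (pi/2) at 20 MHz modulation corresponds to a
# quarter of the ~7.5 m unambiguous range, i.e. roughly 1.87 m.
depth = tof_depth(math.pi / 2, 20e6)
```

The formula also shows the resolution/range trade-off Krzeslo alludes to: raising the modulation frequency improves depth precision but shrinks the unambiguous range c / (2f).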
We’ve also developed middleware
called iisu (which stands for “the
interface is you”), which is designed
to improve human gesture
recognition. Full-body tracking
identifies the user, separates him
or her from the environment, and
tracks the position and movement
of the hands, shoulders, feet,
chest, etc. This becomes a
skeleton model that game or app
developers can use to create an
avatar that responds to the user’s
movements. We also have a second
part to the iisu middleware that
is much more focused on hands
and fingers and can be used for
close-range applications such as
mobile devices—for things like
virtual keyboards and controlling
applications and games on your
smartphone without actually
touching the screen.
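To make the skeleton-model idea concrete, here is a minimal sketch of the kind of data structure such middleware could hand to a game or app developer. The names and API are hypothetical and do not reflect the actual iisu interface.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float           # height above the floor, in metres
    z: float           # distance from the camera, in metres
    confidence: float  # tracking confidence in [0, 1]

class Skeleton:
    """Toy skeleton model: a handful of named, tracked joints."""
    JOINTS = ("head", "chest", "left_hand", "right_hand",
              "left_foot", "right_foot")

    def __init__(self):
        self.joints = {name: Joint(0.0, 0.0, 0.0, 0.0)
                       for name in self.JOINTS}

    def update(self, name, x, y, z, confidence):
        self.joints[name] = Joint(x, y, z, confidence)

    def hand_raised(self, side="right", margin=0.1):
        """True when the hand is tracked above the head by `margin` metres."""
        hand = self.joints[side + "_hand"]
        head = self.joints["head"]
        return hand.confidence > 0.5 and hand.y > head.y + margin
```

An avatar controller would poll a model like this every frame, mapping tracked joint positions onto the character's rig.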
Q: Your gesture-recognition
technology has already hit the
market, correct? How far away are
your other technologies?
Eric Krzeslo: Our software is part of
the PlayStation 4 and has already
been incorporated into Ubisoft’s
Just Dance 2014 game. We’ve
also announced other new games
that will use our technology: Just
Dance 2015, Rabbids Invasion:
The Interactive TV Show and a
game called Commander Cherry’s
Puzzled Journey. That game is quite
interesting, as it uses the shape
of the user’s body to build layers
and platforms in the game. In our
context-awareness division, we’ll
have technology on the market
next year. We’re working with
Delphi in Detroit to develop sensor
technology for cars. And for our
3D reconstruction software, we’re
working with MakerBot to develop
a high-precision 3D scanner for use
with 3D printers that is accessible for
consumers. Most of these scanners
run up to $10,000; ours will retail for
a few hundred.
Q: How did SoftKinetic come to be?
Eric Krzeslo: We were born when
iMinds was initiated at ETRO in
the Vrije Universiteit Brussel’s
Department of Electronics and
Informatics, where a lot of research
was being done in 3D sensing
and processing. Simultaneously,
the Université Libre de Bruxelles
gave its support to our visionary
entrepreneurs in need of computer
vision expertise. Two companies
resulted from that research. SoftKinetic
SA was looking at a natural way
of interacting with 3D content,
while Optrima NV was building 3D
cameras. We started using Optrima’s
prototypes for 3D interaction
research and found them much more
effective than the fully developed
cameras already on the market.
We brought the two companies
together so we could control both
the software and the hardware.
Now we can optimize the complete
technology flow, from the pixels to
the camera system to the software.
Q: What’s next for the company?
Eric Krzeslo: Well, in the mid-term,
we’re looking to strengthen our
position in gaming, mobile devices,
3D scanning and the automotive
industry. This may sound strange,
but over the long term we want
to be invisible. We see a future
where 3D sensors completely
replace 2D sensors. I mean, we’ve
got cameras in almost everything.
You’ve probably got two on your
phone, one on your laptop, maybe
one on your TV. And we want to be
providing that technology. We want
to be ubiquitous, in all these devices,
helping interpret what’s happening
and anticipating user needs—not
reading minds, but close to it.
And we’ll get there by developing
sensors that are smaller and less
expensive with greater resolution,
can cover longer distances, and with
software extremely optimized for
active control, context-awareness
and real-world acquisition.
Basically, we want to bring you
and your environment into your
devices—making them smarter and
more responsive, without you even
noticing.
ABOUT
SOFTKINETIC
SoftKinetic is a Brussels-
based semiconductor and
software development
company that provides 3D
time-of-flight sensors and
cameras as well as advanced
middleware for building real-
world acquisition, context
awareness and active control
applications. Its patented
technologies provide a
direct way of acquiring 3D
information about objects,
people and the entire
environment. SoftKinetic’s
3D vision solutions will
revolutionize man/machine
and machine/world
interactions in domains such
as entertainment and gaming
applications, automation,
security, automotive, and
health and lifestyle by
providing real-time, easy-to-compute 3D images.
“WE FOUND STRONG DIFFERENCES IN HOW WELL PEOPLE PERCEIVE STEREO, WITH 6 TO 10 PERCENT BEING STEREOBLIND.”
SEEING A WAY AROUND STEREOBLINDNESS
Today’s consumer 3D technology is stereoscopic—imitating the way the human brain interprets visual
information to perceive depth in a process known as stereopsis. Yet for a portion of the population,
stereoscopic 3D doesn’t work: they are affected by what’s known as stereoblindness. At Ghent
University, iMinds’ Media and ICT (MICT) research group has developed a new, cheaper and easier
way to test for stereoblindness. We spoke to researchers JAN VAN LOOY and JASMIEN VERVAEKE
about their test and the potential impact of stereoblindness on the future of 3D visualization.
Q: What is stereopsis and how
does it factor into 3D visualization
technology?
Jan Van Looy: Because our eyes are
located in slightly different positions,
they give our brains slightly different
images of the world. These images
are compared and combined in the
brain, creating a single, 3D image
of the world. That process is called
stereopsis, and it’s one—but just
one—of the many ways we can
perceive depth.
Jasmien Vervaeke: Most 3D
visualization technology is based
on stereoscopy, which mimics
stereopsis by making your brain
believe that a flat, 2D image has
depth. In its most basic sense, think
of watching today’s 3D films with
coloured 3D glasses. On screen,
you’d have one image projected
with a red tint, and an image with
a cyan tint, filmed by cameras that
were offset by the distance between
your pupils. Your glasses would have
one cyan lens and one red lens too,
ensuring that each eye would only
receive a single image. Your brain
combines the two images into one
and interprets this as a 3D picture.
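The colour-filtering scheme just described can be sketched directly: the left camera's view supplies the red channel and the right camera's view the green and blue (cyan) channels, so each tinted lens passes only the view meant for that eye. A toy illustration:

```python
def anaglyph(left, right):
    """Compose a red/cyan anaglyph from two equally sized views,
    each a 2D grid of (r, g, b) tuples: red comes from the left
    view, green and blue (cyan) from the right view."""
    return [
        [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Toy 1x2 frame: where both views agree the pixel stays neutral;
# where only the right view is lit, the result is pure cyan.
left = [[(255, 255, 255), (0, 0, 0)]]
right = [[(255, 255, 255), (255, 255, 255)]]
frame = anaglyph(left, right)  # [[(255, 255, 255), (0, 255, 255)]]
```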
Q: How does stereoblindness affect
that?
Jan Van Looy: Stereoblindness is
a condition that interferes with
stereopsis, preventing you from
perceiving depth in stereoscopic
imagery. It tends to affect all types
of stereoscopic imagery, since the
various technologies function in
basically the same way. So whether
you have coloured glasses or
polarizing filters, someone suffering
from stereoblindness won’t be able
to make a 3D image using either.
Jasmien Vervaeke: It’s caused by a
variety of factors—there’s not really
one single cause. The literature
suggests about 10 percent of the
population has stereoblindness to
a certain degree. It can be caused
by several eye disorders, such as a
lazy eye or a substantial difference
in visual acuity between your eyes,
among others. And there’s no
immediate treatment or cure for it
at the moment.
Jan Van Looy: Most people with
stereoblindness don’t even know
they have it. There are so many
other ways to perceive depth, as
we mentioned before, that your
brain would more than make up for
it in the real world. It’s only when
you’re trying to watch a 3D movie,
for example, that you might notice.
And, of course, if you get tested.
Q: What made you investigate a
new stereopsis testing method?
Was something wrong with the
previous tests?
Jasmien Vervaeke: The traditional
tests for stereopsis are expensive,
for one, and they require special
equipment. Ours is very simple, by
comparison. It’s basically a random
dot stereogram, so two images that
look like random noise, with one to
10 squares shifted slightly within
the images. It’s displayed on a 3D
TV, and we have participants put
on glasses and count the squares. If
you’re not stereoblind, the squares
pop out of the screen. Each number
of squares is shown twice, resulting
in 20 trials, and the participants end
up with a score out of 20.
Our test can be used on any
3D screen, and we’ve made
it open-source and free for
download through SourceForge
at sourceforge.net/projects/
stereogramtest.
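The core of the test, two noise images that differ only in slightly shifted regions, can be illustrated with a simplified sketch (the actual tool on SourceForge is more elaborate, handling multiple squares, 3D-display output and scoring):

```python
import random

def stereogram_pair(width, height, square, disparity, seed=0):
    """Build a left/right random-dot pair. `square` is (x, y, size);
    within it, the right image copies the left image's dots shifted
    `disparity` pixels leftward, so the fused square appears to float
    in front of the screen. Everywhere else the images are identical."""
    rng = random.Random(seed)
    left = [[rng.randint(0, 1) for _ in range(width)]
            for _ in range(height)]
    right = [row[:] for row in left]
    x, y, size = square
    for j in range(y, y + size):
        for i in range(x, x + size):
            right[j][i - disparity] = left[j][i]
    return left, right

left, right = stereogram_pair(64, 64, square=(20, 20, 10), disparity=2)
```

A viewer with intact stereopsis fuses the pair and sees the square pop out; a stereoblind viewer sees two indistinguishable noise fields, which is exactly what the counting task measures.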
Q: You performed the test at an
iMinds conference, is that right?
Jan Van Looy: We had about 130
volunteers and tested them all to
see how well they were able to
perceive the squares. About six
percent couldn’t see any squares at
all. Some had scores that were
higher than random, but still not
very good. But the large majority
scored 15 or above.
Jasmien Vervaeke: This suggests
the generally accepted figure of 10
percent of people suffering from
stereoblindness may be a little high.
It may be more between six and
10 percent, and there’s gradation.
In other words, it’s not necessarily
binary—that is, that you have
stereoblindness or you don’t. There
are stages in between.
Q: How are you following up on
your initial research?
Jasmien Vervaeke: We’re working
on a new project right now, an
exploratory study of how well
children perceive stereoscopic 3D:
looking at the age at which stereopsis
develops, when children reach
an acceptable level and what
factors might affect this ability. It’s
generally assumed that stereopsis
develops after age three, but we
don’t know much more than that.
Jan Van Looy: We’re taking a look
at kids aged six to 12, using our
test as well as examining physical
features such as the distance
between pupils—which changes
with age—and cognitive factors,
things like that. We want to see
if there are correlations between
these factors and the children’s
performance on the stereoscopy
test. Early results seem to indicate
that it is not the distance between
the pupils, as is often assumed, but
the development of the brain—more
particularly, how well both sides of
the brain communicate with one
another—that determines how well
you see stereoscopic 3D.
Q: From your perspective, why is
this research important?
Jan Van Looy: Well, stereoscopy
is really coming back into the
spotlight. Many people have noticed
that 3D stereoscopy in the home,
such as 3D TV, hasn’t
really taken off. And that’s due to
a number of factors. People don’t
like wearing the glasses, for one,
but it also has to do with quality
of experience, which of course is
affected by how well people see 3D.
Jasmien Vervaeke: As 3D
visualization develops and we start
looking at autostereoscopic screens
that don’t require glasses, knowing
more about how the brain sees
3D will help refine the technology.
Being able to measure stereopsis
and detect stereoblindness will
help unlock a larger 3D market
and also potentially unearth new
applications for 3D visualization.
Jan Van Looy: In iMinds projects
like ASPRO+, we’re working with
engineers and companies that are
developing new technologies using
autostereoscopy and multiview
displays, different screens and
processes. Our test can support the
development of these technologies
by providing valuable information
about the consumers of 3D content.
Before you create the technology,
you need to understand the
audience. We’re contributing to
that understanding.