The document discusses various types of computational displays and technologies, organized by headings and bullet points. It covers topics like high dynamic range display systems, projection technologies, eyeworn displays, lighting-sensitive displays, reflectance displays, and transmission displays. Many of the bullet points provide citations to related works.
We propose a flexible light field camera architecture at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor comprising tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
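The sparsity-constrained optimization mentioned above can be illustrated on a toy problem. The sketch below is not the paper's method; it solves a generic sparse-recovery problem (minimize ||y - Ax||^2/2 + lam*||x||_1) with ISTA, where A stands in for an angle-sensitive-pixel measurement matrix and x for sparse light field coefficients. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A @ x with x sparse. A stands in for the measurement
# matrix of the sensor; x for light field coefficients in a sparsifying basis.
n, m, k = 256, 64, 8                      # signal size, measurements, nonzeros
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

# ISTA: iterative shrinkage-thresholding for min ||y - Ax||^2/2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz const of gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                 # gradient of the quadratic term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # relative error
```

With many fewer measurements than unknowns (here 64 vs 256), the sparsity prior is what makes recovery possible at all, which is the core trade the abstract describes.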
Driven by the recent resurgence of 3D cinema, depth cameras and stereoscopic displays are becoming commonplace in the consumer market. Introduced last October, Microsoft Kinect has already fostered gesture-based interaction for applications well beyond the intended Xbox 360 platform. Similarly, consumer electronics manufacturers have begun selling stereoscopic displays and inexpensive stereoscopic cameras. Most commercial 3D displays continue to require cumbersome eyewear, but inexpensive, glasses-free 3D displays are imminent with the release of the Nintendo 3DS.
At SIGGRAPH 2010, the Build Your Own 3D Display course demonstrated how to construct both LCD shutter glasses and glasses-free lenticular screens, providing Matlab-based code for batch encoding of 3D imagery. This follow-up course focuses more narrowly on glasses-free displays, describing in greater detail the practical aspects of real-time, OpenGL-based encoding for such multi-view, spatially multiplexed displays.
The course reviews historical and perceptual aspects, emphasizing the goal of achieving disparity, motion parallax, accommodation, and convergence cues without glasses. It summarizes state-of-the-art methods and areas of active research. And it provides a step-by-step tutorial on how to construct a lenticular display. The course concludes with an extended question-and-answer session, during which prototype hardware is available for inspection.
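The spatial multiplexing that the course's real-time encoders perform can be sketched minimally: each panel pixel column is assigned to one of N views according to its position under the lenticule covering it. Real encoders handle slanted lenticules and RGB subpixel layout; this illustrative version assumes vertical lenticules exactly N pixels wide.

```python
import numpy as np

def interleave(views):
    """views: array of shape (N, H, W) -> interleaved (H, W) panel image."""
    n, h, w = views.shape
    cols = np.arange(w)
    view_idx = cols % n            # which view each pixel column displays
    out = np.empty((h, w), dtype=views.dtype)
    for v in range(n):
        mask = view_idx == v
        out[:, mask] = views[v][:, mask]
    return out

# Four constant "views" make the interleaving pattern visible.
views = np.stack([np.full((4, 8), v) for v in range(4)])
print(interleave(views)[0])        # -> [0 1 2 3 0 1 2 3]
```

In the real-time OpenGL setting this per-column assignment becomes a fragment shader lookup, but the mapping it computes is the same.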
HR3D: Content Adaptive Parallax Barriers, SIGGRAPH Asia 2010 Technical Paper presentation, presented by Douglas Lanman (http://web.media.mit.edu/~dlanman). Please see the project page for more details: http://web.media.mit.edu/~mhirsch/hr3d
This is a project in the Camera Culture group (http://cameraculture.media.mit.edu) at the MIT Media Lab, led by Professor Ramesh Raskar (http://web.media.mit.edu/~raskar).
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing across the parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph, by Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
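The core GraphRAG idea covered in the talk, retrieving an entity's graph neighborhood as LLM context rather than isolated text chunks, can be sketched in a few lines. The toy graph, entity matching, and prompt format below are invented for illustration and are not FalkorDB's or Microsoft's implementation.

```python
from collections import defaultdict

# A tiny knowledge graph as (subject, predicate, object) triples.
edges = [
    ("FalkorDB", "is_a", "graph database"),
    ("GraphRAG", "combines", "knowledge graphs"),
    ("GraphRAG", "combines", "large language models"),
    ("knowledge graphs", "store", "entities and relations"),
]

graph = defaultdict(list)
for s, p, o in edges:
    graph[s].append((p, o))

def graph_context(query: str) -> str:
    """Collect facts about every known entity mentioned in the query,
    to be prepended to the LLM prompt as grounding context."""
    facts = [f"{s} {p} {o}"
             for s in graph if s.lower() in query.lower()
             for p, o in graph[s]]
    return "Context:\n" + "\n".join(facts)

print(graph_context("What does GraphRAG combine?"))
```

Real systems replace the substring match with entity linking and the dict with a graph database query, but the retrieval-then-prompt shape stays the same.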
Epistemic Interaction - tuning interfaces to provide information for AI support, by Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti..., by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdf, by Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) involves many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow, manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats because of the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
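To make the deployment bill of materials (DBOM) concrete, here is a hypothetical minimal record of what one might capture at deploy time. The field names and schema below are invented for illustration; they are not the schema used by OpsMx or any particular tool.

```python
import datetime
import hashlib
import json

# Hash the deployed artifact so the record is tied to exact binary contents.
artifact = b"example-service-1.4.2 binary contents"

# A minimal, hypothetical DBOM-style record: what was deployed, where,
# when, and with which dependency versions.
dbom = {
    "service": "example-service",
    "version": "1.4.2",
    "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
    "deployed_at": datetime.datetime(2024, 5, 29, 12, 0).isoformat(),
    "environment": "production",
    "dependencies": [
        {"name": "openssl", "version": "3.0.13"},
        {"name": "zlib", "version": "1.3.1"},
    ],
}
print(json.dumps(dbom, indent=2))
```

Capturing a record like this at every deployment is what lets a team answer "which environments are running the vulnerable version?" quickly when a CVE lands.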
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution-engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
JMeter webinar - integration with InfluxDB and Grafana, by RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
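Under the hood, JMeter's Backend Listener ships each sample to InfluxDB in InfluxDB's line protocol (measurement, comma-separated tags, space, fields, space, nanosecond timestamp). The sketch below builds such lines by hand; the measurement and tag names are illustrative, not JMeter's exact schema.

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one InfluxDB line-protocol record:
    measurement,tag=v,... field=v,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# One hypothetical load-test sample: the 'login' transaction took 142 ms
# while 25 threads were active.
line = to_line(
    "jmeter_sample",
    {"label": "login", "status": "ok"},
    {"response_ms": 142, "threads": 25},
    1717000000000000000,
)
print(line)
# -> jmeter_sample,label=login,status=ok response_ms=142,threads=25 1717000000000000000
```

Grafana then queries these records by measurement and tag, which is why tagging samples with the transaction label is what makes the dashboards useful.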
1. Edit this text to create a Heading
ο§ Computational Displays as Next-
This subtitle is 20 points
ο§ generation Technology
Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets inFast Forward!
the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
Gordon Wetzstein
(seven). MIT Media Lab
ο§ Sub bullets look like this
2. HDR Display Systems
Edit this text to create a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
Local dimming, Sony Micro-dimming, Samsung
3. Edit this text to create β Dual Modulation
HDR Display Systems a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Seetzen et al. 2004, Dolby 2008]
4. Edit this text to create β Dual Modulation
HDR Display Systems a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
[Bimber and Iwai 2008]
(seven). [Bimber et al. 2010]
ο§ Sub bullets look like this
5. Edit this text to create β Dual Modulation
HDR Display Systems a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Kusakabe 2009]
6. HDR Projection β Light Heading
Edit this text to create aReallocation
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Hoskinson 2010]
7. Edit this text to Projectors β Multi-device Systems
Computational create a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Raskar et al. 1998] [Majumder and Brown 2007]
8. Edit this text to Projectors β Radiometric Compensation
Computational create a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Raskar et al. 2001] [Bimber et al. 2007]
9. Edit this text to Projectors β Dual Photography
Computational create a Heading
[Wetzstein and Bimber 2007]
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ [OβToole and Kutulakos
Longer bullets in the form of2010] a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Sen et al. 2005]
10. Edit this text to Projectors β Synthetic Aperture
Computational create a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Levoy et al. 2004]
11. Edit this text to Projectors β Multi-focal Display
Computational create a Heading
ο§ This subtitle is 20 points
ο§ Bullets are blue
ο§ They have 110% line spacing, 2 points before & after
ο§ Longer bullets in the form of a paragraph are harder to
read if there is insufficient line spacing. This is the
maximum recommended number of lines per slide
(seven).
ο§ Sub bullets look like this
[Bimber and Emmerling 2006]
12. Computational Projectors – Coded Apertures
Contrast Sensitivity Function
[Grosse et al. 2010]
13. Computational Projectors – Coded Apertures
[Grosse et al. 2010]
14. Computational Projectors – Superresolution
[Sajadi et al. 2012]
15. Eyeworn Displays
Steve Mann – EyeTap
Google
ARToolKit
16. Eyeworn Displays
[Wetzstein et al. 2010]
17. Eyeworn Displays
Modulation off / Modulation on
Color de-metamerization
Contrast manipulation
Optical object highlighting
[Wetzstein et al. 2010]
18. Lighting-Sensitive Displays (4D)
[Nayar et al. 2004]
19. Lighting-Sensitive Displays – PixelSense (4D)
[Microsoft + Samsung 2011]
20. Lighting-Sensitive Displays – BiDi Screen (6D)
LCD, diffuser, camera, lights
[Hirsch et al. 2009]
21. Lighting-Sensitive Displays – BiDi Screen (6D)
[Hirsch et al. 2011]
22. Lighting-Sensitive Displays – BiDi Screen (6D)
[Hirsch et al. 2011]
23. Lighting-Sensitive Displays – 6D Display
[Fuchs et al. 2008]
24. Lighting-Sensitive Displays – 8D Displays
[Hirsch et al. 2012]
SIGGRAPH 2012 Poster
[Tompkin et al. 2012]
SIGGRAPH 2012 ETech
25. Computational Reflectance Displays
Scratch Holograms [W. Beaty 1995]
[Regg et al. 2010]
26. Computational Reflectance Displays
[Weyrich et al. 09]
BRDF Display [Hullin et al. 11]
27. Computational Reflectance Displays
SIGGRAPH 2012 ETech
[Ochiai et al. 12]
28. Computational Reflectance Displays
metallic / diffuse
[Matusik et al. 2009] [Hasan et al. 2010]
29. Computational Transmission Displays
Goal-based Caustics [Papas et al. 11]
Shadows [Baran et al. 12]
30. Neri Oxman – MIT Media Lab
31. Computational Rubber Balloons
[Skouras et al. 2012]
32. Computational Probes
Transparent, refractive object; light field probe; camera
[Wetzstein et al. 2011a, 2011b]
33. (untitled)
34. Computational Probes
[Wetzstein et al. 2011a, 2011b]
35. Computational Ophthalmology
[Pamplona 2010]
36. Computational Ophthalmology – Refractive Errors
Inverse of Shack-Hartmann, user interactive!
Spot diagram on cell-phone LCD
Displace 25 points but 3 parameters
[Pamplona et al. 2010]
37. Computational Ophthalmology – Cataracts
[Pamplona et al. 2011]
38. Computational Ophthalmology – Cataracts
Moving patterns on screen; lens; pinhole; cell-phone display
[Pamplona et al. 2011]
39. Computational Ophthalmology – Tailored Displays
[Pamplona et al. 2012]
40. Computational Ophthalmology – Retinal Imaging
[Lawson et al. 2012]
41. (untitled)
42. Computational Projectors – Structured Illumination
[Bruno et al. 2011]
43. Computational Projectors – Inverse Light Transport
[Wetzstein and Bimber 2007] [O'Toole and Kutulakos 2010]
44. Computational Projectors – Dual Photography
[Sen et al. 2005]
45. Light Probing w/ Computational Illumination
All-optical operations!
Photograph / Indirect Illumination / Direct Illumination
[O'Toole et al. 2012]
46. Fabricating Cardboard Models
[Hildebrand et al. 12]
[Matusik et al. 2009]
47. Computational Slippers
[Bickel et al. 2010]
48. Fabricating Articulated Characters
Skinned mesh; optimized joints; 3D printed model
Editor's Notes
Similar ideas have also been applied to increasing the contrast of static prints and other hardcopies. For this purpose, a projector can be used to illuminate the print, an e-reader, x-ray transparencies, or any other type of low-contrast display. As long as the projector is registered with the secondary display, it can illuminate it with the exact image shown on the hardcopy to increase its dynamic range, as seen in these examples on the top. Oliver Bimber also explored the concept of dual modulation for microscopy. The optical design is more involved than for simple printouts, but the idea is the same: a camera observes a specimen, and the optics are built so that a programmable light source illuminates it so as to optically enhance the observed contrast. With live camera feedback, the projected images can also be adjusted to allow for dynamic content such as live specimens.
Dual modulation has the potential to increase the dynamic range of a variety of other displays as well. As this schematic shows, the dynamic range of projectors can be extended through dual modulation. What we see is the design of an HDR projector that consists of a light source on the left, a conventional reflective or transmissive spatial light modulator for each color channel in the center, and an additional modulator on the right. While the latter only modulates the luminance channel, the dynamic range of displayed luminance values increases as the black level decreases. Note that the human visual system is most sensitive to contrast in luminance perception and not very sensitive to chrominance contrast. In effect, the optical projector design enhances the capabilities of the device in a perceptually optimal manner. Exploiting the limitations of human perception in display optics design and the corresponding computational processing is the spirit of computational displays.
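The dual-modulation idea can be sketched numerically: factor a target luminance image into a coarse rear modulator times a fine front panel. This is a toy illustration, not the processing of any specific HDR projector; the square-root split and the box-blur radius standing in for the rear modulator's low resolution are assumptions.

```python
import numpy as np

def box_blur(img, r):
    """Crude box blur standing in for the rear modulator's low resolution."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def dual_modulation_split(target, r=4):
    """Factor target into rear * front ~= target. The square-root split
    assigns half of the dynamic range (in log terms) to each modulator."""
    base = np.sqrt(np.clip(target, 0.0, None))
    rear = np.maximum(box_blur(base, r), 1e-6)  # low-resolution backlight
    front = np.clip(target / rear, 0.0, 1.0)    # panel range is [0, 1]
    return rear, front

# toy scene: a bright highlight on a dark background
img = np.full((64, 64), 0.01)
img[30:34, 30:34] = 1.0
rear, front = dual_modulation_split(img)
recon = rear * front
```

Because the front panel is clipped to [0, 1], the reconstruction never exceeds the target; highlights larger than the blurred backlight can support are dimmed, which is exactly the artifact real dual-modulation systems must manage.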
A somewhat more sophisticated approach to high dynamic range projection was recently presented at Siggraph Asia. While the previous HDR projector blocks a lot of light inside the device to achieve a lower black level, this projector recycles excess background light in dark image areas. Using an analog micro-mirror array in the optical path, excess light is steered to other image areas, increasing the maximum image brightness there. Light re-allocation or recycling in projectors not only increases the contrast of the device but also reduces heat and cooling power consumption, because the produced light is steered out of the physical enclosure rather than dumped inside. This project is a great example of how a similar functionality, in this case high dynamic range imaging, can require very different optical designs and corresponding processing depending on whether the device is a projector or a TV. In a TV, dual modulation may be a great idea because one can mostly control where light is emitted, whereas a projector usually does not have that luxury, so reallocation may be a much better option.
Light transport does not always have to be inverted; it can also be transposed. Pradeep Sen and colleagues have shown that the transpose of the light transport matrix can be used to generate dual images, which show the scene from the point of view of the projector, illuminated by a light source at the point of view of the camera. This allows for novel view generation, even unveiling parts of the scene that were visible only to the projector and never to the camera. Relighting a complex scene with novel illumination patterns, as seen in these images, is another application.
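The transpose trick above amounts to a single matrix operation. The sketch below uses a random toy transport matrix; the sizes and the name `virtual_light` are illustrative assumptions, not part of the dual-photography work itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cam, n_proj = 4, 6
# T[i, j]: light reaching camera pixel i when projector pixel j is lit
T = rng.random((n_cam, n_proj))

# Primal image: camera view of the scene under a projector pattern.
pattern = rng.random(n_proj)
primal = T @ pattern            # one value per camera pixel

# Dual image: by Helmholtz reciprocity, swapping the roles of projector
# and camera corresponds to transposing T. 'virtual_light' is a light
# source placed at the camera; the dual image is "seen" by the projector.
virtual_light = rng.random(n_cam)
dual = T.T @ virtual_light      # one value per projector pixel
```

The point of the real technique is that `T` can be measured with structured projector patterns, after which the dual view falls out of the transpose for free, with no second camera at the projector's position.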
Arrays of projectors, here simulated with a single device illuminating an array of mirrors, combined with random illumination patterns can create a large synthetic-aperture projector. As with cameras, a large projector aperture creates a very shallow depth of field. In this particular application, individual depth slices of the scene can be illuminated selectively, as seen for the David statue on the right.
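The shallow depth of field of such a synthetic aperture can be illustrated with a 1D shift-and-add model: each projector pre-shifts the pattern so that all copies align at the focal depth, while at other depths the shifts disagree and the pattern washes out. The linear parallax model and the function name are assumptions for this sketch, not the actual calibration used with the mirror array.

```python
import numpy as np

def synthetic_aperture_illum(target, depths, focus_depth, offsets):
    """Average the contributions of many projector positions (offsets),
    each pre-shifted so the target pattern aligns at focus_depth.
    1D toy model: parallax shift = offset * (depth - focus_depth)."""
    offsets = list(offsets)
    out = {d: np.zeros(len(target)) for d in depths}
    for off in offsets:
        for d in depths:
            shift = int(round(off * (d - focus_depth)))
            out[d] += np.roll(target, shift)
    for d in depths:
        out[d] /= len(offsets)
    return out

target = np.zeros(64)
target[32] = 1.0   # a single bright stripe to be projected
res = synthetic_aperture_illum(target, depths=[0.0, 1.0],
                               focus_depth=0.0, offsets=range(-8, 9))
# res[0.0] keeps the sharp stripe; res[1.0] is smeared across 17 pixels
```

The wider the range of offsets (the larger the synthetic aperture), the faster out-of-focus depths blur out, which is what allows a single depth slice to be illuminated selectively.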
The display acts as the inverse of a Shack-Hartmann sensor, which is often used in astronomical imaging to capture an incident wavefront. In this application, the user changes the patterns until they align in the perceived image; the displayed pattern itself is predistorted so as to compensate for the refractive errors of the eye.
A very similar-looking smartphone clip-on was presented last year at Siggraph with a different purpose: measuring cataracts. In this case, the display acts like a radar, scanning a pattern over the viewer's pupil. The observer simply clicks a few buttons and gets back a detailed map of the cataracts on the lens.
Finally, a new tailored display is presented at this year's Siggraph by the same authors. This is a special light field display that can show a sharp image to an observer who is not wearing the glasses he would otherwise need. It displays the light field corresponding to a 2D image moved within the focus range of the observer.