This document discusses compressive displays and related technologies for reducing the bandwidth requirements of multi-view and light field displays. It describes several technologies including layered 3D displays, polarization field displays, and high-rank 3D displays that decompose 4D light fields into lower dimensional representations. It also discusses using mathematical techniques like non-negative matrix factorization for further compressing display data. The document promotes open collaboration through the proposed Compressive Display Consortium to advance next generation displays.
We have built a camera that can look around corners and beyond the line of sight. The camera uses light that travels from the object to the camera indirectly, by reflecting off walls or other obstacles, to reconstruct a 3D shape.
Though revolutionary in many ways, digital photography is essentially electronically implemented film photography. By contrast, computational photography exploits plentiful low-cost computing and memory, new kinds of digitally enabled sensors, optics, probes, smart lighting, and communication to capture information far beyond just a simple set of pixels. It promises a richer, even a multilayered, visual experience that may include depth, fused photo-video representations, or multispectral imagery. Professor Raskar will discuss and demonstrate advances he is working on in the areas of generalized optics, sensors, illumination methods, processing, and display, and describe how computational photography will enable us to create images that break from traditional constraints to retain more fully our fondest and most important memories, to keep personalized records of our lives, and to extend both the archival and the artistic possibilities of photography.
We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, unlike existing light field cameras, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
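To make the sparsity-constrained recovery step concrete, here is a minimal, self-contained sketch of the general idea, not the paper's actual Angle Sensitive Pixel pipeline: it recovers a sparse signal from far fewer random measurements than unknowns using ISTA (iterative soft-thresholding). The sensing matrix, dimensions, and sparsity level are illustrative assumptions.

```python
# Minimal sketch: sparsity-constrained recovery from few measurements (ISTA).
# Illustrative only -- dimensions, sensing matrix, and sparsity are assumptions,
# not the Angle Sensitive Pixel pipeline from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                               # unknowns, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)       # stand-in for the camera's projection
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                     # a single "sensor image" of measurements

# ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```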
Computational Displays in 4D, 6D, 8D
We have explored how light propagates from thin elements into a viewing volume, for both automultiscopic displays and holograms. In particular, devices that are typically associated with geometric optics, like parallax barriers, differ in treatment from those that obey physical optics, like holograms. However, the two concepts are often used to achieve the same effect of capturing or displaying a combination of spatial and angular information. Our work connects the two approaches under a general framework based in ray space, from which insights into the applications and limitations of both parallax-based and holography-based systems can be drawn.
Both parallax barrier systems and practical holographic displays are limited in that they provide only horizontal parallax. Mathematically, this is equivalent to saying that they can always be expressed as a rank-1 matrix (i.e., a matrix in which all the columns are linearly related). Knowledge of this mathematical limitation has helped us to explore the space of possibilities and extend the capabilities of current display types. In particular, we have designed a display that uses two LCD panels and an optimization algorithm to produce a content-adaptive automultiscopic display (SIGGRAPH Asia 2010).
(Joint work with R Horstmeyer, Se Baek Oh, George Barbastathis, Doug Lanman, Matt Hirsch and Yunhee Kim) http://cameraculture.media.mit.edu
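As a toy illustration of the rank argument above (a sketch with assumed sizes, not the HR3D implementation): the light field emitted by a static two-layer multiplicative display is the outer product of the front and rear layer patterns, hence rank 1, while time-multiplexing T frame pairs raises the achievable rank to at most T.

```python
# Toy illustration of the rank-1 limit of a static two-layer (parallax barrier) display.
# Sizes and patterns are arbitrary assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
front = rng.random(32)        # transmittance of front-layer pixels (one 1D slice)
rear = rng.random(32)         # transmittance of rear-layer pixels

# Each emitted ray passes through one front pixel and one rear pixel,
# so the ray (light field) matrix of a single frame is an outer product.
L_single_frame = np.outer(front, rear)
print(np.linalg.matrix_rank(L_single_frame))      # -> 1

# Temporal multiplexing: the eye averages T frame pairs, so rank grows to at most T.
T = 3
L_time_avg = sum(np.outer(rng.random(32), rng.random(32)) for _ in range(T)) / T
print(np.linalg.matrix_rank(L_time_avg))          # -> 3 (with probability 1)
```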
In other work we have developed a 6D optical system that responds to changes in viewpoint as well as changes in surrounding light. Our lenticular array alignment allows us to realize such a system as a passive setup, eliminating the need for electrical components. Unlike traditional 2D flat displays, our 6D displays discretize the incident light field and modulate 2D patterns in order to produce super-realistic (2D) images. By casting light at variable intensities and angles onto our 6D displays, we can produce multiple images as well as store more information on a single 2D film (SIGGRAPH 2008).
Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent inventions include transient imaging to look around a corner, next generation CAT-Scan machine, imperceptible markers for motion capture (Prakash), long distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra) and new theoretical models to augment light fields (ALF) to represent wave phenomena.
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the DARPA Young Faculty Award. He holds over 40 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on Computational Photography. http://raskar.info
A compressive approach to light field synthesis with projection devices. We propose a novel, passive screen design that is combined with high-speed light field projection and nonnegative light field factorization. We demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen.
Millions of people worldwide need glasses or contact lenses to see or read properly. We introduce a computational display technology that predistorts the presented content for an observer, so that the target image is perceived without the need for eyewear. We demonstrate a low-cost prototype that can correct myopia, hyperopia, astigmatism, and even higher-order aberrations that are difficult to correct with glasses.
SIGGRAPH 2014 Course on Computational Cameras and Displays (part 2), Matthew O'Toole
Recent advances in both computational photography and displays have given rise to a new generation of computational devices. Computational cameras and displays provide a visual experience that goes beyond the capabilities of traditional systems by adding computational power to optics, lights, and sensors. These devices are breaking new ground in the consumer market, including lightfield cameras that redefine our understanding of pictures (Lytro), displays for visualizing 3D/4D content without special eyewear (Nintendo 3DS), motion-sensing devices that use light coded in space or time to detect motion and position (Kinect, Leap Motion), and a movement toward ubiquitous computing with wearable cameras and displays (Google Glass).
This short (1.5 hour) course serves as an introduction to the key ideas and an overview of the latest work in computational cameras, displays, and light transport.
Inspired by Wheatstone’s original stereoscope and augmenting it with modern factored light field synthesis, we present a new near-eye display technology that supports focus cues. These cues are critical for mitigating visual discomfort experienced in commercially-available head mounted displays and providing comfortable, long-term immersive experiences.
Lytro Light Field Camera: from scientific research to a $50-million business, Weili Shi
I prepared these slides at a time when I had somewhat lost my way. Lytro and its story make one willing to believe again in the brave, crazy ones who want to change the world.
This presentation covers stereoscopic imaging in detail: its history, an introduction, how it works, 3D viewers, 3D cameras, future scope, advantages, and disadvantages.
Tailored Displays to Compensate for Visual Aberrations - SIGGRAPH Presentation, Vitor Pamplona
Can we create a display that adapts itself to improve one's eyesight? Top figure compares the view of a 2.5-diopter farsighted individual in regular and tailored displays. We use currently available inexpensive technologies to warp light fields to compensate for refractive errors and scattering sites in the eye.
inFORM: Interacting with a Dynamic Shape Display, Hari Teja Joshi
ABSTRACT
Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
HR3D: Content Adaptive Parallax Barriers, SIGGRAPH Asia 2010 Technical Paper presentation, presented by Douglas Lanman (http://web.media.mit.edu/~dlanman). Please see the project page for more details: http://web.media.mit.edu/~mhirsch/hr3d
This is a project in the Camera Culture group (http://cameraculture.media.mit.edu) at the MIT Media Lab, led by Professor Ramesh Raskar (http://web.media.mit.edu/~raskar).
MIT Program on Information Science Talk -- Ophir Frieder on Searching in Harsh Environments, Micah Altman
Ophir Frieder, who holds the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing at Georgetown University, gave this talk on Searching in Harsh Environments as part of the Program on Information Science Brown Bag Series.
In the talk, illustrated by the slides below, Ophir rebuts the myth that "Google has solved search" and discusses the challenges of searching for complex objects, through hidden collections, and in harsh environments. For more see: http://informatics.mit.edu/blg
Covers image restoration techniques such as denoising, deblurring, and super-resolution for 3D images and models.
From classical computer vision techniques to contemporary deep-learning-based processing for ordered and unordered point clouds, depth maps, and meshes.
A maskless exposure device for rapid photolithographic prototyping of sensor ..., Dhanesh Rajan
A very cost-effective maskless exposure device (MED) for fast lithographic prototyping of various layouts is presented. The device is assembled from a digital light processing (DLP) projector, an optical microscope, alignment stages, and a web camera. Layouts created on a computer screen can be easily transferred to substrate surfaces without using expensive photomasks, and the process can be repeated by introducing new drawings on the screen. Components are tuned for a constant exposure area, and a resolution of around 20 μm is currently possible without any reduction lenses. The MED has been used successfully to pattern the surfaces of silicon, glass, metal, and other materials. The device can be assembled from commercially available components at minimal cost and can be used effectively in fast prototyping applications such as MEMS, microfluidics, and the patterning of sensor and electrode structures.
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ..., PetteriTeikariPhD
A shallow literature analysis of recent trends in computational ophthalmic imaging, with a focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Keywords: Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections
A computational camera attempts to digitally capture the essence of visual information by exploiting the synergistic combination of task-specific optics, illumination, sensors and processing. We will discuss and play with thermal cameras, multi-spectral cameras, high-speed and 3D range-sensing cameras, and camera arrays. We will learn about opportunities in scientific and medical imaging, mobile-phone based photography, cameras for HCI, and sensors mimicking animal eyes.
We will learn about the complete camera pipeline. In a series of hands-on projects we will build physical imaging prototypes and understand how each stage of the imaging process can be manipulated.
We will learn about modern methods for capturing and sharing visual information. If novel cameras can be designed to sample light in radically new ways, then rich and useful forms of visual information may be recorded, beyond those present in traditional photographs. Furthermore, if computational processes can be made aware of these novel imaging models, then the scene can be analyzed in higher dimensions and novel aesthetic renderings of the visual information can be synthesized.
In this course we will study this emerging multi-disciplinary field -- one which is at the intersection of signal processing, applied optics, computer graphics and vision, electronics, art, and online sharing through social networks. We will examine whether such innovative camera-like sensors can overcome the tough problems in scene understanding and generate insightful awareness. In addition, we will develop new algorithms to exploit unusual optics, programmable wavelength control, and femtosecond-accurate photon counting to decompose the sensed values into perceptually critical elements.
These slides use concepts from my (Jeff Funk) course entitled Analyzing Hi-Tech Opportunities to analyze how light field technology is becoming economically feasible for an increasing number of applications. Light field cameras record the full light field of a scene instead of a single 2D projection. This capability enables users to change the focus of pictures after they have been taken and to record 3D data more easily. These features are becoming economically feasible because of rapid improvements in camera chips and micro-lens arrays (an example of micro-electro-mechanical systems, MEMS). They offer alternative ways to do 3D sensing for automated vehicles and augmented reality and can enable faster data collection with telescopes.
ACM SIGGRAPH is delighted to present the 2017 Computer Graphics Achievement Award to Ramesh Raskar in recognition of his pioneering contributions to the fields of computational photography and light transport and for applying these technologies for social impact.
https://www.siggraph.org/about/awards/2017-cg-achievement-award-ramesh-raskar/
I recently gave a talk at ICCP 2015 and argued that we should stop working on coded apertures for focus effects (thus negating my own team's work in this area). I also spoke about the lost decade of computational photography and how we have wasted too many years working on the wrong problems.
The way back to normal starts here
We all want to get out of the house. To reopen the economy. To feel secure again. Safe Paths builds tools that help communities flatten the curve of COVID-19 — together. CovidSafePaths.org
Video of the talk at https://www.youtube.com/watch?v=x9TCYuMUnco
Friction in data sharing is a major challenge for large-scale machine learning. Emerging technologies in domains such as biomedicine, health, and finance benefit from distributed deep learning methods that allow multiple entities to train a deep neural network without requiring data sharing or resource aggregation in a single place. The talk will explore the main sources of data friction that make the capture, analysis, and deployment of ML difficult. The challenges include siloed and unstructured data, privacy and regulation of data sharing, and incentive models for data-transparent ecosystems. The talk will compare the distributed deep learning methods of federated learning and split learning. Our team at MIT has pioneered a range of approaches including automated machine learning (AutoML), privacy-preserving machine learning (PrivateML), and intrinsic as well as extrinsic data valuation (Data Markets). One of the programs at MIT aims to create a standard for data-transparent ecosystems that can simultaneously address the privacy and utility of data.
Bio: Ramesh Raskar is an Associate Professor at MIT Media Lab and directs the Camera Culture research group. His focus is on AI and Imaging for health and sustainability. These interests span research in physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X], the Apple Privacy Team and Facebook, and has co-founded/advised several companies. Project page: https://splitlearning.github.io/
In his recent role at Facebook, he launched and led innovation teams in Digital Health, Health-tech, Satellite Imaging, TV and Bluetooth bandwidth for Connectivity, VR/AR and ‘Emerging Worlds’ initiative for FB.
At MIT, his co-inventions include a camera to see around corners, femto-photography, automated machine learning (AutoML), private ML, low-cost eye care devices (Netra, Catra, EyeSelfie), a novel CAT-Scan machine, motion capture (Prakash), long distance barcodes (Bokode), 3D interaction displays (BiDi screen), new theoretical models to augment light fields (ALF) to represent wave phenomena, and algebraic rank constraints for 3D displays (HR3D).
Video: https://www.youtube.com/watch?v=2jq_5FaQbTg
After many rejections, the project of a lifetime by Ramesh Raskar (associate professor at MIT) finally comes to life.
How did he manage to find his way out of this jungle of misleading signs and career traps? By becoming a pathfinder: always driving toward the goal, but also critical and ready to adjust his strategy to reach it.
An incredible life lesson that he gave us in this talk at the last FAIL at Massachusetts Institute of Technology (MIT).
https://www.youtube.com/watch?v=2jq_5FaQbTg&feature=youtu.be&fbclid=IwAR3aAo7SIiCuHY_6ICTjXLOpNBUBwEEJUq72pD-V8N2nX2cWaVIxtPM1gBM
Ramesh Raskar is an Associate Professor at MIT Media Lab and directs the Camera Culture research group. His focus is on AI and Imaging for health and sustainability. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., automating machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X] and Facebook and co-founded/advised several companies.
http://raskar.info or CameraCulture Wiki Page
How to come up with ideas: the Idea Hexagon
How to write a paper
How to give a talk
Open research problems
How to decide merit of a project
How to attend a conference, brainstorm
Strive for Five
Before 5 teams: be early, let others do details
Beyond 5 years: what no one is thinking about
Within 5 steps of human impact: relevance
Beyond 5 minutes of instruction: deep, iterative, participatory
Fusing 5+ expertise: fun, a barrier for others
Associate Professor, MIT Media Lab
Ramesh Raskar is founder of the Camera Culture research group at the Massachusetts Institute of Technology (MIT) Media Lab and associate professor of Media Arts and Sciences at MIT. Raskar is the co-inventor of radical imaging solutions including femto-photography, an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions for the developing world and a camera that allows users to read pages of a book without opening the cover. He is a pioneer in the fields of imaging, computer vision and machine learning.
Raskar’s focus is on building interfaces between social systems and cyber-physical systems. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., tools to enable keeping data private in distributed machine learning applications) and global (e.g., geomaps, autonomous mobility) domains. Recent inventions by Raskar’s team include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture, long-distance barcodes, touch + hover 3D interaction displays and new theoretical models to augment light fields to represent wave phenomena.
Raskar has dedicated his career to linking the best of the academic and entrepreneurial worlds with young engineers, igniting a passion for impact inventing. Raskar seeks to catalyze change on a massive scale by launching platforms that empower inventors to create solutions to improve lives globally.
Raskar has received the Lemelson Award, ACM SIGGRAPH Achievement Award, DARPA Young Faculty Award, Alfred P. Sloan Research Fellowship, TR100 Award from MIT Technology Review and Global Indus Technovator Award. He has worked on special research projects at Google [X] and Facebook and co-founded and advised several companies. He holds more than 80 US patents.
Making the Invisible Visible: Within Our Bodies, the World Around Us, and Beyond
We need to transition from analysis to synthesis when it comes to large scale image based studies of satellite or street level images.
Large-scale, image-based studies have the ability to unlock human potential and address some of the most important societal problems. The question really is, are we going to do that through analysis, or are we going to step up and actually start doing synthesis? Are we only going to study and observe, or are we going to go out and actually make an impact on society?
Can global image repositories help UN's sustainable development goals (SDGs)? help us understand the social determinants of health? Satellite imagery, Google street view and user contributed photos from a global image repository are being used for large scale image-based studies, visual census and sentiment analysis [Ermon][http://StreetScore.media.mit.edu]. But we need to go beyond simply relying on big data for investigating social questions via remote analysis. We need to transition from analysis to synthesis. For deployable social solutions, we need to consider the full stack of physical devices, organizational interests and sector-specific resources.
Large image-based studies allow us to predict poverty from daytime and nighttime satellite imagery, which can inform critical decisions for aid and development planning. In project ‘StreetScore’, our group has shown that semantic analysis of street-level imagery such as Google Street View can provide rich insights into urban perception; our recent project ‘StreetChange’ shows the benefits of time-series data in driving these insights (http://streetchange.media.mit.edu).
We have seen some amazing work, and you'll hear from Stefano about poverty mapping, and from previous collaborators about population density and crop maps. There has been fantastic progress in using global imagery taken from satellites or drones, and street-level imagery is also very widely available, either highly structured like Google Street View or from user-contributed photos. Nikhil and others in my group have been working on sentiment analysis of this imagery, in this case analysis of the perceived safety of Google Street View and main-street images, to create citywide maps of perceived safety that can be used by city and urban planners. Which is great. But coming back to analysis versus synthesis opportunities, I'm going to give you a flavor of one of the projects we worked on, which is street addresses.
Project page: https://splitlearning.github.io/
Papers: https://arxiv.org/search/cs?searchtype=author&query=Raskar
Video: https://www.youtube.com/watch?v=8GtJ1bWHZvg
Split learning for health: Distributed deep learning without sharing raw patient data: https://arxiv.org/pdf/1812.00564.pdf
Distributed learning of deep neural network over multiple agents
https://www.sciencedirect.com/science/article/pii/S1084804518301590
Otkrist Gupta, Ramesh Raskar,
In domains such as health care and finance, shortage of labeled data and computational resources is a critical issue while developing machine learning algorithms. To address the issue of labeled data scarcity in training and deployment of neural network-based systems, we propose a new technique to train deep neural networks over several data sources. Our method allows for deep neural networks to be trained using data from multiple entities in a distributed fashion. We evaluate our algorithm on existing datasets and show that it obtains performance which is similar to a regular neural network trained on a single machine. We further extend it to incorporate semi-supervised learning when training with few labeled samples, and analyze any security concerns that may arise. Our algorithm paves the way for distributed training of deep neural networks in data sensitive applications when raw data may not be shared directly.
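A minimal sketch of the split-learning idea described above, written in PyTorch; the network architecture, cut point, optimizer settings, and random data are illustrative assumptions rather than the paper's configuration. The key point is that only the cut-layer activations (and their gradients) would need to cross the client/server boundary.

```python
# Minimal split-learning sketch (PyTorch): the client owns the raw data and the lower
# layers; only activations at the cut layer (and their gradients) cross the boundary.
# Architecture, cut point, and data are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
client_net = nn.Sequential(nn.Linear(20, 64), nn.ReLU())                    # runs where the data lives
server_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # runs at the server
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, 20)                  # raw data: never leaves the client
y = torch.randint(0, 2, (128,))           # labels (sent to the server in the simplest variant)

for _ in range(50):
    opt_c.zero_grad(); opt_s.zero_grad()
    smashed = client_net(x)               # "smashed data": only this tensor is transmitted
    out = server_net(smashed)             # server completes the forward pass
    loss = loss_fn(out, y)
    loss.backward()                       # here autograd spans both halves in one process;
                                          # in a real deployment only grad(smashed) is sent back
    opt_s.step(); opt_c.step()

print("final loss:", loss.item())
```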
What is SIGGRAPH NEXT?
By Juliet Fiss
What will be the next big thing at SIGGRAPH, and how can the SIGGRAPH community contribute in an impactful way to fields outside of traditional computer graphics? SIGGRAPH NEXT at SIGGRAPH 2015 explored these questions. In this new addition to the SIGGRAPH program, an eclectic set of speakers gave TED-style talks and posed grand challenges to the SIGGRAPH community. In this blog post, Professor Ramesh Raskar of the MIT Media Lab introduces SIGGRAPH NEXT and outlines his vision for it.
What will be the next big thing at SIGGRAPH?
The SIGGRAPH community has a set of hammers that it uses to solve problems: geometry processing, rendering, animation, and imaging. What will be the next hammer, the next major field of study, to appear at SIGGRAPH? Let’s examine where our research ideas come from. Often, advances in machine learning, optimization, signal processing, and optics forge our hammers. Our selection of hammer also depends on the nails we see. The most common application areas of computer graphics currently include computer-aided design, movies, games, and photography.
We often ask: “Does this work contribute to SIGGRAPH techniques?”
We should also ask, “Does this work contribute SIGGRAPH techniques to _____?”
When we answer the challenges posed by these traditional application areas of computer graphics, we are “drinking our own champagne.” We have made amazing progress in these application areas, and we should celebrate! SIGGRAPH NEXT is about finding new varieties of champagne; for that, we need new varieties of grapes. We should invite others from nontraditional and emerging application areas to enjoy our champagne with us, and they will become part of our community. First, we can expand our work in existing areas like mobile, user interaction, virtual reality, fabrication, and new types of cameras. We can also expand into emerging areas such as healthcare, energy, education, entrepreneurship, materials, tissue fabrication, and social media. What’s next?
Professor Raskar highlights three top areas where we can make an impact. One big take-home message is that many of these applications involve biology: bio is the new digital, and it will affect us ubiquitously.
'Media' is the plural of 'medium'. The medium for the impact of digital technologies at the MIT Media Lab can be photons, electrons, neurons, atoms, cells, musical notes and more.
Over the last 40 years, computing has moved from the processor to the network, to the social, and increasingly to the sensory.
MIT Media Lab works at the intersection of computing and such media for human-centric technologies.
Ramesh Raskar
MIT Media Lab
Ramesh Raskar is an Associate Professor at the MIT Media Lab. He joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent projects and inventions include transient imaging to look around a corner, a next generation CAT-Scan machine, imperceptible markers for motion capture (Prakash), long distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra, Catra), new theoretical models to augment light fields (ALF) to represent wave phenomena, and algebraic rank constraints for 3D displays (HR3D).
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the DARPA Young Faculty Award. Other awards include a Marr Prize honorable mention (2009); the LAUNCH Health Innovation Award, presented by NASA, USAID, the US State Department and NIKE (2010); and the Vodafone Wireless Innovation Project Award, first place (2011). He holds over 50 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on Computational Photography.
2. Slow Glass: Time Shifted Display
Light of Other Days by Bob Shaw
http://www.fantasticfiction.co.uk/s/bob-shaw/other-days-other-eyes.htm http://baens-universe.com/articles/otherdays
3. Shift Glass
Space Shifting (4D)
Angle Shifting (4D)
Time Shifting (4D, t)
Illumination Shifting (4D)
4. Motivation: Glasses-Free 3D Displays
Eliminating eyewear and moving parts while preserving depth cues [Favalora et al.]
Expanding FoV and DoF for glasses-free displays; thin holographic displays, developing or avoiding high-resolution SLMs [Michael Bove et al.]
7. Opportunities: New Hardware + New Math
Emerging displays: multilayer, high frame rate, directional backlighting
Compression and embedded processing: non-negative matrix factorization (NMF), non-negative tensor factorization (NTF)
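To make the role of non-negative matrix factorization concrete, the sketch below (with an assumed matrix size and a random non-negative target, not the group's actual solver) factors a light field matrix into T non-negative frame pairs for a two-layer, time-multiplexed display using the standard Lee-Seung multiplicative updates.

```python
# Minimal NMF sketch for a two-layer, time-multiplexed display:
# approximate a non-negative light field matrix L by F @ G, where the columns of F
# are front-layer frames and the rows of G are rear-layer frames shown in sequence.
# Sizes, the random target, and the iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
L = rng.random((64, 64))       # target light field matrix (views x pixels), non-negative
T = 4                          # number of time-multiplexed frame pairs (the achievable rank)

F = rng.random((64, T)) + 1e-3
G = rng.random((T, 64)) + 1e-3
eps = 1e-9
for _ in range(300):           # Lee & Seung multiplicative updates for squared error
    G *= (F.T @ L) / (F.T @ F @ G + eps)
    F *= (L @ G.T) / (F @ G @ G.T + eps)

print("relative error:", np.linalg.norm(L - F @ G) / np.linalg.norm(L))
```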
8. Camera Culture
Creating new ways to capture and share visual information
MIT Media Lab
Ramesh Raskar
http://cameraculture.info
Facebook.com/cameraculture
Computational Photography
1. Light-Field Camera: A new camera design exploiting the fundamental dictionary of light fields for single-capture acquisition of light fields with full-resolution refocusing effects.
2. Color Primaries: A new camera design with switchable color filter arrays for optimal color fidelity and picture quality based on scene geometry, color and illumination.
3. Flutter-Shutter: A camera that codes the exposure time with a binary pseudo-sequence to de-convolve and remove motion blur in textured backgrounds and partial occluders.
4. Compressive Capture: We analyze the gamut of visual signals from low-dimensional images to light fields and propose non-adaptive projections for efficient sparsity-exploiting reconstruction.

Femtosecond Imaging
1. Looking around corners: Using short laser pulses and a fast detector, we aim to build a device that can look around corners, with no imaging device in the line of sight, using time-resolved transient imaging.
2. Reflectance Recovery: We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of objects 'in the wild' from a single viewpoint, over relatively long distances and without encircling equipment.
3. Trillion Frames per Second Imaging: A camera fast enough to capture light pulses moving through objects. We can use such a camera to understand reflectance, absorption and scattering properties of materials.

3D Displays
1. Tensor Display: A family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting.
2. Layered 3D: Tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight.
3. Glasses-free 3D HDTV: Light field displays with increased brightness and refresh rate, built by stacking a pair of modified LCD panels and exploiting the rank constraint of 3D displays.
4. BiDi Screen: A thin, depth-sensing LCD for 3D interaction using light fields, which supports both 2D multi-touch and unencumbered 3D gestures.
5. Living Windows 6D Display: A completely passive display that responds to changes in viewpoint and changes in incident light conditions.

May 2012
9. Health & Wellness, Human Computer Interaction, Visual Social Computing

Health & Wellness
1. Retinal Imaging: With simplified optics and clever illumination, we visualize images of the retina in a standalone device easily operated by the end user.
2. NETRA/CATRA: Low-cost cell-phone attachments that measure eye-glass prescription and cataract information from the eye.
3. Cellphone Microscopy: A platform for computational microscopy and remote healthcare.
4. High-speed Tomography: A compact, fast CAT scan machine using no mechanical moving parts or synchronization.
5. Shield Fields: 3D reconstruction of objects from a single-shot photo using spatial heterodyning.
6. Second Skin: Using 3D motion tracking with real-time vibrotactile feedback to aid the correction of movement and position errors and improve motor learning.

Human Computer Interaction
1. Bokode: A low-cost, passive optical design so that bar codes can be shrunk to smaller than 3 mm and read by ordinary cameras several meters away.
2. Specklesense: A set of motion-sensing configurations based on laser speckle sensing. The underlying principles allow interactions to be fast, precise, extremely compact, and low cost.
3. Sound Around: Soundaround is a multi-viewer interactive audio system, designed to be integrated into multi-view displays, presenting localized audio/video channels with no need for glasses or headphones.

Visual Social Computing
1. Photocloud: A near real-time system for interactively exploring a collectively captured moment without explicit 3D reconstruction.
2. Vision Blocks: An on-demand, in-browser, customizable, computer-vision application-building platform for the masses. Without any prior programming experience, users can create and share computer vision applications.
3. LensChat: LensChat allows users to share mutual photos with friends or borrow the perspective and abilities of many cameras.

Visit us online at Cameraculture.info and fb.com/cameraculture
Light Propagation Theory and Fourier Optics
1. Augmented Light Fields: Expands light field representations to describe phase and diffraction effects by using the Wigner Distribution Function.
2. Hologram vs. Parallax Barrier: Defines connections between parallax barrier displays and holographic displays by analyzing their operations and limitations in phase space.
3. Ray-Based Diffraction Model: A simplified diffraction model for computer graphics applications.
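As a small numerical illustration of the Wigner Distribution Function underlying the augmented light field (a sketch with an assumed 1D field and grid, not the papers' code): the WDF is the Fourier transform, over the shift variable, of the field's local autocorrelation f(x + ξ/2) f*(x − ξ/2), and it reduces to a ray-space light field in the geometric-optics limit.

```python
# Minimal sketch: discrete (pseudo-)Wigner distribution of a 1D complex field.
# The field (a chirped Gaussian) and the grid are assumptions chosen for illustration.
import numpy as np

N = 256
x = np.linspace(-4, 4, N)
f = np.exp(-x**2) * np.exp(1j * 2.0 * x**2)     # Gaussian amplitude with a quadratic (chirp) phase

# W[i, :] is the DFT over the half-shift index k of f(x_i + k*dx) * conj(f(x_i - k*dx)).
W = np.zeros((N, N))
for i in range(N):
    corr = np.zeros(N, dtype=complex)
    for k in range(-N // 2, N // 2):
        ip, im = i + k, i - k
        if 0 <= ip < N and 0 <= im < N:
            corr[k % N] = f[ip] * np.conj(f[im])
    W[i] = np.fft.fft(corr).real                # the WDF is real; edge asymmetries are discarded

# Sanity check: summing the WDF over the frequency axis recovers |f(x)|^2 (up to a factor of N).
print(np.allclose(W.sum(axis=1) / N, np.abs(f) ** 2))
```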
Postdoctoral Researchers: Doug Lanman, Gordon Wetzstein, Alex Olwal, Christopher Barsi
Research Assistants: Matthew Hirsch, Otkrist Gupta, Nikhil Naik, Jason Boggess, Everett Lawson, Aydın Arpa, Kshitij Marwah
Visiting Researchers & Students: Di Wu, Daryl Lim
17. MIT Media Lab Camera Culture, EyeNetra.com
NETRA: Refractive Error on a Mobile Phone
SIGGRAPH 2010
Vitor Pamplona, Ankit Mohan, Manuel Oliveira, Ramesh Raskar
19. Camera Culture: Compressive Displays Team
Gordon Wetzstein (Postdoctoral Associate), Matthew Hirsch (Graduate Student), Douglas Lanman (Postdoctoral Associate)
Wolfgang Heidrich, Professor, University of British Columbia
Yunhee Kim, Postdoctoral Fellow, MIT Media Lab
20. 3D Display: Light and Rank Deficient
(Figure: a parallax barrier as the front layer and an LCD display as the back layer.)
31. Is a hologram just another ray-based light field?
Can a hologram create any intensity distribution in 3D?
Why does a hologram create a “wavefront”, but parallax barrier does not?
Why does a hologram create accommodation cues?
What are the effective resolution and depth of field for holograms vs. barriers?
33. Augmented Light Field
The Wigner Distribution Function (WDF) is wave-optics based: rigorous but cumbersome. The traditional light field is ray-optics based: simple and powerful. The Augmented Light Field sits between the two, handling interference and diffraction as well as interaction with optical elements.
Oh, Raskar, Barbastathis 2009: Augmented Light Field
37. Generalizing Parallax Barriers: Rank 1
(Figure: three mask-stack-over-light-box configurations: a conventional parallax barrier (masks 1-2), High-Rank 3D (HR3D) (masks 1-2, time-multiplexed), and Layered 3D / Polarization Fields (masks 1 through K).)
Parallax barriers use heuristic design: front mask with slits/pinholes, rear mask with interlaced views
High-Rank 3D (HR3D) considers dual-layer design with arbitrary opacity and temporal multiplexing
Layered 3D and Polarization Fields consider multi-layer designs without temporal multiplexing
38. Layered 3D: Multi-Layer Automultiscopic Displays
(Figure: a stack of light-attenuating masks, mask 1 through mask K, over a light box.)
39. Tomographic Light Field Synthesis
Image formation model (a volumetric attenuator between a backlight and a virtual plane, illustrated with a 2D light field):
$L(x, \theta) = I_0 \, e^{-\int_C \mu(r)\,dr}$
$\bar{l}(x, \theta) = \ln\!\left(\frac{L(x, \theta)}{I_0}\right) = -\int_C \mu(r)\,dr$
Discretizing the ray integrals gives $\bar{l} = -\mathbf{P}\alpha$.
Tomographic synthesis:
$\arg\min_{\alpha} \; \lVert \bar{l} + \mathbf{P}\alpha \rVert^2, \quad \text{subject to } \alpha \ge 0$
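The tomographic synthesis above is a non-negative least-squares problem. The sketch below solves a small instance with SciPy's NNLS routine; the problem sizes and the random 0/1 matrix P, standing in for the actual ray-voxel intersection geometry, are illustrative assumptions.

```python
# Minimal sketch of the tomographic synthesis step: solve
#   argmin_alpha || lbar + P @ alpha ||^2   subject to  alpha >= 0
# The sizes, the random "propagation" matrix P, and the target log light field lbar
# are illustrative assumptions; the real P encodes which voxels each ray crosses.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_rays, n_voxels = 500, 120
P = (rng.random((n_rays, n_voxels)) < 0.05).astype(float)   # sparse 0/1 ray-voxel incidence
mu_true = rng.random(n_voxels)                               # true attenuation per voxel
lbar = -P @ mu_true                                          # log light field: lbar = -P mu

alpha, residual = nnls(P, -lbar)                             # non-negative least squares
print("residual:", residual)
print("recovered attenuations are non-negative:", np.all(alpha >= 0))
```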
41. Multi-Layer Light Field Decomposition
(Figure: target 4D light field, its multi-layer decomposition, and the reconstructed views.)
42. Prototype Layered 3D Display
(Photos: transparency stack with acrylic spacers; the prototype in front of an LCD serving as the backlight source.)
44. Polarization Fields: four stacked liquid crystal panels between two crossed polarizers.
61. Camera Culture: Compressive Displays Team
Gordon Wetzstein (Postdoctoral Associate), Matthew Hirsch (Graduate Student), Douglas Lanman (Postdoctoral Associate)
Wolfgang Heidrich, Professor, University of British Columbia
Yunhee Kim, Postdoctoral Fellow, MIT Media Lab
62. Raskar, Lanman, Wetzstein, Hirsch MIT Media Lab http://cameraculture.info
Shift Glass
Capture: looking around corners (5D); Netra for optometry (4D).
Display: compressive displays; view- and lighting-aware (6D); rank-deficient multilayer (4D), via the light field factorization L = FG.
Analyze: augmented light field (WDF); 4D, 6D, 8D.
63. Raskar, Lanman, Wetzstein, Hirsch MIT Media Lab http://cameraculture.info
Layered 3D: www.layered3d.info
Polarization Fields: tinyurl.com/polarization-fields
High-Rank 3D (HR3D): www.hr3d.info
Slow Display: tinyurl.com/slow-display
6D Display: tinyurl.com/6d-display
BiDi Screen: www.bidiscreen.com
64. Compressive Display Research in Camera Culture
Ramesh Raskar, Douglas Lanman, Gordon Wetzstein, Matthew Hirsch
http://cameraculture.media.mit.edu/compressivedisplays