1. Ramesh Raskar is an associate professor at the MIT Media Lab researching computational photography.
2. Raskar discusses three levels of computational photography - epsilon, coded, and essence photography. Coded photography uses single or few snapshots but introduces reversible encoding of light through techniques like coded exposure and coded apertures.
3. Examples of coded photography techniques presented include flutter shutter motion deblurring, coded aperture defocus, optical heterodyning for lightfield or wavefront sensing, and using a coded glare mask. The goal is to create new imaging capabilities beyond what is possible with traditional cameras.
Slides from the presentation made to the Flash/Flex User Group in Wellington.
Introduction to the Kinect sensors and how to read their data with ActionScript.
Though revolutionary in many ways, digital photography is essentially electronically implemented film photography. By contrast, computational photography exploits plentiful low-cost computing and memory, new kinds of digitally enabled sensors, optics, probes, smart lighting, and communication to capture information far beyond just a simple set of pixels. It promises a richer, even a multilayered, visual experience that may include depth, fused photo-video representations, or multispectral imagery. Professor Raskar will discuss and demonstrate advances he is working on in the areas of generalized optics, sensors, illumination methods, processing, and display, and describe how computational photography will enable us to create images that break from traditional constraints to retain more fully our fondest and most important memories, to keep personalized records of our lives, and to extend both the archival and the artistic possibilities of photography.
Presentation from the Universidad de Granada on color changes in natural scenes caused by the interaction between light and the atmosphere, given during the HOIP 2010 workshops organized by TECNALIA's Information Systems and Interaction Unit.
More information at http://www.tecnalia.com/es/ict-european-software-institute/index.htm
HIVE: holographic immersive virtual environments
MetaZtron Vision laser projector applications
Provides patent licensing information and patent attorney contacts.
Technology and Diane Troyer background.
MetaSphere hubs with Z*Rama screens and ZELF labs.
Z*Rama: Dome, Cinerama, Planetarium, Performance screens
ZELF: Zone Enhanced Location Fusion labs – HIVE applications.
Themed Edutainment: Location Based Entertainment
MetaStar: Philanthropic Model for sustainable community.
MetaSite: turnkey, bottom-up HIVE community "holodeck playpen".
Innovation and Content tools for the local community.
Innovation and high end JOBS, JOBS, JOBS – new businesses.
STEAM TEAMS: putting the A (art) into STEM.
New forms of immersive Edutainment and Health care.
Bottom up security and immersive first responder training
Communities thrive.
What Third Generation Blogging Means To You (Compendium)
This presentation outlines how third generation business blogging is here to stay. It also discusses how blogging effectively communicates information, shows how blogging has evolved over time, and argues that it is a beneficial technological advance.
Advertisement for PVS-Studio - static code analysis for C and C++ (Andrey Karpov)
This document advertises the PVS-Studio static analyzer. It describes how using PVS-Studio reduces the number of errors in a C/C++/C++11 project's code and cuts the cost of testing, debugging, and maintaining the code. It gives a large number of examples of errors the analyzer has found in various open-source projects. The document describes PVS-Studio as of version 4.38, dated October 12, 2011, and consequently does not reflect the capabilities of later versions. To learn about the new capabilities, visit the product site <a>http://www.viva64.com</a> or look for an updated version of this article.
Keywords: Signal processing, Applied optics, Computer graphics and vision, Electronics, Art, and Online photo collections
A computational camera attempts to digitally capture the essence of visual information by exploiting the synergistic combination of task-specific optics, illumination, sensors and processing. We will discuss and play with thermal cameras, multi-spectral cameras, high-speed and 3D range-sensing cameras, and camera arrays. We will learn about opportunities in scientific and medical imaging, mobile-phone-based photography, cameras for HCI, and sensors mimicking animal eyes.
We will learn about the complete camera pipeline. In several hands-on projects we will build physical imaging prototypes and understand how each stage of the imaging process can be manipulated.
We will learn about modern methods for capturing and sharing visual information. If novel cameras can be designed to sample light in radically new ways, then rich and useful forms of visual information may be recorded -- beyond those present in traditional photographs. Furthermore, if computational processes can be made aware of these novel imaging models, then the scene can be analyzed in higher dimensions and novel aesthetic renderings of the visual information can be synthesized.
In this course we will study this emerging multi-disciplinary field -- one which is at the intersection of signal processing, applied optics, computer graphics and vision, electronics, art, and online sharing through social networks. We will examine whether such innovative camera-like sensors can overcome the tough problems in scene understanding and generate insightful awareness. In addition, we will develop new algorithms to exploit unusual optics, programmable wavelength control, and femtosecond-accurate photon counting to decompose the sensed values into perceptually critical elements.
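As a toy illustration of what "each stage of the imaging process" means, here is a minimal sketch of an image-formation pipeline (optics blur, sensor sampling with photon noise, display encoding). The stage functions and parameters are hypothetical, not from the course materials:

```python
import numpy as np

def optics_blur(scene, kernel_size=5):
    """Lens stage: convolve the scene with a separable box PSF (crude lens model)."""
    k = np.ones(kernel_size) / kernel_size
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def sensor_sample(irradiance, full_well=1000, rng=None):
    """Sensor stage: Poisson photon shot noise, then normalize and clip."""
    rng = rng or np.random.default_rng(0)
    photons = rng.poisson(irradiance * full_well)
    return np.clip(photons / full_well, 0.0, 1.0)

def tone_map(linear, gamma=2.2):
    """Display stage: simple gamma encoding of the linear sensor values."""
    return np.power(linear, 1.0 / gamma)

def pipeline(scene):
    return tone_map(sensor_sample(optics_blur(scene)))

scene = np.zeros((32, 32))
scene[8:24, 8:24] = 0.8          # a bright square as the "scene"
image = pipeline(scene)
```

Each stage is an independent function, so any one of them can be swapped out (a coded-aperture PSF, a different noise model, a different tone curve) to see its effect on the final picture.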
How to do research, Idea Hexagon, Rank and Sparsity in imaging problems, Looking around corners, compressive sensing of periodic phenomena, 3D displays, fast computation
We have built a camera that can look around corners and beyond the line of sight. The camera uses light that travels from the object to the camera indirectly, by reflecting off walls or other obstacles, to reconstruct a 3D shape.
If you are inspired by an idea 'X', how will you come up with the neXt idea? This presentation shows 6 different ways you can exercise your mind in an attempt to develop the next cool idea.
http://raskar.info
http://cameraculture.info
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing (Electronic Arts / DICE)
In this presentation part of the "Introduction to DirectX Raytracing" course, Colin Barré-Brisebois of SEED discusses some of the challenges the team had to go through when going from raster to real-time raytracing for Project PICA PICA.
We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored, Angle Sensitive Pixels and advanced reconstruction algorithms, we show that—contrary to light field cameras today—our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
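The "same measurements, two reconstructions" idea can be sketched with a generic compressed-sensing toy (not the actual Angle Sensitive Pixel model): one underdetermined measurement vector is recovered either by fast linear least squares or by sparsity-constrained optimization (here ISTA). All matrices and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Underdetermined system: m sensor measurements of an n-dim sparse signal.
n, m, k = 128, 48, 6
A = rng.normal(size=(m, n)) / np.sqrt(m)   # generic mixing matrix (stand-in for ASP optics)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse scene
y = A @ x_true

# Fast linear reconstruction: minimum-norm least squares.
x_linear = np.linalg.pinv(A) @ y

def ista(A, y, lam=0.01, iters=500):
    """Sparsity-constrained reconstruction via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
    return x

x_sparse = ista(A, y)
err_linear = np.linalg.norm(x_linear - x_true)
err_sparse = np.linalg.norm(x_sparse - x_true)
# When the scene really is sparse, the same measurements give a far better
# reconstruction under the sparsity prior than under the linear one.
```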
Computational Displays in 4D, 6D, 8D
We have explored how light propagates from thin elements into a volume for viewing for both automultiscopic displays and holograms. In particular, devices that are typically connected with geometric optics, like parallax barriers, differ in treatment from those that obey physical optics, like holograms. However, the two concepts are often used to achieve the same effect of capturing or displaying a combination of spatial and angular information. Our work connects the two approaches under a general framework based in ray space, from which insights into applications and limitations of both parallax-based and holography-based systems are observed.
Both parallax barrier systems and the practical holographic displays are limited in that they only provide horizontal parallax. Mathematically, this is equivalent to saying that they can always be expressed as a rank-1 matrix (i.e, a matrix in which all the columns are linearly related). Knowledge of this mathematical limitation has helped us to explore the space of possibilities and extend the capabilities of current display types. In particular, we have designed a display that uses two LCD panels, and an optimisation algorithm, to produce a content-adaptive automultiscopic display (SIGGRAPH Asia 2010).
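The rank argument can be made concrete: a dual-layer display reproduces the outer product of its two panel patterns, so a target light-field matrix must be approximated by nonnegative low-rank factors, with the rank playing the role of the number of time-multiplexed frames. A minimal sketch using multiplicative NMF updates (the toy matrix and rank values are assumptions, not the SIGGRAPH Asia 2010 algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "light field matrix": rows index front-panel pixels, columns rear-panel
# pixels; entry (i, j) is the target radiance of the ray through pixels i and j.
L = rng.random((16, 16))

def dual_layer_factorize(L, rank=3, iters=200):
    """Approximate L with F @ B, F and B nonnegative (panel transmittances).

    Standard multiplicative NMF updates; `rank` corresponds to the number of
    time-multiplexed frame pairs shown on the two panels.
    """
    m, n = L.shape
    F = rng.random((m, rank)) + 0.1
    B = rng.random((rank, n)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        F *= (L @ B.T) / (F @ B @ B.T + eps)
        B *= (F.T @ L) / (F.T @ F @ B + eps)
    return F, B

F1, B1 = dual_layer_factorize(L, rank=1)   # static two-panel display
F3, B3 = dual_layer_factorize(L, rank=3)   # three time-multiplexed frames
err1 = np.linalg.norm(L - F1 @ B1)
err3 = np.linalg.norm(L - F3 @ B3)
# More frames (higher rank) reproduce the target light field more faithfully,
# which is exactly the escape from the rank-1 limitation described above.
```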
(Joint work with R Horstmeyer, Se Baek Oh, George Barbastathis, Doug Lanman, Matt Hirsch and Yunhee Kim) http://cameraculture.media.mit.edu
In other work we have developed a 6D optical system that responds to changes in viewpoint as well as changes in surrounding light. Our lenticular array alignment allows us to achieve such a system as a passive setup, omitting the need for electrical components. Unlike traditional 2D flat displays, our 6D displays discretize the incident light field and modulate 2D patterns in order to produce super-realistic (2D) images. By casting light at variable intensities and angles onto our 6D displays, we can produce multiple images as well as store greater information capacity on a single 2D film (SIGGRAPH 2008).
Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent inventions include transient imaging to look around a corner, next generation CAT-Scan machine, imperceptible markers for motion capture (Prakash), long distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra) and new theoretical models to augment light fields (ALF) to represent wave phenomena.
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the Darpa Young Faculty award. He holds over 40 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on Computational Photography. http://raskar.info
SIGGRAPH 2014 Course on Computational Cameras and Displays (part 2)Matthew O'Toole
Recent advances in both computational photography and displays have given rise to a new generation of computational devices. Computational cameras and displays provide a visual experience that goes beyond the capabilities of traditional systems by adding computational power to optics, lights, and sensors. These devices are breaking new ground in the consumer market, including lightfield cameras that redefine our understanding of pictures (Lytro), displays for visualizing 3D/4D content without special eyewear (Nintendo 3DS), motion-sensing devices that use light coded in space or time to detect motion and position (Kinect, Leap Motion), and a movement toward ubiquitous computing with wearable cameras and displays (Google Glass).
This short (1.5 hour) course serves as an introduction to the key ideas and an overview of the latest work in computational cameras, displays, and light transport.
ACM SIGGRAPH is delighted to present the 2017 Computer Graphics Achievement Award to Ramesh Raskar in recognition of his pioneering contributions to the fields of computational photography and light transport and for applying these technologies for social impact.
https://www.siggraph.org/about/awards/2017-cg-achievement-award-ramesh-raskar/
I recently gave a talk at ICCP 2015 and clarified that we should stop working on coded apertures for focus effects (thus negating my team's own work in this area). I also spoke about the lost decade of computational photography and how we have wasted too many years working on the wrong problems.
The way back to normal starts here
We all want to get out of the house. To reopen the economy. To feel secure again. Safe Paths builds tools that help communities flatten the curve of COVID-19 — together. CovidSafePaths.org
Video of the talk at https://www.youtube.com/watch?v=x9TCYuMUnco
Friction in data sharing is a major challenge for large-scale machine learning. Emerging technologies in domains such as biomedicine, health and finance benefit from distributed deep learning methods, which allow multiple entities to train a deep neural network without requiring data sharing or resource aggregation in one single place. The talk will explore the main sources of data friction that complicate the capture, analysis and deployment of ML: siloed and unstructured data, privacy and regulation of data sharing, and incentive models for data-transparent ecosystems. The talk will compare the distributed deep learning methods of federated learning and split learning. Our team at MIT has pioneered a range of approaches including automated machine learning (AutoML), privacy-preserving machine learning (PrivateML) and intrinsic as well as extrinsic data valuation (Data Markets). One of the programs at MIT aims to create a standard for data-transparent ecosystems that can simultaneously address the privacy and utility of data.
Bio: Ramesh Raskar is an Associate Professor at MIT Media Lab and directs the Camera Culture research group. His focus is on AI and Imaging for health and sustainability, spanning research in physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X], the Apple Privacy Team and Facebook, and has co-founded/advised several companies. Project page: https://splitlearning.github.io/
In his recent role at Facebook, he launched and led innovation teams in Digital Health, Health-tech, Satellite Imaging, TV and Bluetooth bandwidth for Connectivity, VR/AR and ‘Emerging Worlds’ initiative for FB.
At MIT, his co-inventions include camera to see around corners, femto-photography, automated machine learning (auto-ML), private ML, low-cost eye care devices (Netra,Catra, EyeSelfie), a novel CAT-Scan machine, motion capture (Prakash), long distance barcodes (Bokode), 3D interaction displays (BiDi screen), new theoretical models to augment light fields (ALF) to represent wave phenomena and algebraic rank constraints for 3D displays(HR3D).
Video: https://www.youtube.com/watch?v=2jq_5FaQbTg
After repeated rejections, the project of a lifetime of Ramesh Raskar (associate professor at MIT) finally comes to life.
How did he manage to find his way out of this jungle of misleading signs and career traps? By becoming a pathfinder: always focused on the goal, but also critical and ready to adjust his strategy to reach it.
An incredible life lesson that he gave us in this talk at the last FAIL at Massachusetts Institute of Technology (MIT).
https://www.youtube.com/watch?v=2jq_5FaQbTg&feature=youtu.be&fbclid=IwAR3aAo7SIiCuHY_6ICTjXLOpNBUBwEEJUq72pD-V8N2nX2cWaVIxtPM1gBM
Ramesh Raskar is an Associate Professor at MIT Media Lab and directs the Camera Culture research group. His focus is on AI and Imaging for health and sustainability. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., automating machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X] and Facebook and co-founded/advised several companies.
http://raskar.info or CameraCulture Wiki Page
How to come up with ideas: Idea Hexagon
How to write a paper
How to give a talk
Open research problems
How to decide merit of a project
How to attend a conference, brainstorm
Strive for Five
Before 5 teams
Be early, let others do details
Beyond 5 years
What no one is thinking about
Within 5 steps of Human Impact
Relevance
Beyond 5 mins of instruction
Deep, iterative, participatory
Fusing 5+ Expertise
Fun, barrier for others
Associate Professor, MIT Media Lab
Ramesh Raskar is founder of the Camera Culture research group at the Massachusetts Institute of Technology (MIT) Media Lab and associate professor of Media Arts and Sciences at MIT. Raskar is the co-inventor of radical imaging solutions including femto-photography, an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions for the developing world and a camera that allows users to read pages of a book without opening the cover. He is a pioneer in the fields of imaging, computer vision and machine learning.
Raskar’s focus is on building interfaces between social systems and cyber-physical systems. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., tools to enable keeping data private in distributed machine learning applications) and global (e.g., geomaps, autonomous mobility) domains. Recent inventions by Raskar’s team include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture, long-distance barcodes, touch + hover 3D interaction displays and new theoretical models to augment light fields to represent wave phenomena.
Raskar has dedicated his career to linking the best of the academic and entrepreneurial worlds with young engineers, igniting a passion for impact inventing. Raskar seeks to catalyze change on a massive scale by launching platforms that empower inventors to create solutions to improve lives globally.
Raskar has received the Lemelson Award, ACM SIGGRAPH Achievement Award, DARPA Young Faculty Award, Alfred P. Sloan Research Fellowship, TR100 Award from MIT Technology Review and Global Indus Technovator Award. He has worked on special research projects at Google [X] and Facebook and co-founded and advised several companies. He holds more than 80 US patents.
Making the Invisible Visible: Within Our Bodies, the World Around Us, and Beyond
We need to transition from analysis to synthesis when it comes to large scale image based studies of satellite or street level images.
Large-scale, image-based studies have the ability to unlock human potential and address some of the most important societal problems. The question really is: are we going to do that through analysis, or are we going to step up to the game and actually start doing synthesis? Are we only going to study and observe, or are we going to go out and actually make an impact on society?
Can global image repositories advance the UN's sustainable development goals (SDGs)? Can they help us understand the social determinants of health? Satellite imagery, Google Street View and user-contributed photos from global image repositories are being used for large-scale image-based studies, visual census and sentiment analysis [Ermon][http://StreetScore.media.mit.edu]. But we need to go beyond simply relying on big data for investigating social questions via remote analysis. We need to transition from analysis to synthesis. For deployable social solutions, we need to consider the full stack of physical devices, organizational interests and sector-specific resources.
Large image-based studies allow us to predict poverty from daytime and nighttime satellite imagery, which can inform critical decisions for aid and development planning. In the project 'StreetScore', our group has shown that semantic analysis of street-level imagery such as Google Street View can provide rich insights into urban perception; our recent project 'StreetChange' shows the benefits of time-series data in driving these insights (http://streetchange.media.mit.edu).
We have seen some amazing work, and you'll hear from Stefano about poverty mapping; previous collaborators of mine have moved from population-density maps to crop maps. So there has been fantastic progress in using global imagery taken from satellites or drones. Street-level imagery is also very widely available, either highly structured like Google Street View or from user-contributed photos. Researchers in my group have been asking whether we can do sentiment analysis of this imagery - in this case, sentiment analysis of perceived safety for Google Street View images - and then create citywide maps of perceived safety that can be used by city planners and urban planners. Which is great. But coming back to analysis-versus-synthesis opportunities, I'm going to give you a flavor of one of the projects we worked on, which is street addresses.
Project page: https://splitlearning.github.io/
Papers: https://arxiv.org/search/cs?searchtype=author&query=Raskar
Video: https://www.youtube.com/watch?v=8GtJ1bWHZvg
Split learning for health: Distributed deep learning without sharing raw patient data: https://arxiv.org/pdf/1812.00564.pdf
Distributed learning of deep neural network over multiple agents
https://www.sciencedirect.com/science/article/pii/S1084804518301590
Otkrist Gupta, Ramesh Raskar,
In domains such as health care and finance, shortage of labeled data and computational resources is a critical issue while developing machine learning algorithms. To address the issue of labeled data scarcity in training and deployment of neural network-based systems, we propose a new technique to train deep neural networks over several data sources. Our method allows for deep neural networks to be trained using data from multiple entities in a distributed fashion. We evaluate our algorithm on existing datasets and show that it obtains performance which is similar to a regular neural network trained on a single machine. We further extend it to incorporate semi-supervised learning when training with few labeled samples, and analyze any security concerns that may arise. Our algorithm paves the way for distributed training of deep neural networks in data sensitive applications when raw data may not be shared directly.
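A minimal sketch of the cut-layer idea behind this kind of distributed training, in plain NumPy: the data owner computes up to a hidden layer and transmits only those activations; the server completes the forward pass and returns the gradient at the cut. This is an illustrative toy (two linear layers, ReLU cut, MSE loss), not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientHalf:
    """Layers held by the data owner; only cut-layer activations leave."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.normal(0, 0.5, (d_in, d_hidden))
    def forward(self, X):
        self.X = X
        self.A = np.maximum(X @ self.W, 0)    # ReLU at the cut layer
        return self.A                          # <- only this is transmitted
    def backward(self, grad_A, lr=0.1):
        grad_Z = grad_A * (self.A > 0)
        self.W -= lr * self.X.T @ grad_Z

class ServerHalf:
    """Remaining layers; never sees the raw inputs X."""
    def __init__(self, d_hidden, d_out):
        self.V = rng.normal(0, 0.5, (d_hidden, d_out))
    def step(self, A, y, lr=0.1):
        pred = A @ self.V
        grad_pred = 2 * (pred - y) / len(y)    # d(MSE)/d(pred)
        grad_A = grad_pred @ self.V.T          # <- sent back to the client
        self.V -= lr * A.T @ grad_pred
        return grad_A, float(np.mean((pred - y) ** 2))

# Toy regression: y = sum of the inputs.
X = rng.random((64, 4))
y = X.sum(axis=1, keepdims=True)
client, server = ClientHalf(4, 8), ServerHalf(8, 1)
for _ in range(300):
    A = client.forward(X)
    grad_A, loss = server.step(A, y)
    client.backward(grad_A)
```

Only the cut-layer activations and their gradients cross the boundary, which is the property that lets multiple data holders participate without sharing raw records.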
What is SIGGRAPH NEXT?
By Juliet Fiss
What will be the next big thing at SIGGRAPH, and how can the SIGGRAPH community contribute in an impactful way to fields outside of traditional computer graphics? SIGGRAPH NEXT at SIGGRAPH 2015 explored these questions. In this new addition to the SIGGRAPH program, an eclectic set of speakers gave TED-style talks and posed grand challenges to the SIGGRAPH community. In this blog post, Professor Ramesh Raskar of the MIT Media Lab introduces SIGGRAPH NEXT and outlines his vision for it.
What will be the next big thing at SIGGRAPH?
The SIGGRAPH community has a set of hammers that it uses to solve problems: geometry processing, rendering, animation, and imaging. What will be the next hammer, the next major field of study to appear at SIGGRAPH? Let's examine where our research ideas come from. Often, advances in machine learning, optimization, signal processing, and optics forge our hammers. Our selection of hammer also depends on the nails we see. The most common application areas of computer graphics currently include computer-aided design, movies, games, and photography.
We often ask: “Does this work contribute to SIGGRAPH techniques?”
We should also ask, “Does this work contribute SIGGRAPH techniques to _____?”
When we answer the challenges posed by these traditional application areas of computer graphics, we are “drinking our own champagne.” We have made amazing progress in these application areas, and we should celebrate! SIGGRAPH NEXT is about finding new varieties of champagne; for that, we need new varieties of grapes. We should invite others from nontraditional and emerging application areas to enjoy our champagne with us, and they will become part of our community. First, we can expand our work in existing areas like mobile, user interaction, virtual reality, fabrication, and new types of cameras. We can also expand into emerging areas such as healthcare, energy, education, entrepreneurship, materials, tissue fabrication, and social media. What’s next?
Professor Raskar highlights three top areas where we can make an impact. One big take-home message is that many of these applications involve biology: bio is the new digital, and it will affect us ubiquitously.
'Media' is the plural of 'medium'. The medium for the impact of digital technologies at the MIT Media Lab can be photons, electrons, neurons, atoms, cells, musical notes and more.
Over the last 40 years, computing has moved from the processor to the network, to the social, and increasingly to the sensory.
MIT Media Lab works at the intersection of computing and such media for human-centric technologies.
Ramesh Raskar
MIT Media Lab
Ramesh Raskar is an Associate Professor at MIT Media Lab. Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent projects and inventions include transient imaging to look around a corner, a next generation CAT-Scan machine, imperceptible markers for motion capture (Prakash), long distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra,Catra), new theoretical models to augment light fields (ALF) to represent wave phenomena and algebraic rank constraints for 3D displays(HR3D).
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the Darpa Young Faculty award. Other awards include Marr Prize honorable mention 2009, LAUNCH Health Innovation Award, presented by NASA, USAID, US State Dept and NIKE, 2010, Vodafone Wireless Innovation Project Award (first place), 2011. He holds over 50 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on Computational Photography.
2137ad Merindol Colony Interiors, where refugees try to build a seemingly norm... (luforfor)
These are the interiors of the Merindol Colony in 2137ad, after the Climate Change Collapse and the Apocalypse Wars. Merindol is a small colony in the Italian Alps that is home to around 4,000 humans. The colony's values center mainly on meritocracy and selection by effort.
2137ad - Characters that live in Merindol and are at the center of the main stories (luforfor)
Kurgan is a Russian expatriate who is secretly in love with Sonia Contado. Henry is a British soldier who took refuge in the Merindol Colony in 2137ad. He is the lover of Sonia Contado.
Explore the multifaceted world of Muntadher Saleh, an Iraqi polymath renowned for his expertise in visual art, writing, design, and pharmacy. This SlideShare delves into his innovative contributions across various disciplines, showcasing his unique ability to blend traditional themes with modern aesthetics. Learn about his impactful artworks, thought-provoking literary pieces, and his vision as a Neo-Pop artist dedicated to raising awareness about Iraq's cultural heritage. Discover why Muntadher Saleh is celebrated as "The Last Polymath" and how his multidisciplinary talents continue to inspire and influence.
Hadj Ounis's most notable work is his sculpture titled "Metamorphosis." This piece showcases Ounis's mastery of form and texture, as he seamlessly combines metal and wood to create a dynamic and visually striking composition. The juxtaposition of the two materials creates a sense of tension and harmony, inviting viewers to contemplate the relationship between nature and industry.
1. Raskar, Camera Culture, MIT Media Lab
Computational Photography: Epsilon to Coded Imaging
Ramesh Raskar, Camera Culture
Associate Professor, MIT Media Lab http://raskar.info
2. (title slide)
3. Tools for Visual Computing
Shadow, Refractive, Reflective
Fernald, Science [Sept 2006]
4. How can we create an entirely new class of imaging platforms that have an understanding of the world that far exceeds human ability, and produce meaningful abstractions that are well within human comprehensibility?
Ramesh Raskar http://raskar.info
5. Raskar 2006: Computational Illumination, Augmented Reality (Mitsubishi Electric Research Laboratories)
Spatial augmented reality with single and multiple projectors, 1997-2003: planar and non-planar curved objects, pocket projectors.
Computational Camera and Photography
8. Short Exposure vs. Traditional Shutter vs. MURA: captured single photo and deblurred result.
Short exposure is dark and noisy; with a traditional shutter the result shows banding artifacts, and some spatial frequencies are lost.
9. Blurring == Convolution
Fourier transform: sharp photo x PSF (== sinc function) = blurred photo.
Traditional camera: shutter is OPEN: box filter.
10. Fourier transform: sharp photo x PSF (== broadband function) = blurred photo. Preserves high spatial frequencies.
Flutter shutter: shutter is OPEN and CLOSED.
11. Flutter Shutter Camera
Raskar, Agrawal, Tumblin [Siggraph 2006]
LCD opacity switched in coded sequence.
12. Traditional vs. Coded Exposure: deblurred images of a static object.
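The spectral argument in the flutter-shutter slides can be checked numerically: a box (open-shutter) PSF has zeros in its Fourier transform, so deconvolution loses those frequencies, while a broadband on/off exposure code keeps them recoverable. The 1D signal and the random binary code below are illustrative, not the published flutter-shutter code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
signal = np.zeros(n)
signal[60:120] = 1.0; signal[150:160] = 0.5      # sharp 1D "scene"

def blur_and_deblur(code):
    """Blur with the exposure code, then invert by regularized Fourier division."""
    psf = np.zeros(n); psf[:len(code)] = code / code.sum()
    H = np.fft.fft(psf)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
    # Wiener-style inverse; a box PSF has near-zero |H| bins, a coded one doesn't.
    est = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + 1e-6)))
    return np.linalg.norm(est - signal)

box = np.ones(32)                                # traditional open shutter
coded = rng.integers(0, 2, 32).astype(float)     # flutter (on/off) sequence
coded[0] = 1.0                                   # ensure the code is nonzero
err_box = blur_and_deblur(box)
err_coded = blur_and_deblur(coded)
# The broadband coded exposure deblurs with far less lost detail than the box.
```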
26. Mask + Sensor.
Coded Aperture Camera: full-resolution digital refocusing from a 2D photo. Heterodyne Light Field Camera: 4D light field.
40. Glare = low-frequency noise in 2D, but high-frequency noise in 4D.
Remove via simple outlier rejection over the (i, j) sensor and (u, x) ray coordinates.
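The glare slide's observation can be simulated: glare that looks like smooth 2D noise shows up as sparse outliers among the 4D angular ray samples of each pixel, so a robust statistic (here a median over the aperture samples) rejects it where plain averaging cannot. The sample counts and glare model below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each of 100 pixels is observed by 25 angular samples (a 5x5 aperture grid).
true_pixels = rng.random(100)                       # glare-free radiance
samples = np.repeat(true_pixels[:, None], 25, axis=1)
samples += rng.normal(0, 0.01, samples.shape)       # sensor noise

# Glare: a sparse subset of rays per pixel carries a strong veiling term.
glare_mask = rng.random(samples.shape) < 0.15
samples[glare_mask] += rng.uniform(0.5, 1.0, glare_mask.sum())

naive = samples.mean(axis=1)        # plain averaging keeps the glare bias
robust = np.median(samples, axis=1) # outlier rejection suppresses it

err_naive = np.abs(naive - true_pixels).mean()
err_robust = np.abs(robust - true_pixels).mean()
# The median over angular samples removes glare that averaging cannot.
```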
41. Rays = Waves for Propagation and Interface
Fresnel propagation, Chirp (Lens), Fourier transform, Fractional Fourier transform.
(Figure: a cascade of planes x0 through x4 and u1 through u4 tracing these transforms between ray and wave representations.)
42. Imaging via volume hologram (depth-specific imaging)
(Figure: the kernel K_VH(x4 = 0, u4 = theta_s / lambda; x3, u3) plotted over x3 [mm] and u3 [mm^-1].)
The slide derives the volume-hologram kernel K_VH(x4, u4; x3, u3) as a double integral combining a propagation phase term with a pair of sinc terms whose width is set by the hologram thickness L, starting from the intermediate kernel K_VH_I(x2, u2; x1, u1) and a response h(x2, x1) built from an exponential chirp and a sinc.
Parameters: lambda = 0.5 um, theta_s = 30 deg, L = 1 mm, z_f = 50 mm.
44. How can we create an entirely new class of imaging platforms that have an understanding of the world that far exceeds human ability, and produce meaningful abstractions that are well within human comprehensibility?
Ramesh Raskar http://raskar.info