Associate Professor, MIT Media Lab
Ramesh Raskar is the founder of the Camera Culture research group at the Massachusetts Institute of Technology (MIT) Media Lab and an associate professor of Media Arts and Sciences at MIT. Raskar is the co-inventor of radical imaging solutions including femto-photography (an ultra-fast imaging technique that can see around corners), low-cost eye-care solutions for the developing world, and a camera that allows users to read the pages of a book without opening its cover. He is a pioneer in the fields of imaging, computer vision and machine learning.
Raskar’s focus is on building interfaces between social systems and cyber-physical systems. These interfaces span research in physical (e.g., sensors, health-tech), digital (e.g., tools to enable keeping data private in distributed machine learning applications) and global (e.g., geomaps, autonomous mobility) domains. Recent inventions by Raskar’s team include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture, long-distance barcodes, touch + hover 3D interaction displays and new theoretical models to augment light fields to represent wave phenomena.
Raskar has dedicated his career to linking the best of the academic and entrepreneurial worlds with young engineers, igniting a passion for impact inventing. Raskar seeks to catalyze change on a massive scale by launching platforms that empower inventors to create solutions to improve lives globally.
Raskar has received the Lemelson Award, ACM SIGGRAPH Achievement Award, DARPA Young Faculty Award, Alfred P. Sloan Research Fellowship, TR100 Award from MIT Technology Review and Global Indus Technovator Award. He has worked on special research projects at Google [X] and Facebook and co-founded and advised several companies. He holds more than 80 US patents.
Making the Invisible Visible: Within Our Bodies, the World Around Us, and Beyond
What is SIGGRAPH NEXT?
By Juliet Fiss
What will be the next big thing at SIGGRAPH, and how can the SIGGRAPH community contribute in an impactful way to fields outside of traditional computer graphics? SIGGRAPH NEXT at SIGGRAPH 2015 explored these questions. In this new addition to the SIGGRAPH program, an eclectic set of speakers gave TED-style talks and posed grand challenges to the SIGGRAPH community. In this blog post, Professor Ramesh Raskar of the MIT Media Lab introduces SIGGRAPH NEXT and outlines his vision for it.
What will be the next big thing at SIGGRAPH?
The SIGGRAPH community has a set of hammers that it uses to solve problems: geometry processing, rendering, animation, and imaging. What will be the next hammer, the next major field of study, to appear at SIGGRAPH? Let’s examine where our research ideas come from. Often, advances in machine learning, optimization, signal processing, and optics forge our hammers. Our selection of hammer also depends on the nails we see. The most common application areas of computer graphics currently include computer-aided design, movies, games, and photography.
We often ask: “Does this work contribute to SIGGRAPH techniques?”
We should also ask, “Does this work contribute SIGGRAPH techniques to _____?”
When we answer the challenges posed by these traditional application areas of computer graphics, we are “drinking our own champagne.” We have made amazing progress in these application areas, and we should celebrate! SIGGRAPH NEXT is about finding new varieties of champagne; for that, we need new varieties of grapes. We should invite others from nontraditional and emerging application areas to enjoy our champagne with us, and they will become part of our community. First, we can expand our work in existing areas like mobile, user interaction, virtual reality, fabrication, and new types of cameras. We can also expand into emerging areas such as healthcare, energy, education, entrepreneurship, materials, tissue fabrication, and social media. What’s next?
Professor Raskar highlights three top areas where we can make an impact. One big take-home message is that many of these applications involve biology: bio is the new digital, and it will affect us ubiquitously.
'Media' is the plural of 'medium.' At the MIT Media Lab, the medium through which digital technologies make an impact can be photons, electrons, neurons, atoms, cells, musical notes and more.
Over the last 40 years, computing has moved from the processor to the network, to the social, and increasingly to the sensory.
The MIT Media Lab works at the intersection of computing and these media to build human-centric technologies.
Ramesh Raskar
MIT Media Lab
Ramesh Raskar is an Associate Professor at the MIT Media Lab. He joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging and human-computer interaction. Recent projects and inventions include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture (Prakash), long-distance barcodes (Bokode), touch + hover 3D interaction displays (BiDi screen), low-cost eye care devices (Netra, Catra), new theoretical models to augment light fields (ALF) to represent wave phenomena, and algebraic rank constraints for 3D displays (HR3D).
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship, and in 2010, the DARPA Young Faculty Award. Other honors include a Marr Prize honorable mention (2009), the LAUNCH Health Innovation Award, presented by NASA, USAID, the US State Department and NIKE (2010), and first place in the Vodafone Wireless Innovation Project Award (2011). He holds over 50 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on computational photography.
Ramesh Raskar (MIT Media Lab): The Unspoken Challenges of AR & VR | AugmentedWorldExpo
A talk from the Inspire Track at AWE USA 2018, the world's #1 XR Conference & Expo, held in Santa Clara, California, May 30 - June 1, 2018.
Ramesh Raskar (MIT Media Lab): The Unspoken Challenges of AR & VR
http://AugmentedWorldExpo.com
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-mit-keynote
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Ramesh Raskar, Associate Professor in the MIT Media Lab, presents the "Making the Invisible Visible: Within Our Bodies, the World Around Us and Beyond" tutorial at the May 2019 Embedded Vision Summit. For more information, please see http://cameraculture.media.mit.edu, http://www.media.mit.edu/~raskar and https://professional.mit.edu/programs/short-programs/advances-imaging.
The invention of X-ray imaging enabled us to see inside our bodies. The invention of thermal infrared imaging enabled us to depict heat. So, over the last few centuries, the key to making the invisible visible was recording with new slices of electromagnetic spectrum. But the impossible photos of tomorrow won’t be recorded; they’ll be computed.
Ramesh Raskar’s group has pioneered the field of femto-photography, which uses a high-speed camera that enables visualizing the world at nearly a trillion frames per second so that we can create slow-motion movies of light in flight. These techniques enable the seemingly impossible: seeing around corners, seeing through fog as if it were a sunny day and detecting circulating tumor cells with a device resembling a blood-pressure cuff.
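The "seeing around corners" idea can be sketched with a toy backprojection example. This is a simplified illustration, not the actual femto-photography pipeline: each time-resolved echo constrains the hidden point to lie at a fixed round-trip distance from a wall point, and intersecting the constraints from several wall points localizes it. All geometry, grid resolution and kernel parameters below are made-up illustration values.

```python
# Toy non-line-of-sight localization by backprojection (illustrative only).
import numpy as np

c = 3e8                                  # speed of light, m/s
hidden = np.array([0.4, 0.7])            # hidden point we try to recover
wall_pts = [np.array([x, 0.0]) for x in (-0.5, 0.0, 0.5)]

# Simulated time-of-flight: laser and sensor co-located at each wall point,
# so each echo's round trip is twice the distance to the hidden point.
times = [2 * np.linalg.norm(hidden - w) / c for w in wall_pts]

# Backprojection: every grid cell votes for how consistent it is with each echo.
xs = np.linspace(-1.0, 1.0, 201)
ys = np.linspace(0.01, 1.5, 150)
grid = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)
score = np.zeros(grid.shape[:2])
for w, t in zip(wall_pts, times):
    d = np.linalg.norm(grid - w, axis=-1)         # distance to wall point
    residual = 2 * d - c * t                      # path-length mismatch, metres
    score += np.exp(-residual**2 / 1e-4)          # soft vote (~1 cm tolerance)

# The peak of the accumulated votes is the recovered hidden point.
i, j = np.unravel_index(np.argmax(score), score.shape)
est = grid[i, j]
print(np.linalg.norm(est - hidden))               # recovery error in metres
```

The same intersection-of-constraints principle underlies real transient-imaging reconstructions, which additionally model diffuse bounces and use far finer time resolution.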
Raskar and his colleagues in the Camera Culture Group at the MIT Media Lab have advanced fundamental techniques and have pioneered new imaging and computer vision applications. Their work centers on the co-design of novel imaging hardware and machine learning algorithms, including techniques for the automated design of deep neural networks. Many of Raskar’s projects address healthcare, such as EyeNetra, a start-up that extends the capabilities of smart phones to enable low-cost eye exams.
In his keynote presentation, Raskar shares highlights of his group’s work, and his unique perspective on the future of imaging, machine learning and computer vision.
On March 11, 2011 Todd Marks presented The Singularity is Here at SXSW Interactive.
The topic of Singularity is heating up as more people discuss what will become of the human race when computers exceed our intelligence. This presentation explores several theories about the future of mankind and points out how the technology leading us there is already HERE.
“The Singularity is Near” is a book and film by futurist and prominent Singularitarian Ray Kurzweil. It is a documentary with a B-story drama in which Ray’s digital alter ego, Ramona, sets off on a quest to pass the Turing Test. Passing this test would signify the day computers can “think”, a milestone that some argue came close to occurring a few years ago and is not far off.
Learn what milestones we have already reached toward Singularity and what technologies present and future are leading us there. We will explore Location Based Services, Augmented Reality, Bio-Feedback and Smart Agents. We will analyze current trends in Bio-Technology, Nano-Technology, Computing and Robotics and discuss the possibility of Digital Immortality.
The Internet of Things, Ambient Intelligence, and the Move Towards Intelligen... (George Vanecek)
With the successful adoption of cloud-based services and the increasing capabilities of smart connected/wireless devices, the software and consumer electronics industries are turning towards innovating solutions within the Internet-of-Things (IoT) to offer consumers (and enterprises) smart solutions that take the dynamics of the real-world into consideration.
The vision is to bring the awareness of what happens in the real-world, how people live and how smart devices operate in the real world into the view and control of the digital world. Here the digital world is the totality of the Internet, the Web, and the private and public cloud services.
In this session, we will look at key technical trends and their increasing interdependency in the areas of real-world Sensing, Perception, Machine Learning, Context-awareness, dynamic Trust Determination, Semantic Web and Artificial Intelligence which are now enabling ambient intelligence and driving the emergence of Intelligence Systems within the Internet of Things. We will also look at the challenges that such interdependencies expose, and the opportunities that their solutions offer to the industry.
Sixth Sense is a gesture-based wearable computer interface system. Steve Mann developed an early version at the MIT Media Lab in 1994, followed by a head-worn gestural interface in 1997 and a neck-worn version in 1998; in 2009, Pranav Mistry of the MIT Media Lab developed new hardware and software for the head-worn and neck-worn versions. The system connects the physical world with the digital world around us. It consists of hardware components connected wirelessly to computing peripherals, and it uses ordinary surfaces, walls and physical objects as interfaces, narrowing the gap between the physical and digital worlds. It helps users make better-informed decisions by augmenting their knowledge; the goal is to bring part of the physical world into the digital world. Girubaa. G | Pavithra. M | Deepak. K, "Sixth Sense Technology," published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 2, Issue 6, October 2018. URL: http://www.ijtsrd.com/papers/ijtsrd18638.pdf
IoT 3.0: Connected Living in an Everything-Digital World (Fahim Kawsar)
We are observing a monumental effort from industry and academia to make everything connected. Naturally, to understand the needs of these connected things, we need a better understanding of humans and of where, when, and how they interact. Then we can create digital services and capabilities that fundamentally change the way we experience our lives. Right now, IoT is all about connectivity and scale. The next generation of IoT will be about learning and contextual automation. Designing intention- and behavior-aware services will be the principal source of differentiation and competitive advantage for industry players.
To this end, in this talk I explore how wearable devices and Wi-Fi networks can be used as a sensing platform to understand you and the world around you, and to design future consumer-facing connected services.
ACM SIGGRAPH is delighted to present the 2017 Computer Graphics Achievement Award to Ramesh Raskar in recognition of his pioneering contributions to the fields of computational photography and light transport and for applying these technologies for social impact.
https://www.siggraph.org/about/awards/2017-cg-achievement-award-ramesh-raskar/
I recently gave a talk at ICCP 2015 and clarified that we should stop working on coded aperture for focus effects! (Thus negating my team's work in this area.). I also spoke about the lost decade of computational photography and how we have wasted too many years working on the wrong problems.
The way back to normal starts here
We all want to get out of the house. To reopen the economy. To feel secure again. Safe Paths builds tools that help communities flatten the curve of COVID-19 — together. CovidSafePaths.org
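The core privacy idea behind tools like this can be roughly sketched in code. The following is a hypothetical illustration, not the actual Safe Paths protocol: two location trails are compared via salted hashes of coarse (place, time) buckets, so a possible exposure can be flagged without either party revealing a raw GPS history. The bucket sizes and hashing scheme are illustrative assumptions.

```python
# Hypothetical sketch of hashed location-trail intersection (not the real
# Safe Paths protocol): overlap in coarse space-time cells flags exposure.
import hashlib

def bucketize(lat, lon, t, cell=0.001, window=300):
    """Quantize a GPS point to a coarse space-time cell (~100 m, 5 min)."""
    return (round(lat / cell), round(lon / cell), t // window)

def hashed_trail(points, salt=b"shared-salt"):
    """Hash each cell so trails can be compared without exposing raw data."""
    return {
        hashlib.sha256(salt + repr(bucketize(*p)).encode()).hexdigest()
        for p in points
    }

# A patient's (redacted) trail and a user's locally stored trail:
# (lat, lon, seconds). Same place and time window -> same hashed cell.
patient = [(42.3601, -71.0942, 1000), (42.3605, -71.0950, 2000)]
user = [(42.3601, -71.0942, 1100), (40.7128, -74.0060, 2000)]

exposures = hashed_trail(patient) & hashed_trail(user)
print(len(exposures))  # number of overlapping space-time cells
```

In practice such systems also handle GPS noise (overlapping cells), rotate salts, and redact sensitive locations before anything is published.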
Video of the talk at https://www.youtube.com/watch?v=x9TCYuMUnco
Friction in data sharing is a major challenge for large-scale machine learning. Emerging applications in domains such as biomedicine, health and finance benefit from distributed deep learning methods, which allow multiple entities to train a deep neural network without requiring data sharing or resource aggregation in one place. The talk will explore the main sources of data friction that hinder the capture, analysis and deployment of ML: siloed and unstructured data, privacy and regulation of data sharing, and incentive models for data-transparent ecosystems. The talk will compare two distributed deep learning methods, federated learning and split learning. Our team at MIT has pioneered a range of approaches, including automated machine learning (AutoML), privacy-preserving machine learning (PrivateML) and intrinsic as well as extrinsic data valuation (Data Markets). One of the programs at MIT aims to create a standard for data-transparent ecosystems that can simultaneously address the privacy and utility of data.
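The split learning idea described above can be sketched in a few lines of numpy. This is a minimal illustration under simplified assumptions, not the MIT reference implementation: the client keeps its raw data and the first layer of the network, the server holds the rest, and only the cut-layer activations and their gradients cross the boundary.

```python
# Minimal split learning sketch: raw data X never leaves the client.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # private client data (never sent)
y = rng.normal(size=(8, 1))            # labels held by the server here
W1 = rng.normal(size=(4, 5)) * 0.1     # client-side layer
W2 = rng.normal(size=(5, 1)) * 0.1     # server-side layer
lr = 0.1

def step():
    global W1, W2
    # Client: forward pass up to the cut layer; send activations h only.
    h = np.maximum(X @ W1, 0.0)
    # Server: finish the forward pass and compute the loss.
    pred = h @ W2
    loss = np.mean((pred - y) ** 2)
    # Server: backprop to the cut, update W2, return the gradient at the cut.
    g_pred = 2 * (pred - y) / len(y)
    g_h = g_pred @ W2.T
    W2 -= lr * (h.T @ g_pred)
    # Client: finish backprop locally through the ReLU, update W1.
    g_pre = g_h * (h > 0)
    W1 -= lr * (X.T @ g_pre)
    return loss

losses = [step() for _ in range(50)]
print(losses[0], losses[-1])  # training loss before and after
```

Federated learning, by contrast, would ship full model weights between parties and average them; split learning ships only activations and gradients at the cut.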
Bio: Ramesh Raskar is an Associate Professor at the MIT Media Lab, where he directs the Camera Culture research group. His focus is on AI and imaging for health and sustainability, spanning research in physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X], the Apple privacy team and Facebook, and has co-founded and advised several companies. Project page: https://splitlearning.github.io/
In his recent role at Facebook, he launched and led innovation teams in Digital Health, Health-tech, Satellite Imaging, TV and Bluetooth bandwidth for Connectivity, VR/AR and ‘Emerging Worlds’ initiative for FB.
At MIT, his co-inventions include a camera to see around corners (femto-photography), automated machine learning (AutoML), private ML, low-cost eye care devices (Netra, Catra, EyeSelfie), a novel CAT-scan machine, motion-capture markers (Prakash), long-distance barcodes (Bokode), 3D interaction displays (BiDi screen), new theoretical models to augment light fields (ALF) to represent wave phenomena, and algebraic rank constraints for 3D displays (HR3D).
Video: https://www.youtube.com/watch?v=2jq_5FaQbTg
After many rejections, Ramesh Raskar's (associate professor at MIT) project of a lifetime finally comes to life.
How did he manage to find his way out of this jungle of misleading signs and career traps? By becoming a pathfinder: always focused on the goal, yet critical and ready to adjust his strategy to reach it.
An incredible life lesson that he shared in this talk at the last FAIL! event at the Massachusetts Institute of Technology (MIT).
Ramesh Raskar is an Associate Professor at the MIT Media Lab and directs the Camera Culture research group. His focus is on AI and imaging for health and sustainability, spanning research in physical (e.g., sensors, health-tech), digital (e.g., automating machine learning) and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), ACM SIGGRAPH Achievement Award (2017), DARPA Young Faculty Award (2009), Alfred P. Sloan Research Fellowship (2009), TR100 Award from MIT Technology Review (2004) and Global Indus Technovator Award (2003). He has worked on special research projects at Google [X] and Facebook and co-founded/advised several companies.
http://raskar.info or CameraCulture Wiki Page
How to come up with ideas: the Idea Hexagon
How to write a paper
How to give a talk
Open research problems
How to decide merit of a project
How to attend a conference, brainstorm
Strive for Five
Before 5 teams: be early, let others do the details
Beyond 5 years: what no one is thinking about
Within 5 steps of human impact: relevance
Beyond 5 minutes of instruction: deep, iterative, participatory
Fusing 5+ expertise: fun, a barrier for others
We need to transition from analysis to synthesis when it comes to large-scale image-based studies of satellite or street-level imagery.
Large-scale image-based studies have the ability to unlock human potential and address some of the most important societal problems. The question is: are we going to do that through analysis alone, or are we going to step up and actually start doing synthesis? Are we only going to study and make observations, or are we going to go out and make an impact on society?
Can global image repositories help advance the UN's Sustainable Development Goals (SDGs)? Can they help us understand the social determinants of health? Satellite imagery, Google Street View, and user-contributed photos from global image repositories are being used for large-scale image-based studies, visual census, and sentiment analysis [Ermon][http://StreetScore.media.mit.edu]. But we need to go beyond simply relying on big data to investigate social questions via remote analysis. We need to transition from analysis to synthesis. For deployable social solutions, we need to consider the full stack of physical devices, organizational interests, and sector-specific resources.
Large-scale image-based studies allow us to predict poverty from daytime and nighttime satellite imagery, which can inform critical decisions in aid and development planning. In the 'StreetScore' project, our group showed that semantic analysis of street-level imagery, such as Google Street View, can provide rich insights into urban perception; our recent 'StreetChange' project shows the benefits of time-series data in driving these insights (http://streetchange.media.mit.edu).
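As an illustration of this kind of image-based prediction, here is a toy sketch in the spirit of StreetScore. Everything below is hypothetical: the random "features" stand in for learned image features, and plain ridge regression stands in for the project's actual model, which was trained on crowdsourced perceived-safety ratings of real street images.

```python
import numpy as np

# Toy StreetScore-style predictor: regress a perceived-safety score
# from per-image feature vectors. Features and scores are synthetic.
rng = np.random.default_rng(42)

n_images, n_features = 200, 16
X = rng.normal(size=(n_images, n_features))   # stand-in for image features
true_w = rng.normal(size=n_features)
# Stand-in for crowdsourced perceived-safety scores.
y = X @ true_w + rng.normal(0, 0.1, n_images)

# Ridge regression: w = (X^T X + lam * I)^-1 X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

pred = X @ w
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"fit R^2 on training data: {r2:.3f}")
```

Once such a regressor is fit on rated images, it can score every street image in a city, which is what makes the city-wide perception maps possible.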
We have seen some amazing work. You'll hear from Stefano about poverty mapping, and my previous collaborators have worked on everything from population-density maps to crop maps. There has been fantastic progress in using global imagery taken from satellites or drones. Street-level imagery is also very widely available, either highly structured like Google Street View or from user-contributed photos. To that end, Nikhil Naik and others in my group have been asking: can we do sentiment analysis of this imagery? In this case, sentiment analysis of perceived safety from Google Street View images, to create city-wide maps of perceived safety that can be used by city planners and urban planners. Which is great. But coming back to the analysis-versus-synthesis opportunity, I'm going to give you a flavor of one of the projects we worked on: street addresses.
Project page: https://splitlearning.github.io/
Papers: https://arxiv.org/search/cs?searchtype=author&query=Raskar
Video: https://www.youtube.com/watch?v=8GtJ1bWHZvg
Split learning for health: Distributed deep learning without sharing raw patient data: https://arxiv.org/pdf/1812.00564.pdf
Distributed learning of deep neural network over multiple agents
https://www.sciencedirect.com/science/article/pii/S1084804518301590
Otkrist Gupta, Ramesh Raskar
In domains such as health care and finance, shortage of labeled data and computational resources is a critical issue while developing machine learning algorithms. To address the issue of labeled data scarcity in training and deployment of neural network-based systems, we propose a new technique to train deep neural networks over several data sources. Our method allows for deep neural networks to be trained using data from multiple entities in a distributed fashion. We evaluate our algorithm on existing datasets and show that it obtains performance which is similar to a regular neural network trained on a single machine. We further extend it to incorporate semi-supervised learning when training with few labeled samples, and analyze any security concerns that may arise. Our algorithm paves the way for distributed training of deep neural networks in data sensitive applications when raw data may not be shared directly.
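The idea in the abstract above can be sketched in a few lines: a client that holds the raw data trains the early layers, a server trains the later layers, and only intermediate activations and their gradients cross the boundary. This is a minimal toy version in NumPy, with made-up layer sizes and a synthetic regression task, not the paper's actual protocol or code.

```python
import numpy as np

# Minimal split-learning sketch. The client holds raw data and the first
# layer; the server holds the second layer. Only activations and
# activation-gradients cross the "network" -- raw inputs never leave
# the client.
rng = np.random.default_rng(0)

class Client:
    def __init__(self, n_in, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.5, (n_in, n_hidden))
        self.lr = lr

    def forward(self, X):
        self.X = X
        self.A = np.tanh(X @ self.W)          # activations sent to server
        return self.A

    def backward(self, dA):                   # gradient received from server
        dZ = dA * (1 - self.A ** 2)           # tanh derivative
        self.W -= self.lr * self.X.T @ dZ

class Server:
    def __init__(self, n_hidden, n_out, lr=0.1):
        self.W = rng.normal(0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def step(self, A, y):
        pred = A @ self.W
        err = pred - y                        # squared-error gradient
        dA = (err @ self.W.T) / len(A)        # gradient sent back to client
        self.W -= self.lr * (A.T @ err) / len(A)
        return float((err ** 2).mean()), dA

# Toy regression task: y = sum of the inputs.
X = rng.normal(size=(64, 4))
y = X.sum(axis=1, keepdims=True)
client, server = Client(4, 8), Server(8, 1)

losses = []
for _ in range(200):
    A = client.forward(X)         # client -> server: activations only
    loss, dA = server.step(A, y)  # server -> client: activation gradients
    client.backward(dA)
    losses.append(loss)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The training loop shows the key property: the server never sees `X` or `y`'s provenance beyond what the activations reveal, which is what makes the scheme attractive for data-sensitive domains like health care.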
8. Raskar | MIT Media Lab
Multi-Mode/Multi-Format, IoT, Perception, Ergonomics and Hygiene, Ethics, Gestural Language
9. Raskar | MIT Media Lab
Challenges in AR
Pipeline: World → Input (camera, location, sensors) → Process (identify, track, abstract) → Output (display, 3D overlay, interaction) → World
Challenges: multimodal/multi-format output, IoT, training datasets, perception, ergonomics and hygiene, ethics, gestural language, eyeglasses, laziness, authoring
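The input → process → output loop on the slide can be rendered schematically. Stage names and data below are hypothetical placeholders; a real AR stack involves tracking, SLAM, and rendering engines, none of which appear here.

```python
from dataclasses import dataclass

# Schematic one-pass rendering of the slide's pipeline:
# World -> Input -> Process -> Output -> World.

@dataclass
class Frame:
    image: str        # stand-in for camera pixels
    location: tuple   # stand-in for GPS / pose sensors

def sense(world_state: str) -> Frame:
    """Input stage: camera and location sensors."""
    return Frame(image=f"pixels({world_state})", location=(42.36, -71.09))

def process(frame: Frame) -> dict:
    """Process stage: identify, track, abstract."""
    return {"object": "landmark", "tracked_at": frame.location}

def render(scene: dict) -> str:
    """Output stage: display a 3D overlay the user can interact with."""
    return f"overlay[{scene['object']} @ {scene['tracked_at']}]"

overlay = render(process(sense("street_corner")))
print(overlay)
```

The point of the diagram is that each stage has its own cluster of challenges: the input side fights sensor limits, the process side fights training data and perception, and the output side fights ergonomics, ethics, and authoring.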
10. Raskar | MIT Media Lab
(Challenges-in-AR pipeline diagram, repeated)
11. Raskar | MIT Media Lab
South Park - Jurassic Park
12. Raskar | MIT Media Lab
Photoreal vs. Functional Realism
Non-Photorealistic Camera, SIGGRAPH 2004
http://Raskar.info/NprCamera
13. Raskar | MIT Media Lab
Realism: functional, photoreal, physical, non-visual [Ferwerda 2003]
14. Raskar | MIT Media Lab
Realism: functional, photoreal, physical, non-visual
Open problems: archaic neuroscience and perception models; few open datasets
21. Raskar | MIT Media Lab
Guerrilla Marketing
FHM Magazine
22. Raskar | MIT Media Lab
Augmented Reality (AR)
(Challenges-in-AR pipeline diagram, repeated)
26. Raskar | MIT Media Lab
“People are lazy”
Rob Lindeman, Benjamin Lok, Bill Baxter
Cognitive or Physical Exertion
Fatigue = Reduced Usage
27. Raskar | MIT Media Lab
Gesture Lexicon
Phone AR: No gesture
No Controllers
Natural Objects
Open Data Sets
28. Raskar | MIT Media Lab
Augmented Reality (AR)
(Challenges-in-AR pipeline diagram, repeated)
31. Raskar | MIT Media Lab
Mk Haley, Disney Imagineering:
“My peers working with VR get frustrated with women ‘ruining’ the gear because of mascara, foundation, or moisturizer on their face, or with women being ‘difficult’ if they have ponytails or other hairstyles.”
32. Raskar | MIT Media Lab
‘Glassholes’ Ethics
Info Asymmetry
Visual filter bubbles
‘Turn off sensors’
Eye tracking
33. Raskar | MIT Media Lab
(Challenge keyword slide, repeated)
35. Raskar | MIT Media Lab
Poor Man’s Palace
Augment the world, projectors and proxy geometry, controlling light every millimeter at every millisecond.
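One piece of machinery behind "controlling light at every millimeter" is radiometric compensation: dividing the desired appearance by the surface's reflectance so the projected light cancels the surface texture. The sketch below is deliberately simplified and hypothetical; the actual projector-based augmentation pipeline also handles geometry, multi-projector blending, and projector response.

```python
import numpy as np

# Toy radiometric compensation: choose projector output so that a
# textured surface appears like a desired image. Model: observed
# intensity = albedo * projected intensity (no ambient term).
rng = np.random.default_rng(1)

desired = rng.uniform(0.1, 0.7, size=(4, 4))   # target appearance per pixel
albedo = rng.uniform(0.8, 1.0, size=(4, 4))    # surface reflectance per pixel

# Compensated projector image, clipped to the displayable range [0, 1];
# clipping is where compensation fails on dark or saturated surfaces.
projected = np.clip(desired / albedo, 0.0, 1.0)

observed = albedo * projected
err = float(np.abs(observed - desired).max())
print(f"max compensation error: {err:.6f}")
```

When the ratio stays within the projector's range, the surface texture is cancelled exactly; real systems must also cope with ambient light and limited projector brightness.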
37. Raskar | MIT Media Lab
Acknowledgements
• MERL: Amit Agrawal, Ashok Veeraraghavan, Ankit Mohan, Jack Tumblin, Jeroen van Baar, Paul Beardsley, Remo Ziegler, Thomas Willwacher, Srinivas Rao, Cliff Forlines, Paul Dietz, Joe Marks, Darren Leigh
• Office of the Future group, UNC Chapel Hill: Greg Welch, Kok-Lim Low, Deepak Bandyopadhyay, Aditi Majumder, Michael Brown, Ruigang Yang, Henry Fuchs, Herman Towles, Wei-Chao Chen
• Camera Culture Group, MIT Media Lab: John Werner, Marco Jacobs
• Mark Billinghurst, University of South Australia
• Sameer Rawal, DISQ, India
• Tim Smith, Sevaleader
• Alvaro Cassinelli, University of Tokyo
• Benjamin Lok, University of Florida
• Bill Baxter
• Rob Lindeman
• Brett Jones
• Mk Haley, Disney
38. Raskar | MIT Media Lab
Challenges in AR
(Challenges-in-AR pipeline diagram, repeated)
Free Book: raskar.info/book
Slideshare.net/cameraculture