Realistic Object Appearance using Bidirectional Texture Functions
1. Institute of Computer Science II
Computer Graphics
Realistic Object Appearance using Bidirectional Texture Functions
Christopher Schwartz, Reinhard Klein
3D COFORM STAR – VAST 2011, Prato, Italy, 20/10/2011
3. Object Appearance
• Common presentation: Geometry (+ Texture)
• … but is this sufficient?
[Figure: geometry only vs. with texture vs. correct appearance]
4. Importance of Surface Appearance
• Same shape, different materials
• Different "look-and-feel"
• Important hints about the object
• Misleading vs. better understanding
5. Object Appearance
• Impression of the reflection of incident light
• Influenced by features on different scales:
  • Macroscopic
  • Mesoscopic
  • Microscopic
• Viewpoint- and illumination-dependent
6. Form of Representation
• Macroscopic scale: 3D shape; explicit representation (e.g. polygon mesh)
• Mesoscopic scale: features individually resolved by human perception; a statistical representation is not accurate, an explicit representation too costly
• Microscopic scale: alignment of microscopic structures; statistical representation (e.g. BRDF)
7. Bidirectional Reflectance Distribution Function
• Opaque, uniform materials; no texture!
• Ratio of outgoing radiance to incident irradiance
• Defined over the local hemisphere
• Depends on:
  • Light direction ωi
  • View direction ωo
8. Bidirectional Reflectance Distribution Function
• Example BRDF, sampled at discrete angles
[Figure: BRDF rendered on a sphere; tabulated BRDF with discrete sample positions (ωi, ωo) on the hemisphere; specular reflection highlighted]
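To make the tabulation concrete, here is a minimal Python sketch of a nearest-neighbor lookup into a BRDF sampled at discrete angles. The array layout and the coarse 5° bins are hypothetical toys, not the authors' code; measured datasets such as [1] use 1° resolution.

```python
import numpy as np

# Toy tabulated BRDF: RGB reflectance per (light, view) direction pair.
# 5-degree bins keep this table small; [1] samples at 1 degree.
STEP = 5                                # degrees per bin
N_THETA, N_PHI = 90 // STEP, 360 // STEP
table = np.zeros((N_THETA, N_PHI, N_THETA, N_PHI, 3), dtype=np.float32)

def brdf_lookup(theta_i, phi_i, theta_o, phi_o):
    """Return the tabulated RGB value for the nearest sampled angles (radians in)."""
    ti = min(int(np.degrees(theta_i)) // STEP, N_THETA - 1)
    pi_ = (int(np.degrees(phi_i)) // STEP) % N_PHI
    to = min(int(np.degrees(theta_o)) // STEP, N_THETA - 1)
    po = (int(np.degrees(phi_o)) // STEP) % N_PHI
    return table[ti, pi_, to, po]
```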
9. Model-driven vs. Data-driven
• Matusik et al. 2003 [1]: measured BRDF ground-truth data
  • 100 real-world materials
  • 1° resolution for view and light directions
  • > 1,000,000 samples
• Ngan et al. 2005 [2]: experimental analysis of BRDF models
10. Model-driven vs. Data-driven
[Figure: red phenolic; measured distribution from [1] vs. fitted model (Cook-Torrance) from [2]]
12. Data-driven Reflectance
• Example: surface with meso-structure
• Results are "ABRDFs" (apparent BRDFs [3])
[Figure: ABRDF sampled over (ωi, ωo), showing specular reflection and retro-reflection; hard to fit with an analytical model; influenced by the neighborhood]
13. Model-driven Reflectance
• Loss of meso-scale depth impression
[Figure: fitted analytical SVBRDF (McAllister 2002 [10]) vs. photograph]
14. Data-driven Reflectance
[Figure: texture only vs. meso-scale approximated by bump mapping vs. data-driven (BTF); images taken from Müller et al. 2005 [7]]
15. Form of Representation
• Macroscopic scale: 3D shape; explicit representation (e.g. polygon mesh)
• Mesoscopic scale: features individually resolved by human perception; a statistical representation is not accurate, an explicit representation too costly → data-driven, image-based representation
• Microscopic scale: alignment of microscopic structures; statistical representation (e.g. BRDF)
The data-driven, image-based representation of the mesoscopic scale is the Bidirectional Texture Function.
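As a sketch in code: the BTF adds the surface position (x, y) to the BRDF's two directions, so a direct tabulation becomes a per-pixel table of apparent BRDFs. Array layout and resolutions below are hypothetical toys, not the authors' data format.

```python
import numpy as np

# Toy tabulated BTF: apparent reflectance per pixel (x, y) and per sampled
# (light, view) direction pair. Real captures use far more directions,
# e.g. 151 x 151 = 22,801 combinations in the Bonn Dome (slide 21).
W, H, N_DIRS = 64, 64, 8
btf = np.zeros((W, H, N_DIRS, N_DIRS, 3), dtype=np.float32)

def btf_lookup(x, y, light_idx, view_idx):
    """ABRDF value at pixel (x, y) for sampled direction indices."""
    return btf[x, y, light_idx, view_idx]
```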
19. First use of BTF: CUReT Database
• 1996–1999 by Dana et al. [4]
• 61 materials
• 205 different view and light directions
• 24-bit RGB images
– Manual placement of the camera
– BTF only partially measured
20. University of Bonn BTF Database
• 2001–2003 by Sarlette et al. [5]
• 6,561 view and light directions
• 36-bit RGB images
• Fully automated
• 12 hrs per acquisition
21. University of Bonn Multiview Dome
• 2004–now by Sarlette et al.
• 22,801 view and light directions
• HDR images
• Fully automated
• No moving parts
• 2 hrs per acquisition
22. First Capture of Complete Objects with BTF
• Furukawa et al. 2002 [6]
• Laser-scanned geometry
• Separate BTF capture
– Very sparse sampling
– Alignment errors
[Figure: photograph vs. rendering]
24. Integrated Acquisition of Objects with BTF
• Müller et al. 2005 [7]
• Uses the University of Bonn Dome
• Dense sampling (22,801 directions), 2 hrs per acquisition
• Measurements integrated in one setup: no registration necessary
• Geometry via Shape-from-Silhouette
25. Integrated Acquisition of Objects with BTF
• Müller et al. 2005
Drawbacks:
– No radiometric calibration: misleading colors; only LDR
– Shape-from-Silhouette: not automatable; coarse geometry, leading to misleading shape and blur due to misalignment
[Example from Havemann et al. 2008 [12]]
26. Integrated HQ Acquisition
• Holroyd et al. 2010 [17]
• Integrated setup
• Geometry with structured light
• 42 view and light directions
• 5 hrs per acquisition
– Sparse sampling, hence model-driven: fits Cook-Torrance BRDFs
[Figure: photograph vs. rendering]
28. Integrated HQ Acquisition with BTF
• Schwartz et al. 2011 [8]
• Uses the University of Bonn Dome, extended with projectors for structured light
• Integrated measurement
• Rapid: 3.7 hrs per acquisition
• Proper calibration and HDR
• Geometry: Weinmann et al. 2011 [11]
[Figure: visual hull [7], [12] vs. laser scan vs. proposed method [11]]
30. Faithfulness
[Figure: photograph (tonemapped HDR) vs. BTF + geometry, Schwartz et al. 2011 [8] (tonemapped HDR)]
31. Polynomial Texture Maps
• Malzbender et al. 2001 [9]: PTM
• Image-based, sampling of light (ωi)
– Incomplete appearance information: the view-dependent part of the reflectance is missing
– For 3D objects: only one fixed viewpoint
[Figure: PTM vs. texture]
32. Faithfulness
[Figure: photograph (tonemapped HDR) vs. BTF + geometry, Schwartz et al. 2011 [8] (tonemapped HDR) vs. polynomial texture map, Malzbender et al. 2001 [9] (single view and LDR!)]
33. Multiview PTMs
• Gunawardane et al. 2009 [18]: PTMs from multiple viewpoints
– No macro-scale geometry; interpolation of views via optical flow
– Limited number of views
• Combining multiple objects is hard: incorrect silhouettes, occlusion, shadows, etc.
• Full light transport (i.e. path tracing) not possible
34. Full Light Transport with BTFs
[Rendered examples]
37. Data Sizes
• Schwartz et al. 2011 [8]: uncompressed BTF ≈ 500 GB per object
• Not feasible even for offline rendering…
38. BTF Compression
• Fitting analytical models: Wu et al. 2011 [15]: SPMM
– Model-driven: the meso-structure is lost
[Figure: BTF vs. SPMM]
39. BTF Compression
• High degree of redundancy in the BTF
• Perform statistical data analysis: find a low-dimensional basis, i.e. learn how to best describe the data
40. Compression using Statistical Analysis
• Organization of the discrete BTF: as a matrix or as a tensor
[Diagram: matrix with rows indexed by pixels (x, y, color) and columns by angles (ωi, ωo); tensor with separate modes for pixels (x, y), ωi and ωo]
Note: color (or wavelength) can be an additional dimension.
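A minimal sketch of the matrix organization, with a deliberately tiny spatial resolution for illustration; 151 directions per hemisphere reproduce the 22,801 view-light combinations of slide 21.

```python
import numpy as np

# Rows: pixels (x, y, color); columns: angle pairs (wi, wo).
# Spatial resolution is a toy value; real captures are far larger.
W, H, C, N_DIRS = 16, 16, 3, 151
btf = np.zeros((W, H, C, N_DIRS, N_DIRS), dtype=np.float32)

M = btf.reshape(W * H * C, N_DIRS * N_DIRS)
print(M.shape)  # (768, 22801)
```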
41. Full Matrix Factorization
• E.g. Liu et al. 2004 [14]: FMF
• Representation is compact and real-time renderable [5]
• SVD of the BTF matrix: M = U S Vᵀ, where the columns of U are the spatial components ("Eigen-Textures"), the columns of V are the angular components ("Eigen-ABRDFs"), and the singular values in S weight the components by importance
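A numpy sketch of the truncated factorization; the rank of 32 follows slide 43. For a real 500 GB BTF one would use an iterative or randomized SVD rather than the dense solver shown here.

```python
import numpy as np

# M: the (pixels x angles) BTF matrix, as in the previous sketch.
M = np.zeros((768, 22801), dtype=np.float32)
k = 32  # number of retained components

U, S, Vt = np.linalg.svd(M, full_matrices=False)
U_k = U[:, :k] * S[:k]   # spatial components ("Eigen-Textures"), pre-scaled by S
V_k = Vt[:k, :]          # angular components ("Eigen-ABRDFs")

M_approx = U_k @ V_k     # rank-k approximation of the BTF matrix
```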
42. Decorrelated Full Matrix Factorization
• Gero Müller 2009 [13]: DFMF
• Insight from image compression (e.g. JPEG): human perception is more sensitive to variation in intensity than in color; also, chromaticity in images exhibits less variation
• Decorrelate the BTF data into luminance and chrominance: BTF_RGB → BTF_Y, BTF_U, BTF_V
• Use fewer components for the chrominance channels U and V
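A sketch of the decorrelation step, assuming the standard BT.601 RGB-to-YUV matrix and hypothetical per-channel ranks; the exact numbers in [13] may differ.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (an assumption; [13] may use another).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def dfmf(btf_rgb, rank_y=32, rank_uv=8):
    """btf_rgb: (pixels, angles, 3) array. Factorize each YUV channel
    separately, with fewer components for the chrominance channels."""
    btf_yuv = btf_rgb @ RGB2YUV.T
    factors = []
    for ch, k in zip(range(3), (rank_y, rank_uv, rank_uv)):
        U, S, Vt = np.linalg.svd(btf_yuv[:, :, ch], full_matrices=False)
        factors.append((U[:, :k] * S[:k], Vt[:k, :]))
    return factors

# Example: factors = dfmf(np.zeros((768, 22801, 3), dtype=np.float32))
```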
43. FMF Rendering
• Uncompressed: ≈ 500 GB; compressed: ≈ 640 MB (32 components)
• Even fits on the GPU
• Random access to the BTF: for an angular combination a = (ωi, ωo) and a pixel p = (x, y),
BTF(a, p) = ⟨ (V₁(a), V₂(a), …), (U₁(p), U₂(p), …) ⟩,
a dot product between the angular components evaluated at a and the spatial components evaluated at p
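In code the lookup is a k-element dot product; this sketch reuses the U_k, V_k factors from the SVD example above. A real GPU shader would additionally interpolate between neighboring sampled angles.

```python
import numpy as np

def btf_eval(U_k, V_k, p, a):
    """BTF(a, p): dot product of the k spatial component values at flat
    pixel index p with the k angular component values at angle-pair index a."""
    return float(U_k[p, :] @ V_k[:, a])
```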
44. Interactive Inspection via GPU Rendering
45. Streaming of BTF over the Internet
• Schwartz et al. 2011 [16]:
• Spatial components ≈ natural images; angular components ≈ low frequency
• Apply additional wavelet compression
[Figure: wavelet-compressed at 0.4 bpp vs. 16 bpp reference]
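A sketch of such wavelet compression on one component texture using PyWavelets; the wavelet choice and keep-ratio are hypothetical, and the actual codec in [16] differs in detail (e.g. it transmits quantized coefficients).

```python
import numpy as np
import pywt  # PyWavelets

def compress_texture(tex, keep=0.05, wavelet="db4", level=3):
    """tex: 2D array. Wavelet-transform, keep only the largest
    coefficients (top 'keep' fraction), and reconstruct."""
    coeffs = pywt.wavedec2(tex, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)
```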
46. Streaming of BTF over the Internet
[Figure: progressive transmission; first renderable version at 0.87 MB; intermediate states at 1 MB and 7 MB; fully transmitted at 46.4 MB; reference: 534 GB]
47. Questions?
Learn more about BTFs for Cultural Heritage on Wednesday [8] and Thursday [16]
48. References
[1] "A Data-Driven Reflectance Model", Matusik W., Pfister H., Brand M. and McMillan L., ACM TOG 22(3), 2003, 759-769
[2] "Experimental Analysis of BRDF Models", Ngan A., Durand F. and Matusik W., Proceedings of EGSR, 2005, 117-126
[3] "Image-based Rendering with Controllable Illumination", Wong T., Heng P., Or S. and Ng W., Proceedings of EGWR, 1997, 13-22
[4] "Reflectance and Texture of Real-World Surfaces", Dana K.J., van Ginneken B., Nayar S.K. and Koenderink J.J., Proceedings of CVPR, 1997, 151-157
[5] "Efficient and Realistic Visualization of Cloth", Sattler M., Sarlette R. and Klein R., Proceedings of EGSR, 2003
[6] "Appearance Based Object Modeling Using Texture Database: Acquisition, Compression and Rendering", Furukawa R., Kawasaki H., Ikeuchi K. and Sakauchi M., Proceedings of EGRW, 2002, 257-266
[7] "Rapid Synchronous Acquisition of Geometry and BTF for Cultural Heritage Artefacts", Müller G., Bendels G.H. and Klein R., Proceedings of VAST, 2005
[8] "Integrated High-Quality Acquisition of Geometry and Appearance for Cultural Heritage", Schwartz C., Weinmann M., Ruiters R. and Klein R., Proceedings of VAST, 2011
[9] "Polynomial Texture Maps", Malzbender T., Gelb D. and Wolters H., Proceedings of SIGGRAPH, 2001
[10] "A Generalized Surface Appearance Representation for Computer Graphics", McAllister D.K., PhD thesis, University of North Carolina at Chapel Hill, 2002
[11] "A Multi-Camera, Multi-Projector Super-Resolution Framework for Structured Light", Weinmann M., Schwartz C., Ruiters R. and Klein R., Proceedings of 3DIMPVT, 2011, 397-404
[12] "The Presentation of Cultural Heritage Models in Epoch", Havemann S., Settgast V., Fellner D., Willems G., Van Gool L., Müller G., Schneider M. and Klein R., EPOCH Conference on Open Digital Cultural Heritage Systems, 2008
[13] "Data-Driven Methods for Compression and Editing of Spatially Varying Appearance", Müller G., PhD thesis, University of Bonn, 2009
[14] "Synthesis and Rendering of Bidirectional Texture Functions on Arbitrary Surfaces", Liu X., Hu Y., Zhang J., Tong X., Guo B. and Shum H.-Y., IEEE Transactions on Visualization and Computer Graphics 10(3), 2004, 278-289
[15] "A Sparse Parametric Mixture Model for BTF Compression, Editing and Rendering", Wu H., Dorsey J. and Rushmeier H., Computer Graphics Forum 30(2), 2011, 465-473
[16] "WebGL-based Streaming and Presentation Framework for Bidirectional Texture Functions", Schwartz C., Ruiters R., Weinmann M. and Klein R., Proceedings of VAST, 2011
[17] "A Coaxial Optical Scanner for Synchronous Acquisition of 3D Geometry and Surface Reflectance", Holroyd M., Lawrence J. and Zickler T., ACM Trans. Graph. 29(4), 2010
[18] "Optimized Image Sampling for View and Light Interpolation", Gunawardane P., Wang O., Scher S., Rickards I., Davis J. and Malzbender T., Proceedings of VAST, 2009
[19] "Principles and Practices of Robust, Photography-based Digital Imaging Techniques for Museums", Mudge M., Schroer C., Earl G., Martinez K., Pagi H., Toler-Franklin C., Rusinkiewicz S., Palma G., Wachowiak M., Ashley M., Matthews N., Noble T. and Dellepiane M., Proceedings of VAST, 2010