The document discusses various graphics rendering techniques including:
1. Deferred rendering using G-buffers to store albedo, metallic, roughness, normal, and height data from a single pass.
2. Shadow mapping for directional, point, and spot lights using different geometries.
3. Post-processing techniques like bloom, depth of field, tone mapping, screen space ambient occlusion (SSAO), and screen space reflections (SSR).
4. Tone mapping HDR renders down to 8-bit output using operators such as the Reinhard curve to preserve detail and contrast.
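The basic Reinhard operator mentioned in point 4 maps a value L to L/(1+L), which can be sketched in a few lines (a minimal illustration; the function and variable names are my own, not from the document):

```python
def reinhard_tonemap(hdr, exposure=1.0):
    """Map HDR channel values into [0, 1) with the basic Reinhard curve L/(1+L)."""
    scaled = [c * exposure for c in hdr]
    return [c / (1.0 + c) for c in scaled]

# Bright HDR values compress toward 1.0; dark values pass nearly unchanged.
pixel = reinhard_tonemap([0.05, 1.0, 16.0])
ldr_8bit = [round(c * 255) for c in pixel]
```

The curve never reaches 1.0, so arbitrarily bright inputs still quantize into the 8-bit range without clipping.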
Google Sky aims to provide an online sky map with data from various surveys like SDSS and DSS. It has ingested over 200 square degrees of SDSS data so far and is working to optimize the processing pipeline to ingest more data. Key challenges include balancing image quality, processing time, and data storage requirements at scale.
DynamicFusion is a method for reconstructing and tracking non-rigid scenes in real-time by extending KinectFusion. It uses a volumetric truncated signed distance function (TSDF) to integrate depth maps from multiple viewpoints into a global reconstruction. Live depth frames are aligned to a dense surface prediction generated by raycasting the TSDF. This closes the loop between mapping and localization for tracking dynamic, non-rigid scenes.
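The TSDF integration described above is typically a per-voxel weighted running average. A minimal sketch of that update, in the style of KinectFusion-type systems (variable names and constants are illustrative, not taken from the paper):

```python
def fuse_tsdf(tsdf, weight, depth_sample, voxel_depth, trunc=0.03, max_w=64.0):
    """Fold one depth observation into a voxel's truncated signed distance
    value using a weighted running average. Distances are in metres;
    trunc is the truncation band around the surface."""
    sdf = depth_sample - voxel_depth        # signed distance along the camera ray
    if sdf < -trunc:
        return tsdf, weight                 # far behind the surface: no update
    d = min(1.0, sdf / trunc)               # truncate to [-1, 1]
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    return new_tsdf, min(weight + 1.0, max_w)
```

Raycasting then extracts the zero crossing of the fused TSDF as the dense surface prediction that new frames are aligned against.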
Paper introduction: "DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real..." (Ken Sakurada)
An introduction to the CVPR 2015 Best Paper Award winner,
"DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time"
Richard A. Newcombe, Dieter Fox, Steven M. Seitz
If you notice any issues with the content, please contact the email address listed on the slides.
Wave-Based Non-Line-of-Sight Imaging Using Fast f–k Migration | SIGGRAPH 2019 (David Lindell)
This document describes wave-based non-line-of-sight (NLOS) imaging using fast frequency-wavenumber (f-k) migration. It presents a new hardware prototype for room-sized NLOS imaging and interactive scanning. It also introduces a fast wave-based image formation model using the light-cone transform that enables reconstruction in under a second, compared to over an hour for previous methods. The document outlines the f-k migration method and compares it to other NLOS imaging approaches.
Tutorial on Object Detection (Faster R-CNN), by Hwa Pyung Kim
The document describes Faster R-CNN, an object detection method that uses a Region Proposal Network (RPN) to generate region proposals from feature maps, pools features from each proposal into a fixed size using RoI pooling, and then classifies and regresses bounding boxes for each proposal using a convolutional network. The RPN outputs objectness scores and bounding box adjustments for anchor boxes sliding over the feature map, and non-maximum suppression is applied to reduce redundant proposals.
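The non-maximum suppression step at the end of that pipeline is easy to sketch (a generic greedy NMS over axis-aligned boxes, not the paper's exact implementation):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thresh=0.7):
    """Greedily keep the highest-scoring box, then drop any remaining
    box that overlaps it by more than iou_thresh; repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```

The same routine is applied both to the RPN's proposals and, per class, to the final detections.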
Hashing has witnessed an increase in popularity over the past few years due to the promise of compact encoding and fast query time. To be effective, hashing methods must maximally preserve the similarity between data points in the underlying binary representation. The current best-performing hashing techniques have utilised supervision. In this paper we propose a two-step iterative scheme, Graph Regularised Hashing (GRH), for incrementally adjusting the positioning of the hashing hypersurfaces to better conform to the supervisory signal: in the first step, the binary bits are regularised using a data similarity graph so that similar data points receive similar bits; in the second step, the regularised hashcodes form targets for a set of binary classifiers which shift the position of each hypersurface so as to separate opposite bits with maximum margin. GRH exhibits superior retrieval accuracy to competing hashing methods.
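The first step, propagating bits over the similarity graph, might be sketched as follows (an illustrative reading of the abstract, not the authors' code; `alpha` and the sign re-binarisation are my assumptions):

```python
import numpy as np

def graph_regularise_bits(bits, adjacency, alpha=0.5):
    """Blend each point's +/-1 hashcode with the average code of its
    neighbours in the similarity graph, then re-binarise by sign, so
    that similar data points receive similar bits."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbour_avg = adjacency @ bits / np.maximum(deg, 1)
    blended = alpha * bits + (1 - alpha) * neighbour_avg
    return np.where(blended >= 0, 1, -1)
```

On a fully connected triple where one point's bit disagrees with its two similar neighbours, the regularisation flips the odd bit into agreement, producing the targets for the max-margin classifiers of step two.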
From Darkness, Light: Computing Cosmological Reionization (CosmoAIMS Bassett)
1) Reionization occurred between redshifts of roughly 10 and 6, beginning a few hundred million years after the Big Bang and ending about one billion years after it.
2) Observations of the CMB and galaxies at z>6 provide constraints but questions remain about the sources and topology of reionization.
3) Cosmological simulations of reionization must model structure formation, radiation transport, and non-equilibrium chemistry and physics to help address open questions.
Jonathan Lefman presents his work on superresolution chemical microscopy.
This document discusses several microscopy techniques including structured illumination fluorescence microscopy, time-of-flight secondary ion mass spectrometry, coherent anti-Stokes Raman scattering microscopy, photoactivated localization microscopy, stimulated emission depletion microscopy, and 4Pi microscopy. It focuses on describing improvements made to structured illumination fluorescence microscopy including parallel GPU processing to accelerate image analysis and a new automated imaging framework. Time-of-flight secondary ion mass spectrometry imaging is discussed with applications to iterative clustering and classification analysis.
Evaluation of geometrical parameters of buildings from SAR images (Federico Ariu)
This document discusses evaluating geometrical parameters of buildings from SAR images. It presents an algorithm developed to retrieve building orientation, height, and double scattering lines. SAR imaging pros and cons are explained. Electromagnetic scattering models for single and double scattering are described. Simulations are shown for 512x512 and ERS-1 C sensors to test the algorithm under different building orientations. The algorithm uses linear regression to determine building orientation from double scattering lines. Conclusions state the algorithm can accurately retrieve parameters given sufficient dots on scattering lines but the range of evaluable angles is limited.
Geometry shaders allow processing and generation of additional geometry in the graphics pipeline, and can be used for optimizations like mesh refinement, level of detail adjustments based on distance from the camera, and culling of triangles outside the view frustum to improve performance. The document describes a sample application that combines these optimizations using geometry shaders, first refining geometry, then dynamically adjusting level of detail, and culling outside the frustum for an over 11x improvement in framerate. Geometry shaders require specific extensions and hardware support from modern graphics cards.
Generation and weighting of 3D point correspondences for improved registratio... (Kourosh Khoshelham)
1. The document discusses methods for improving registration of RGB-D frames by generating accurate 3D point correspondences from 2D keypoints and weighting correspondences based on depth uncertainty.
2. It describes estimating the relative orientation of RGB and depth cameras, searching along epipolar lines to find corresponding 3D points, and assigning lower weights to points with higher random depth errors.
3. The results show that weighted registration outperforms non-weighted, producing more accurate trajectories with lower closing distances and angles between start and end frames.
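The depth-dependent weighting in point 2 is typically an inverse-variance weight built from a depth-noise model; random depth error is commonly reported to grow roughly quadratically with range for Kinect-class sensors. A hedged sketch (the functional form and constants are illustrative, not the paper's):

```python
def depth_weight(z, sigma0=0.0012, k=0.0019):
    """Inverse-variance weight for a depth measurement at range z (metres),
    assuming the random depth error grows roughly quadratically with range.
    sigma0 and k are illustrative constants, not from the paper."""
    sigma = sigma0 + k * (z - 0.4) ** 2     # modelled depth standard deviation
    return 1.0 / sigma ** 2
```

Correspondences measured near the sensor thus dominate the registration, which is what drives the reported accuracy gain of weighted over non-weighted registration.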
Urban 3D Semantic Modelling Using Stereo Vision, ICRA 2013 (Sunando Sengupta)
1) Given a sequence of stereo images, the pipeline generates a dense 3D semantic model of the urban environment.
2) Depth maps are generated from stereo images and fused into a volumetric representation using camera poses from feature tracking.
3) Semantic segmentation of street view images is done using a CRF model, and labels are projected onto the 3D model faces to generate the semantic model.
4) The semantic model is evaluated by projecting it back to the input images and calculating metrics like recall and intersection over union. Future work includes real-time implementation and combining image and geometric context.
Tennis video shot classification based on support vectors (es712)
This document proposes a tennis video shot classification method based on support vector machines. It extracts edge-distribution and optical-flow features for shot classification; the optical-flow features include the foreground tracked-point ratio and the mean motion-vector length. These seven features are input to a multi-class SVM classifier to categorize shots as long shots, audience shots, close shots, or close-up shots. An experiment applies this approach to tennis videos and achieves higher accuracy than methods based solely on color distribution.
The document discusses object detection in aerial images using rotated bounding boxes (RBOX). It describes how traditional horizontal bounding boxes (HBOX) are limited for aerial images and introduces RBOX defined by center point, height, angle, and width. It also presents a new "Gliding Vertex" method to calculate RBOX that improves stability over using angle alone. The document outlines the dataset features, preprocessing methods including splitting large images and oversampling rare classes, and an ensemble Faster R-CNN model with RBOX and Feature Pyramid Network that achieved a mAP score of 0.750.
This document discusses techniques for building 3D worlds from topographic data. It covers topics like tessellating terrain meshes using triangle strips, implementing level of detail using a quadtree structure, projecting topographic data onto a sphere, calculating normals, adding water and atmosphere effects, and using fractal noise to add detail. The goal is to efficiently render large planetary datasets with high visual quality while maintaining real-time performance.
A computationally efficient method for sequential MAP-MRF cloud detection (Beniamino Murgante)
A computationally efficient method for sequential MAP-MRF cloud detection
Paolo Addesso, Roberto Conte, Maurizio Longo, Rocco Restaino, Gemine Vivone
- University of Salerno
An Analysis of Convolution for Inference (Intel Nervana)
At the ICML 2016 workshop on "On-device Intelligence", Scott Gray surveyed various ways of computing convolution.
Mask R-CNN extends Faster R-CNN by adding a branch for predicting segmentation masks in parallel with bounding box recognition and classification. It introduces a new layer called RoIAlign to address misalignment issues in the RoIPool layer of Faster R-CNN. RoIAlign improves mask accuracy by 10-50% by removing quantization and properly aligning extracted features. Mask R-CNN runs at 5fps with only a small overhead compared to Faster R-CNN.
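The key difference between RoIPool and RoIAlign is that the latter samples the feature map at fractional coordinates via bilinear interpolation instead of rounding them to integers. A self-contained sketch of that sampling step (illustrative, not the paper's implementation):

```python
def bilinear_sample(feat, y, x):
    """Sample a 2D feature map at a fractional (y, x) location by
    bilinear interpolation of the four surrounding cells, as RoIAlign
    does in place of RoIPool's coordinate quantization."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(feat) - 1)
    x1 = min(x0 + 1, len(feat[0]) - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0][x0] * (1 - dy) * (1 - dx)
            + feat[y0][x1] * (1 - dy) * dx
            + feat[y1][x0] * dy * (1 - dx)
            + feat[y1][x1] * dy * dx)
```

Because the interpolation is exact in the box coordinates, extracted features stay aligned with the input pixels, which is what drives the reported mask-accuracy gain.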
(Paper Review) 3D shape reconstruction from sketches via multi view convolutio... (MYEONGGYU LEE)
review date: 2019/03/20 (by Meyong-Gyu.LEE @Soongsil Univ.)
Korean review of '3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks'(CVPR 2017)
Build Your Own 3D Scanner:
Conclusion
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
1) The document describes a real-time GPU implementation of visual smoke simulation using the incompressible Navier-Stokes equations.
2) Key steps in the simulation algorithm include adding forces, advecting velocity and scalar fields, solving for pressure, projecting the velocity field, and applying boundary conditions.
3) Volume rendering is achieved by slicing the 3D grid from the viewer's perspective and compositing the slices using the "under" operator, implementing shadows using half-angle slicing.
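The advection in step 2 is usually done semi-Lagrangian style: trace each grid cell backwards through the velocity field and sample the old field at the source point. A minimal CPU sketch (nearest-neighbour sampling for brevity; real solvers interpolate bilinearly, and the GPU version runs this per texel):

```python
def advect(field, vx, vy, dt, n):
    """Semi-Lagrangian advection of a scalar field on an n x n grid:
    each cell takes the value found by stepping backwards along the
    velocity field, clamped to the grid boundary."""
    new = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            # backtrace the particle that lands at (i, j)
            src_x = min(max(i - dt * vx[j][i], 0), n - 1)
            src_y = min(max(j - dt * vy[j][i], 0), n - 1)
            new[j][i] = field[int(round(src_y))][int(round(src_x))]
    return new
```

Backtracing rather than forward-pushing keeps the scheme unconditionally stable, which is why it suits a real-time solver.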
- R-CNN was the first CNN model to achieve high performance in object detection. It used a multi-stage pipeline involving region proposals, feature extraction via CNN, and SVM classification. It was slow due to computing CNN features for each region individually.
- Fast R-CNN improved on R-CNN by introducing a ROI pooling layer to share computation and enabling end-to-end training. However, region proposals were still generated externally, slowing down detection.
- Faster R-CNN addressed this by introducing a Region Proposal Network to generate proposals, allowing the entire model to be trained end-to-end. This led to faster and more accurate detection compared to previous models.
- YOLO
RasterFrames: Enabling Global-Scale Geospatial Machine Learning (Astraea, Inc.)
RasterFrames™, a proposed LocationTech project, brings the power of Spark SQL and Spark ML to the analysis of global-scale geospatial-temporal raster data. Employing the rich geospatial primitives of LocationTech GeoTrellis and GeoMesa, RasterFrames provides scientists, data scientists and software developers with a unified data and compute model for building image processing pipelines for ETL, data-product creation, statistical analysis, supervised & unsupervised machine learning, and deep learning. Data scientists particularly benefit from the DataFrame-centric entrypoint into big data geospatial analytics.
This talk will introduce RasterFrames, explaining the need it fulfills, the capabilities it provides, and context for determining if RasterFrames is right for the problems you're trying to solve.
By Simeon Fitch
Soft Shadow Maps for Linear Lights is a technique for rendering soft shadows using a small number of samples from linear light sources. It uses traditional shadow maps to render umbra and lit regions, and linearly interpolates visibility for penumbra regions based on a visibility map. This technique is suitable for real-time rendering as it requires minimal overhead and only needs to recompute the shadow maps when the light or scene changes. It produces high quality soft shadows using as few as two light source samples.
Point cloud mesh investigation report (Lihang Li)
This document discusses surface reconstruction methods for point clouds captured using Kinect. It describes meshing methods used in RTABMAP and RGBDMapping including greedy projection triangulation and moving least squares smoothing. Popular surface reconstruction pipelines generally involve subsampling, normal estimation, surface reconstruction using methods like Poisson surface reconstruction, and recovering original colors. Key steps are filtering noise, estimating surface normals, reconstructing implicit surfaces, and transferring attributes back to original points.
BEFLIX is an embedded domain-specific language for generating computer animated films. BEFLIX was created by Ken Knowlton in 1963 for the IBM 7090 mainframe computer with a Stromberg-Carlson SC2040 microfilm recorder for output. Ken Knowlton created BEFLIX while working at Bell Laboratories and used it to make a number of artistic, educational and engineering films.
Recovering Inner Slices of Translucent Objects by Multi-frequency Illuminatio... (Kenichiro Tanaka)
This document describes a method for recovering the inner layers of translucent objects using multi-frequency illumination. The method involves:
1. Projecting multiple high-frequency patterns with different pitches onto the object.
2. Extracting the "direct components" or non-blurred reflections from each layer based on the blurriness of the reflections.
3. Formulating the problem as an optimization that estimates the direct components for each layer to recover a clear image of the inner layers.
The method is tested on layered translucent papers and is proposed to have applications in recovering hidden information in oil paintings, ancient documents, and medical and forensic imaging of layered surfaces.
This is a slide deck for the IEEE International Conference on Computational Photography (ICCP) 2016 at Northwestern University.
See for details: http://omilab.naist.jp/project/LFseg/
The advancement of technologies in the last decade is no surprise to anyone. LiDAR, although new to some, has been around for some time. Haven't heard of it? What are you missing?
This presentation will briefly explain what LiDAR is and how it is gathered, but the primary focus will be on how the Energy Sector can benefit from its use. We will show several example project datasets, tools developed and available to industry, and how this growing technology can change how you look at your projects.
Additionally, we will show how involving all members of your exploration team in a LiDAR planning process will save time and money while increasing safety and reducing environmental impact. Your well location may begin with Geology calculating back to a surface location, but the final location can affect many other decisions your company may make; good planning makes for good decisions.
Chris Martin is a Survey Technologist with 17 years of experience in the surveying profession. His field survey career has involved major projects with TransCanada Pipelines and Alliance Pipelines along with many conventional oil and gas programs. For the last 4 years he has primarily focussed on business development and is currently a Marketing Representative for Can-Am Geomatics.
Light Detection And Ranging (useless in slideshare, must be downloaded to pow... – Nina Tvenge
LIDAR is an optical remote sensing technology that uses lasers to measure distances. It works by measuring the time delay between transmitting a laser pulse and receiving the reflected signal, which provides highly detailed 3D mapping. LIDAR has a wide variety of uses including archaeology, meteorology, geology, biology, military applications, vehicles, imaging, and 3D mapping.
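The time-delay principle is simple to state in code. A minimal sketch of pulsed-lidar ranging (the factor of 2 accounts for the out-and-back path):

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_range_m(round_trip_time_s):
    """Pulsed lidar: range = c * t / 2, since the pulse travels to the
    target and back before it is detected."""
    return C * round_trip_time_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m of range.
print(round(pulse_range_m(1e-6), 1))  # 149.9
```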
Pop Art was an art movement that began in the late 1950s and used imagery from popular culture and everyday life. Pop artists blurred the lines between fine art and commercial art by using images and styles from advertisements, consumer goods, celebrities and other mass media sources. Andy Warhol was one of the most famous Pop Artists, known for works like Campbell's Soup Cans that used repetition and appropriated popular images. Pop Art challenged definitions of art by treating popular objects as art and reflecting the culture of 1960s America through use of new materials, technologies and methods of production.
Track 4 session 8 - ST dev con 2016 - time of flight – ST_World
The document summarizes FlightSenseTM time-of-flight proximity and ranging sensor technology from STMicroelectronics. It describes the principle of direct time-of-flight measurement using light pulses and single photon avalanche diodes. ST provides integrated proximity and ranging sensor modules, development tools, and is a leading global supplier of time-of-flight technology with products shipping in over 100 million devices. Applications include ambient light sensing, gesture recognition, robotics, drones, laptop presence detection, and more.
Lidar (also written LIDAR, LiDAR or LADAR) is a remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light. Although thought by some to be an acronym of Light Detection And Ranging,[1] the term lidar was actually created as a portmanteau of "light" and "radar".[2][3] Lidar is popularly used as a technology to make high-resolution maps, with applications in geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, remote sensing, atmospheric physics,[4] airborne laser swath mapping (ALSM), laser altimetry, and contour mapping.
(DL Reading Group) Matching Networks for One Shot Learning – Masahiro Suzuki
1. Matching Networks is a neural network architecture proposed by DeepMind for one-shot learning.
2. The network learns to classify novel examples by comparing them to a small support set of examples, using an attention mechanism to focus on the most relevant support examples.
3. The network is trained using a meta-learning approach, where it learns to learn from small support sets to classify novel examples from classes not seen during training.
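The attention mechanism in point 2 can be sketched with precomputed embeddings: cosine similarities to the support set are softmaxed into weights, and the prediction is the attention-weighted vote over support labels. A toy NumPy illustration (the real model learns the embedding functions with deep networks; the names and vectors here are hypothetical):

```python
import numpy as np

def matching_predict(query, support_x, support_y, n_classes):
    """Attention over the support set: cosine similarity -> softmax
    weights -> weighted sum of one-hot support labels."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(query, s) for s in support_x])
    w = np.exp(sims) / np.exp(sims).sum()       # softmax attention weights
    onehot = np.eye(n_classes)[support_y]       # one-hot support labels
    return int(np.argmax(w @ onehot))           # most-attended class

support_x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
support_y = np.array([0, 0, 1])
print(matching_predict(np.array([0.95, 0.05]), support_x, support_y, 2))  # 0
print(matching_predict(np.array([0.1, 0.9]), support_x, support_y, 2))    # 1
```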
Diamond West 3D Laser Scanning Presentation – dustinwoomer
3D Survey and Visualization
Diamond West utilizes LIDAR (Light Detection and Ranging) 3D scanning technology to digitally reconstruct an existing environment in a 3D digital model with sub-centimeter accuracy.
Digitally scanned data can be modeled in 3D or converted to 2D drawings (plan view, elevation cross section, and topography) for export to any standard CADD platform. Scanned information is linked together to form a comprehensive digital image.
Applications include:
Architectural as-builts, historic preservation/archive, structural steel mapping/cataloging, conceptual design and interference checking, fabrication and construction inspection, manufacturing and reverse engineering, topographic mapping, accessibility renovations, civil traffic and utility planning, as-builts for plant facilities, movie industry; the list goes on. If you can see it, you can scan it.
Advantage to using LIDAR 3D Scanning Technology:
Reduces field work, reduces risk of liability for field crews, competitive cost to conventional surveying, dramatically increases available information without multiple site visits, creates (sub-centimeter) accurate 3D models, remote sensing/minimizes need for access to structures.
High-definition 3D scanning provides: more accurate base mapping, detailed information along structural facades from the ground to the sky, documentation of not only surface conditions but also building conditions. Due to the potential for design and construction abutting building façades, the sub-centimeter accuracy will be a vital design and construction quality control tool.
The scanned 3D model can also be used in the future as a visualization tool by generating “fly-by” animated movies and still frames from any requested vantage point.
In summary, the use of sub-centimeter accurate 3D survey data will provide the design team with the accuracy needed to ensure proper quality control for design and construction activities. This technology will also provide the community and decision-makers with an extremely useful visual aid tool in evaluating the proposed design against existing conditions.
This document discusses time-of-flight cameras and range imaging. It explains that time-of-flight cameras work by illuminating a scene with amplitude modulated light and measuring the phase difference between the transmitted and reflected light, which encodes distance information. It describes the correlation process used to measure distance at each pixel and discusses sources of error such as multipath interference and temperature drift. Real data examples from a 120x160 sensor are shown. Applications discussed include uses in the Kinect, 3D scanning, and potential future uses in mobile phones.
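The phase-to-distance relation described above is d = c·φ / (4π·f_mod), with an unambiguous range of c / (2·f_mod). A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad, f_mod_hz):
    """Amplitude-modulated CW time-of-flight: d = c * phi / (4 * pi * f_mod).
    Distances are unambiguous only up to c / (2 * f_mod), after which the
    phase wraps around."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# At 20 MHz modulation the ambiguity range is c/(2f) ~ 7.5 m; a phase of pi
# corresponds to half of that.
print(round(phase_to_distance(math.pi, 20e6), 3))  # 3.747
```

This is why real sensors often combine several modulation frequencies: a second frequency disambiguates phase wraps, one of the multipath/aliasing issues the document mentions.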
The document discusses Bayesian neural networks and related topics. It covers Bayesian neural networks, stochastic neural networks, variational autoencoders, and modeling prediction uncertainty in neural networks. Key points include using Bayesian techniques like MCMC and variational inference to place distributions over the weights of neural networks, modeling both model parameters and predictions as distributions, and how this allows capturing uncertainty in the network's predictions.
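The core idea, distributions over weights instead of point estimates, can be shown with a one-weight toy model: sampling weights from an (assumed) approximate posterior turns a single prediction into a predictive mean plus an uncertainty estimate. A minimal Monte Carlo sketch, not MCMC or variational inference proper:

```python
import numpy as np

rng = np.random.default_rng(1)

def predictive(x, w_mean, w_std, n_samples=2000):
    """Treat a single weight as a distribution N(w_mean, w_std^2) rather
    than a point estimate; Monte Carlo over weight samples yields a
    predictive mean and an uncertainty estimate."""
    ws = rng.normal(w_mean, w_std, n_samples)
    preds = ws * x                   # a one-weight 'network': y = w * x
    return preds.mean(), preds.std()

mu, sigma = predictive(x=2.0, w_mean=1.5, w_std=0.1)
# Mean prediction near 3.0; uncertainty scales with |x| (near 0.2 here).
print(mu, sigma)
```

The same mechanics, many weights, a learned posterior, and samples drawn at prediction time, underlie the variational and MCMC approaches the document covers.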
Sensors for Biometry and Recognition - 2016 Report – Yole Developpement
In a global biometric hardware market worth over $4B, traditional fingerprint/palm sensors still monopolize 95% of the market, but face and iris sensors lie in wait.
Fingerprint technology impressively dominates the market - but changes are expected
Due to historical reasons, like the criminal fingerprint database established by the FBI with ink-based techniques, fingerprint sensing is by far the most common biometric technology currently used. We estimate that the annual revenues generated by fingerprint-based solutions are currently $4.25B, representing 95% of the hardware market. Fingerprint sensing dominates technologies like iris, face, palm or voice recognition because it meets almost all the requirements of a "perfect" biometric recognition technology: it is robust, stable and repeatable, time-invariant, difficult to spoof, has a distinctive meaning, is "unique" within a population, accessible, easy to use and acceptably non-intrusive. No other biometric technology yet fulfills those requirements as completely. Hardware revenues generated by the other biometric technologies are relatively low, estimated at $250M, mostly from iris and face recognition.

This report identifies the players in the biometric hardware market and provides insights into technology, market and trend evolution. The fingerprint market has experienced an incredible volume increase in the consumer market with the adoption of active-capacitance detection on a growing number of smartphones, answering the demand for online identification, mobile payment and unlocking applications. Diverse technologies, such as optical, thermal, and Piezoelectric Micromachined Ultrasonic Transducers (PMUT), are also trying to penetrate the consumer market, but remain very limited. The industrial and homeland/security markets still widely use optical technology. Other biometric solutions like iris, face or voice recognition have been introduced, but with limited impact: their performance has not yet reached the requirements for cost, reliability, false rejection rate and false acceptance rate needed to significantly penetrate the consumer, industrial or homeland/security markets.
More information on that report: http://www.i-micronews.com/report/product/sensors-for-biometry-and-recognition.html
The document discusses new capabilities of HTML5, claiming it is 30% faster on mobile and supports new web apps that run on all browsers and devices. It notes that HTML5 supports Apple iOS, makes copying and pasting in the browser easier, provides mobile distribution, and works on all smart devices. It also cites figures that 65% of mobile workers use tablets, with 27% using them for work purposes.
NIPS Reading Group 2013: One-shot learning by inverting a compositional causal process – nozyh
This document summarizes research on one-shot learning using a hierarchical Bayesian program learning (HBPL) model. The key points are:
- The HBPL model achieved human-level performance on one-shot classification and generation tasks, outperforming other deep learning models.
- It used an Omniglot dataset of handwritten characters from different alphabets to learn concepts from a single example, as well as produce new examples.
- The model learns "motor programs" that represent common patterns in how symbols are drawn, from a library of primitive strokes. This allows it to generalize concepts from limited data.
This document describes a multi-camera time-of-flight imaging system that uses multiple synchronized cameras and light sources. It allows control over modulation signals to capture dynamic scenes and extract depth, velocity and non-line-of-sight motion information. The system architecture uses a direct digital synthesis chip to generate programmable modulation signals, synchronized signal conditioning circuits and a real-time controller to coordinate image capture across multiple time-of-flight cameras. It aims to enable applications like phased array imaging, simultaneous Doppler velocity capture and detecting motion behind scattering media.
This document presents a method for downsampling point cloud data to enable real-time scan matching for autonomous vehicles. It introduces two new downsampling algorithms: Ring Random Filter and Distance Voxel Grid Filter. It evaluates the algorithms based on execution time of scan matching, downsampling time, and relative error compared to raw point cloud data from tests in suburban and city environments. The results show the downsampling enables real-time scan matching with relative errors generally less than 10 cm.
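The paper's filters are variants of standard point cloud downsampling. As a baseline, a plain voxel-grid filter buckets points into cubic cells and keeps one centroid per occupied cell. A minimal sketch (not the paper's Ring Random or Distance Voxel Grid filters):

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Downsample by bucketing points into cubic voxels and replacing
    each occupied voxel with the centroid of its points."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

pts = np.array([[0.01, 0.01, 0.0], [0.02, 0.03, 0.0],   # same voxel
                [1.50, 0.00, 0.0]])                     # different voxel
out = voxel_grid_filter(pts, voxel_size=0.1)
print(len(out))  # 2
```

The paper's contribution is choosing *which* points to keep (by LiDAR ring, or by distance-adaptive voxel size) so that scan matching stays accurate while meeting real-time budgets.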
This document discusses 3D sensing and reconstruction techniques including camera models, stereo triangulation, and structured light methods. It covers perspective projection, calibrating camera intrinsics and extrinsics, finding correspondences between stereo images using features or correlation, and reconstructing 3D points through triangulation. The key steps are calibrating cameras, identifying matching points in stereo images, and computing depth from disparity based on each camera's focal length and baseline distance between lenses.
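The final step, depth from disparity, reduces to Z = f·B/d for rectified stereo, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch with made-up numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d. Larger disparity
    means the point is closer to the cameras."""
    return f_px * baseline_m / disparity_px

# f = 700 px, baseline = 0.12 m, disparity = 14 px  ->  Z = 6 m
print(depth_from_disparity(700.0, 0.12, 14.0))  # 6.0
```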
For my thesis, I developed and compared a sequential CPU and parallel GPU implementation of a ray tracer written in C++ and CUDA respectively. Here are the presentation slides from my thesis defense.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-benosman
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Ryad B. Benosman, Professor at the University of Pittsburgh Medical Center, Carnegie Mellon University and Sorbonne Université, presents the "What is Neuromorphic Event-based Computer Vision? Sensors, Theory and Applications" tutorial at the May 2018 Embedded Vision Summit.
In this presentation, Benosman introduces neuromorphic, event-based approaches for image sensing and processing. State-of-the-art image sensors suffer from severe limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of "snapshots" recorded at discrete points in time, hence time-quantized at a predetermined frame rate, resulting in limited temporal resolution, low dynamic range and a high degree of redundancy in the acquired data. Nature suggests a different approach: Biological vision systems are driven and controlled by events happening within the scene in view, and not – like conventional image sensors – by artificially created timing and control signals that have no relation to the source of the visual information.
Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer imposed externally on an array of pixels but rather the decision making is transferred to each individual pixel, which handles its own information individually. Benosman introduces the fundamentals underlying such bio-inspired, event-based image sensing and processing approaches, and explores their strengths and weaknesses. He shows that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems and to establish new benchmarks in terms of data compression, dynamic range, temporal resolution and power efficiency in applications such as 3D vision, object tracking, motor control and visual feedback loops, in real-time.
A Fast Implicit Gaussian Curvature Filter – Yuanhao Gong
Minimizing Gaussian curvature is computationally expensive in the traditional way. We present a new method that can minimize Gaussian curvature without computing it explicitly. Our filter is 100 times faster than traditional solvers.
This document discusses high-speed, high-resolution inspection of flat panel displays. It introduces a distributed image sensor computing system (DISCS) that uses GPUs for parallel processing to enable fast, in-line inspection. The DISCS uses dark-field illumination and line scan cameras for 2D defect detection. Algorithms like binarization and edge detection are implemented on the GPUs. Experimental results on touch panels and glass show inspection of megapixel images in under 3 seconds. Stereoscopic line scanning and moire topography techniques are discussed for 3D surface profiling with nanometer resolution and micrometer depth detection. Phase shifting interferometry is used to extract height maps. The system is designed for industrial inspection and could integrate
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes – Douglas Lanman
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
This document contains questions and answers related to GPS surveying techniques. It includes 15 multiple choice questions, 10 true/false statements, and 15 short answer questions about topics such as pseudo-ranges, satellite clock errors, sources of distance calculation errors in GPS, factors to consider when selecting a GPS survey method, real-time kinematic surveying, and types of GPS errors.
In this talk, we present results from the real-time raytracing research done at SEED, a cross-disciplinary team working on cutting-edge, future graphics technologies and creative experiences at Electronic Arts. We explain in detail several techniques from “PICA PICA”, a real-time raytracing experiment featuring a mini-game for self-learning AI agents in a procedurally-assembled world. The approaches presented here are intended to inspire developers and provide a glimpse of a future where real-time raytracing powers the creative experiences of tomorrow.
From STC (Stereo Camera onboard the BepiColombo ESA Mission) to Blender – Emanuele Simioni
- The document discusses the STC (Stereo Camera) instrument onboard the BepiColombo mission to Mercury.
- STC will provide global stereo mapping and digital terrain model reconstruction of Mercury's surface with a spatial resolution of 58.2 m/pixel at periherm.
- The instrument uses a push-frame design with two optical units to achieve a stereo angle of ±20° to provide stereo imagery for photogrammetric analysis.
- Calibration and validation efforts show the instrument is capable of generating DTMs with a vertical accuracy of 73 micrometers from test data, corresponding to 36 meters on Mercury's surface.
This document proposes using dual back-to-back Kinect sensors mounted on a robot to capture a 3D model of a large indoor scene. Traditionally, one Kinect is slid across an area, but this requires prominent features and careful handling. The dual Kinect setup requires calibrating the relative pose between the sensors. Since they do not share a view, traditional calibration is not possible. The authors place a dual-face checkerboard on top with a mirror to enable each Kinect to view the same calibration object. This allows estimating the pose between the sensors using a mirror-based algorithm. After capturing local models, the two Kinect views can be merged into a combined 3D model with a larger field of view.
This document describes research on 3D reconstruction of solder balls on printed circuit boards (PCBs). 360 X-ray images of a PCB were taken every 2.81 degrees and reconstructed using the simultaneous algebraic reconstruction technique (SART) and iterative algorithms to generate a 3D model. Unity software was used to build a 3D visualization with zoom and rotation capabilities. Google Cardboard VR was used to create a mobile application to view the 3D model. The reconstruction aims to detect defects in solder balls without damaging the PCBs.
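SART itself can be illustrated on a toy system: each iteration back-projects the ray residuals, normalized by per-ray and per-voxel weights. A minimal dense-matrix sketch (real CT reconstruction uses sparse projection operators and many more rays; the sizes here are illustrative only):

```python
import numpy as np

def sart(A, b, n_iters=200, lam=0.5):
    """Simultaneous Algebraic Reconstruction Technique: update all rays
    at once, normalizing residuals by row sums (ray path lengths) and
    column sums (per-voxel ray coverage)."""
    x = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1)
    col_sum = A.sum(axis=0)
    for _ in range(n_iters):
        resid = (b - A @ x) / row_sum          # per-ray normalized residual
        x = x + lam * (A.T @ resid) / col_sum  # relaxed back-projection
    return x

# Tiny 2-voxel "volume" probed by three rays.
A = np.array([[1., 0.], [0., 1.], [1., 1.]])
x_true = np.array([2., 3.])
x = sart(A, A @ x_true)
print(np.allclose(x, x_true, atol=1e-3))  # True
```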
This document analyzes KinectFusion, a real-time 3D reconstruction system using a moving depth camera. It introduces SLAMBench, a benchmarking framework for KinectFusion. The document describes the KinectFusion pipeline including preprocessing, tracking, integration and raycasting steps. It evaluates several RGB-D datasets and identifies the Washington RGB-D Scenes dataset as most suitable. It notes drawbacks in KinectFusion like noisy trajectories and inconsistent models. Future work proposed is reducing tracking noise using a Kalman filter.
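The integration step of the KinectFusion pipeline folds each new depth observation into a running weighted average of truncated signed distances per voxel. A minimal per-voxel sketch (toy values; the real system performs this over a full 3D volume on the GPU):

```python
import numpy as np

def integrate_tsdf(tsdf, weight, sdf_obs, trunc=0.1, max_w=50.0):
    """KinectFusion-style update: truncate and normalize the observed
    signed distance, then fold it into the running weighted average."""
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)          # truncated, normalized SDF
    merged = (tsdf * weight + d) / (weight + 1.0)    # weighted running average
    return merged, np.minimum(weight + 1.0, max_w)   # cap weights for drift

tsdf = np.zeros(3)
w = np.zeros(3)
for obs in ([0.05, 0.0, -0.2], [0.05, 0.0, -0.2]):   # two identical depth frames
    tsdf, w = integrate_tsdf(tsdf, w, np.array(obs))
print(np.round(tsdf, 2))  # roughly [0.5, 0.0, -1.0]
```

Capping the weight (`max_w`) keeps the map responsive to change; the raycasting step then extracts the zero-crossing surface from the fused TSDF for tracking.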
Data Processing Using DubaiSat Satellite Imagery for Disaster Monitoring (Cas... – NopphawanTamkuan
This content covers the specifications of DubaiSat (a UAE satellite), background on the Hokkaido earthquake, data processing, pre-processing, pan-sharpening, natural color and false color composites, NDVI calculation, and image classification by clustering for damaged-area and landslide detection.
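The NDVI calculation mentioned above is (NIR − Red) / (NIR + Red); healthy vegetation reflects strongly in the near-infrared and absorbs red, so it scores high. A minimal sketch:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red): high for vegetation, near zero
    or negative for bare soil and water. eps guards against division by
    zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

nir = np.array([0.5, 0.3, 0.1])   # vegetated, bare, water-like pixels
red = np.array([0.1, 0.3, 0.5])
print(np.round(ndvi(nir, red), 2))  # roughly [0.67, 0.0, -0.67]
```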
This document describes an indoor navigation Android application that uses Wi-Fi fingerprinting for localization and a routing algorithm to navigate between nodes on a map. It discusses challenges with GPS indoors and explores localization techniques including Wi-Fi, Bluetooth, and sensors. The application utilizes a SQLite database of Wi-Fi fingerprints mapped to locations, calculates the user's position by comparing live readings to stored values, and determines displacement using accelerometer and gyroscope data. It draws the user's position on a map and calculates a path between nodes using numbering to navigate between points of interest selected on the interface.
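The fingerprint-matching step can be sketched as nearest-neighbor search over stored RSSI vectors: the live reading is compared against the database and the positions of the k closest fingerprints are averaged. A toy NumPy illustration (all values hypothetical; a real app reads RSSI from the Wi-Fi stack and stores fingerprints in SQLite):

```python
import numpy as np

def locate(live_rssi, fingerprints, positions, k=2):
    """Wi-Fi fingerprinting: match a live RSSI vector against the stored
    database and average the positions of the k nearest fingerprints."""
    d = np.linalg.norm(fingerprints - live_rssi, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

# Two access points; RSSI in dBm at three surveyed positions.
fingerprints = np.array([[-40., -70.], [-42., -68.], [-80., -35.]])
positions = np.array([[0., 0.], [1., 0.], [10., 5.]])
print(locate(np.array([-41., -69.]), fingerprints, positions))  # near [0.5, 0.0]
```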
This document summarizes techniques for rendering water and frozen surfaces in CryEngine 2. It discusses procedural shaders for simulating water waves, caustics, god rays, shore foam, and frozen surface effects. It also covers techniques for water reflection, refraction, physics interaction, and camera interaction with water surfaces. Optimization strategies are discussed for minimizing draw calls and rendering costs.
Dynamic shear stress evaluation on micro turning tool using photoelasticity – Soumen Mandal
The document presents an experimental method for evaluating shear stresses on a micro-turning tool using photoelasticity. A micro-turning tool was coated with a birefringent material and subjected to micro-turning of brass while capturing high-speed images. A custom-designed grey-field polariscope was used to obtain images under four analyzer orientations, which were processed to generate shear stress maps of the tool dynamically. The method allows monitoring of tool stresses during operation to prevent breakage and ensure desired performance.
Similar to MIRU2016 invited talk - Recovering Transparent Shape from Time-of-Flight Distortion (CVPR 2016)
Immersive Learning That Works: Research Grounding and Paths Forward – Leonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Mechanisms and Applications of Antiviral Neutralizing Antibodies - Creative B... – Creative-Biolabs
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
Discovery of An Apparent Red, High-Velocity Type Ia Supernova at z = 2.9 wi... – Sérgio Sacani
We present the JWST discovery of SN 2023adsy, a transient object located in the host galaxy JADES-GS+53.13485−27.82088 with a host spectroscopic redshift of 2.903 ± 0.007. The transient was identified in deep James Webb Space Telescope (JWST)/NIRCam imaging from the JWST Advanced Deep Extragalactic Survey (JADES) program. Photometric and spectroscopic followup with NIRCam and NIRSpec, respectively, confirm the redshift and yield UV-NIR light-curve, NIR color, and spectroscopic information all consistent with a Type Ia classification. Despite its classification as a likely SN Ia, SN 2023adsy is both fairly red (E(B−V) ∼ 0.9) despite a host galaxy with low extinction, and has a high Ca II velocity (19,000 ± 2,000 km/s) compared to the general population of SNe Ia. While these characteristics are consistent with some Ca-rich SNe Ia, particularly SN 2016hnk, SN 2023adsy is intrinsically brighter than the low-z Ca-rich population. Although such an object is too red for any low-z cosmological sample, we apply a fiducial standardization approach to SN 2023adsy and find that the SN 2023adsy luminosity distance measurement is in excellent agreement (≲ 1σ) with ΛCDM. Therefore, unlike low-z Ca-rich SNe Ia, SN 2023adsy is standardizable and gives no indication that SN Ia standardized luminosities change significantly with redshift. A larger sample of distant SNe Ia is required to determine if SN Ia population characteristics at high-z truly diverge from their low-z counterparts, and to confirm that standardized luminosities nevertheless remain constant with redshift.
SDSS1335+0728: The awakening of a ∼ 10⁶ M⊙ black hole – Sérgio Sacani
Context. The early-type galaxy SDSS J133519.91+072807.4 (hereafter SDSS1335+0728), which had exhibited no prior optical variations during the preceding two decades, began showing significant nuclear variability in the Zwicky Transient Facility (ZTF) alert stream from December 2019 (as ZTF19acnskyy). This variability behaviour, coupled with the host-galaxy properties, suggests that SDSS1335+0728 hosts a ∼ 10⁶ M⊙ black hole (BH) that is currently in the process of 'turning on'.

Aims. We present a multi-wavelength photometric analysis and spectroscopic follow-up performed with the aim of better understanding the origin of the nuclear variations detected in SDSS1335+0728.

Methods. We used archival photometry (from WISE, 2MASS, SDSS, GALEX, eROSITA) and spectroscopic data (from SDSS and LAMOST) to study the state of SDSS1335+0728 prior to December 2019, and new observations from Swift, SOAR/Goodman, VLT/X-shooter, and Keck/LRIS taken after its turn-on to characterise its current state. We analysed the variability of SDSS1335+0728 in the X-ray/UV/optical/mid-infrared range, modelled its spectral energy distribution prior to and after December 2019, and studied the evolution of its UV/optical spectra.

Results. From our multi-wavelength photometric analysis, we find that: (a) since 2021, the UV flux (from Swift/UVOT observations) is four times brighter than the flux reported by GALEX in 2004; (b) since June 2022, the mid-infrared flux has risen more than two times, and the W1−W2 WISE colour has become redder; and (c) since February 2024, the source has begun showing X-ray emission. From our spectroscopic follow-up, we see that (i) the narrow emission line ratios are now consistent with a more energetic ionising continuum; (ii) broad emission lines are not detected; and (iii) the [OIII] line increased its flux ∼ 3.6 years after the first ZTF alert, which implies a relatively compact narrow-line-emitting region.

Conclusions. We conclude that the variations observed in SDSS1335+0728 could be explained either by a ∼ 10⁶ M⊙ AGN that is just turning on or by an exotic tidal disruption event (TDE). If the former is true, SDSS1335+0728 is one of the strongest cases of an AGN observed in the process of activating. If the latter were found to be the case, it would correspond to the longest and faintest TDE ever observed (or another class of still-unknown nuclear transient). Future observations of SDSS1335+0728 are crucial to further understand its behaviour.

Key words: galaxies: active – accretion, accretion discs – galaxies: individual: SDSS J133519.91+072807.4
JAMES WEBB STUDY THE MASSIVE BLACK HOLE SEEDS – Sérgio Sacani
The pathway(s) to seeding the massive black holes (MBHs) that exist at the heart of galaxies in the present and distant Universe remains an unsolved problem. Here we categorise, describe and quantitatively discuss the formation pathways of both light and heavy seeds. We emphasise that the most recent computational models suggest that, rather than a bimodal-like mass spectrum between light and heavy seeds with light at one end and heavy at the other, a continuum exists, with light seeds being more ubiquitous and heavier seeds becoming less and less abundant due to the rarer environmental conditions required for their formation. We therefore examine the different mechanisms that give rise to different seed mass spectra. We show how and why the mechanisms that produce the heaviest seeds are also among the rarest events in the Universe and are hence extremely unlikely to be the seeds for the vast majority of the MBH population. We quantify, within the limits of the current large uncertainties in the seeding processes, the expected number densities of the seed mass spectrum. We argue that light seeds must be at least 10³ to 10⁵ times more numerous than heavy seeds to explain the MBH population as a whole. Based on our current understanding of the seed population, this makes light seeds (M_seed < 10³ M⊙) a significantly more likely pathway, given that heavy seeds have an abundance relative to light seeds that is close to, and likely below, 10⁻⁴. Finally, we examine the current state-of-the-art in numerical calculations and recent observations and plot a path forward for near-future advances in both domains.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... – Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Microbial interaction
Microorganisms interact with each other and can be physically associated with other organisms in a variety of ways.
One organism can live on the surface of another organism as an ectobiont, or within another organism as an endobiont.
Microbial interactions may be positive, such as mutualism, proto-cooperation and commensalism, or negative, such as parasitism, predation and competition.
Types of microbial interaction
Positive interaction: mutualism, proto-cooperation, commensalism
Negative interaction: amensalism (antagonism), parasitism, predation, competition
I. Mutualism:
It is defined as a relationship in which each organism in the interaction benefits from the association. It is an obligatory relationship in which the mutualist and the host are metabolically dependent on each other.
The mutualistic relationship is highly specific: one member of the association cannot be replaced by another species.
Mutualism requires close physical contact between the interacting organisms.
A mutualistic relationship allows organisms to exist in habitats that could not be occupied by either species alone.
A mutualistic relationship between organisms allows them to act as a single organism.
Examples of mutualism:
i. Lichens:
Lichens are an excellent example of mutualism.
They are associations of specific fungi with certain genera of algae. In a lichen, the fungal partner is called the mycobiont and the algal partner is called the phycobiont.
II. Syntrophism:
It is an association in which the growth of one organism depends on, or is improved by, a substrate provided by another organism.
In syntrophism, both organisms in the association benefit.
Compound A → (utilized by population 1) → Compound B → (utilized by population 2) → Compound C → (utilized by both populations 1 + 2) → products
In this theoretical example of syntrophism, population 1 is able to utilize and metabolize compound A, forming compound B, but cannot metabolize beyond compound B without the cooperation of population 2. Population 2 is unable to utilize compound A, but it can metabolize compound B, forming compound C. Both populations together can then carry out metabolic reactions leading to an end product that neither population could produce alone.
Examples of syntrophism:
i. Methanogenic ecosystem in sludge digester
Methane production by methanogenic bacteria depends on interspecies hydrogen transfer from fermentative bacteria.
Anaerobic fermentative bacteria generate CO2 and H2 from carbohydrates, which are then utilized by methanogenic bacteria (Methanobacter) to produce methane.
ii. Lactobacillus arabinosus and Enterococcus faecalis:
In minimal medium, Lactobacillus arabinosus and Enterococcus faecalis are able to grow together but not alone.
This synergistic relationship occurs because E. faecalis requires folic acid, which is produced by L. arabinosus, while L. arabinosus requires phenylalanine, which is produced by E. faecalis.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbital motion of Moon and artificial satellites (Advanced Concepts Team)
Presentation at the Science Coffee of the Advanced Concepts Team of the European Space Agency on 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
2. Transparent Objects
• Invisible, but the distorted background can be seen through them.
• 3D reconstruction of transparent materials is challenging.
[Figure: a sensor views a reference background through the transparent object; the background appears distorted, so the point estimated by triangulation is wrong.]
3. Time-of-Flight (ToF) Camera
• Depth sensor based on time delay of light
• Kinect v2, Project Tango, etc.
[Figure: emitted light signal and its delayed observation along the time axis, separated by delay tΔ.]
d = c·tΔ / 2 (speed of light × time delay, halved for the round trip)
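The depth relation on this slide can be sketched in a few lines; a minimal helper (hypothetical function name, standard value for the speed of light):

```python
# Basic ToF depth relation: d = c * t_delta / 2.
# The factor 1/2 accounts for the round trip (camera -> surface -> camera).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(t_delta_s: float) -> float:
    """Depth from the measured round-trip time delay, in meters."""
    return C * t_delta_s / 2.0

# A 10 ns delay corresponds to roughly 1.5 m of depth.
print(tof_depth(10e-9))
```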
4. Time of Flight Distortion
• Light slows down in a medium, depending on its refractive index.
• The measured depth therefore becomes longer (= ToF distortion).
• We use this distortion for transparent shape recovery.
5. Contributions
1. ToF distortion can be used for transparent shape recovery.
2. Easy multi-path mitigation using retroreflective sheet.
6. Problem Setting
Input
• Known refractive index
• 1 distorted ToF depth
• 2 references (3D points)
[Figure: ray diagram of the problem setting (labels r, f, b, t, s, v).]
Output
• 3D points of both surfaces
• Surface normals
7. Parameters and Candidate Shapes
• Candidate shapes
• Front surface is on camera ray at distance 𝑡
• Back surface is on reference ray at distance 𝑠
• Many candidates (two degrees of freedom)
[Figure: ToF camera, glass object, and a display or known pattern, with distances t and s.]
8. Candidate Shape using ToF Distortion
• Candidate shapes
• Front surface is on camera ray at distance 𝑡
• Back surface is on reference ray at distance 𝑠
• such that t + η·(path length inside the glass) + s = l_ToF, the distorted ToF measurement
• One degree of freedom
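A minimal sketch of this one-parameter search, assuming the distortion constraint takes the form t + η·‖f − b‖ + s = l_ToF with front point f on the camera ray and back point b on the reference ray (function names and the bisection solver are illustrative, not the paper's implementation):

```python
import numpy as np

def candidate_shapes(v_cam, p_ref, v_ref, eta, l_tof, ts):
    """For each front-surface distance t along the (unit) camera ray v_cam,
    solve the assumed constraint  t + eta*||f - b|| + s = l_tof  for the
    back-surface distance s along the reference ray (b = p_ref + s*v_ref)
    by bisection on s. Returns a list of (t, s) candidates."""
    candidates = []
    for t in ts:
        f = t * v_cam  # candidate front-surface point

        def residual(s):
            b = p_ref + s * v_ref  # candidate back-surface point
            return t + eta * np.linalg.norm(f - b) + s - l_tof

        lo, hi = 0.0, l_tof  # s cannot exceed the total measured path
        if residual(lo) > 0 or residual(hi) < 0:
            continue  # no sign change: no valid s for this t
        for _ in range(60):  # bisection (assumes one root in [lo, hi])
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0:
                hi = mid
            else:
                lo = mid
        candidates.append((t, 0.5 * (lo + hi)))
    return candidates
```

With the geometry fixed, sweeping t traces out the one-parameter family of candidate shapes described on the slide.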
9. Surface Normal Consistency
• The surface normal is unique.
• The refractive normal and the geometric normal should coincide.
n = sin θ₁ / sin θ₂ (Snell's law), where θ₁ and θ₂ are the angles of the camera ray and the refracted ray measured from the refractive normal; the geometric normal is obtained from the candidate surface.
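The refraction relation behind this consistency test can be sketched with the vector form of Snell's law (a standard identity; the helper below is illustrative, not the paper's code):

```python
import numpy as np

def refract(d, n_hat, eta1, eta2):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n_hat (pointing toward the incoming ray), going from
    a medium with index eta1 into a medium with index eta2.
    Returns None on total internal reflection."""
    r = eta1 / eta2
    cos_i = -np.dot(n_hat, d)           # cosine of the incident angle
    sin_t2 = r * r * (1.0 - cos_i**2)   # sin^2 of the refracted angle
    if sin_t2 > 1.0:
        return None                     # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    return r * d + (r * cos_i - cos_t) * n_hat

# Sanity check of sin(theta1) = (eta2/eta1) * sin(theta2):
d = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)   # 45 degrees incidence
t_dir = refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
print(t_dir)  # refracted unit direction inside the glass
```

In the paper's setting one would instead recover the normal that makes the camera ray refract onto the in-glass ray (the refractive normal) and compare it with the geometric normal of the candidate surface.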
10. Real world experiment setup
• Modified Kinect v2 and an LCD panel
[Setup: Kinect v2 (IR lens changed), LCD panel, linear stage, and target object.]
11. Results and evaluations
• Target materials and estimated results
• Evaluation
• Fit estimated points to ground-truth CAD model by ICP
[Target objects: cube, wedge prism, Schmidt prism.]

Object  | Mean error | Std. dev.
Cube    | 0.188 mm   | 0.458 mm
Wedge   | 0.226 mm   | 1.137 mm
Schmidt | 0.381 mm   | 1.398 mm
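The evaluation fits the estimated points to the CAD model by ICP; a minimal sketch of the core alignment step inside ICP (Kabsch/SVD least-squares rigid fit), assuming known correspondences rather than the full iterative nearest-neighbor loop:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (Nx3) onto
    corresponding points Q (Nx3) via the Kabsch/SVD solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def fit_errors(P, Q, R, t):
    """Mean and standard deviation of residual distances after alignment."""
    resid = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
    return resid.mean(), resid.std()
```

Full ICP alternates this step with nearest-neighbor correspondence search against the model surface; the mean and standard deviation of the residuals correspond to the error statistics reported in the table.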
12. Summary
Input
• 1 distorted ToF depth
• 2 references (3D points)
• Known refractive index
Output
• 3D points of both surfaces
• Surface normals
13. Time-of-Flight as alternative imager
• Light-in-flight [Gkioulekas+2015]
• Parameter tunable ToF camera (Texas instruments)
14. Imaging, Analyzing using ToF Camera
• Recently Emerging Topic
[Heide+2013], [Kadambi+2013], [Naik+2013], [Godbaz+2013], [Freedman+2014],
[Lin+2014], [O’Toole+2014], [Gupta+2015], [Heide+2015], [Xiao+2015],
[Kadambi+2015], [Peters+2015], [Tadano+2015], and more!
• CVPR 2016
• 1 oral, 2 posters (including ours)
[Kadambi et al.], [Su et al.]
• SIGGRAPH 2016
• 2 technical papers.
[Shrestha et al.], [Kadambi et al.]
We will continue working on ToF cameras.