This PowerPoint was created by Kate Shervais while working at UNAVCO to introduce the concept of Structure from Motion and show the different applications of the methodology.
The document discusses structure-from-motion, a photogrammetry technique that uses camera motion and overlapping photos to build 3D models of scenes. It outlines key parameters like camera specs, photo characteristics, and survey considerations. It then details best practices for field surveys using this method, including equipment selection, target placement, and GPS use to georeference models with sub-centimeter precision.
Lecture 4: Data capturing techniques - satellite and aerial images (FenTaHun6)
Satellite and aerial images can be used to collect cadastral data. Satellite images with resolutions finer than 1 meter can be obtained for Ethiopia. Reference points are needed to georeference the images, and software such as ArcGIS can be used to digitize parcel boundaries. While satellite images provide an overview, boundaries drawn from them require field confirmation. Aerial photos provide higher resolution and allow 3D modeling, but are more expensive to capture. Both image types require ground truthing, since they are subject to errors, changes over time, cloud cover, and obscured boundaries.
This presentation discusses stereoscopy and stereoscopes. It defines stereoscopy as using binocular vision to achieve 3D effects by viewing an object from two camera positions. Stereo pairs of photographs allow stereoscopic viewing in the overlapping portions. Various types of stereoscopes are described, including lens, mirror, scanning, and zoom stereoscopes. Lens stereoscopes are simplest and least expensive. Ground control points are also discussed as features with known coordinates that establish the relationship between an image and the ground, allowing georeferencing of aerial photographs.
1. Geometric distortions are inherent in remote sensing images and can occur due to several factors including the sensor optics and platform stability.
2. Sources of errors include the perspective of sensor optics, motion and orientation of the scanning system, stability of the platform, platform altitude and attitude, terrain relief, and Earth's curvature and rotation.
3. In mountainous regions, a satellite-based scanning system is preferable to an aircraft-based system because, at satellite altitudes, geometric distortions from relief displacement and shadowing effects are amplified far less.
This presentation discusses scales used in photographs. It explains that scale is the ratio of an object's size in a photo to its actual size on the ground, and that scale can be expressed as a unit equivalent, a representative fraction, or a ratio. Knowing the camera focal length and the aircraft altitude allows one to determine the scale of a vertical photograph. The presentation was given by Mr. Amol V. Ghogare of SRES, SCOE, Kopargaon.
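The focal-length/altitude relation summarized above can be illustrated with a short sketch; the focal length, flying height, and terrain elevation below are illustrative values, not figures from the presentation.

```python
def photo_scale(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Scale of a vertical photograph: S = f / (H - h), where f is the
    camera focal length, H the flying height above the datum, and h the
    terrain elevation above the datum. Returned as a dimensionless ratio."""
    return focal_length_m / (flying_height_m - terrain_elev_m)

# Illustrative values: 152 mm lens, 3,800 m flying height, terrain at 760 m.
s = photo_scale(0.152, 3800.0, 760.0)
print(f"scale = 1:{round(1 / s)}")  # representative fraction
```

With these numbers the representative fraction works out to 1:20,000; a larger denominator means a smaller photo scale.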
Remote sensing involves observing the Earth from a distance using aerial photographs or satellite imagery. Aerial photographs provide detailed views from low altitudes but limited coverage, while satellite images offer global coverage but less detail. Remote sensing data can be used to map features, monitor changes over time, and classify land cover by analyzing spectral signatures from multi-spectral imagery. Digital tools allow remote sensing data to be manipulated for 3D modeling and analysis in geographic information systems.
This document discusses stereoscopic vision and its use in aerial photo interpretation. Stereoscopic vision involves using binocular vision to view overlapping photos from two camera positions to perceive 3D depth. Various stereoscopes can be used, like lens stereoscopes suitable for field use. Key measurements for determining object heights from stereo pairs include the average photo base length and differential parallax. Precise stereoplotters and software can digitally recreate stereo models for mapping. Orthophotos rectify photos to show objects in true planimetric positions.
This document discusses tilted aerial photographs. It begins by defining tilted photographs as those where the camera axis is slightly angled from vertical when capturing the image, usually by less than 3 degrees. It then introduces exterior orientation parameters (EOPs) that define the spatial position and angular orientation of each photograph. Two systems for defining angular orientation are described: tilt-swing-azimuth and omega-phi-kappa. Perspective projection and how it relates 3D objects to their 2D image is also overviewed. The remainder of the document discusses how to calculate scale on tilted photographs based on factors like tilt, swing, height, and elevation.
This document provides an overview of the history and development of remote sensing from satellites. It discusses early satellite and manned spacecraft imaging from the 1960s with limited coverage. Meteorological satellites in the 1960s showed potential for remote sensing. Landsat, beginning in 1972, was the first satellite system designed for land remote sensing with global coverage and repeated visits. It included improvements over generations in spatial, spectral, and radiometric resolution. Other satellite systems discussed include SPOT, IKONOS, and instruments onboard the Terra satellite.
Centre of Geographic Sciences Remote Sensing Field Camp 2015 (COGS Presentations)
Students from the 2015 Centre of Geographic Sciences Remote Sensing Field Camp will collect aerial photographs of the COGS grounds using GoPro cameras to create an orthomosaic with 50 cm resolution by May 22nd. They will also collect LiDAR scans of the Sinclair Inn in Annapolis Royal using a Faro scanner to create a georeferenced point cloud and 3D model with 10 cm point density accuracy by the same date. The document outlines the preparation, data collection, post-processing, and validation steps for both the aerial photography and terrestrial laser scanning projects.
This document discusses methods for calculating the heights of objects like trees and buildings from aerial photos. It describes the relief/radial displacement method, where the displacement between the top and bottom of an object seen in a single aerial photo is used along with the distance from the principal point to determine height. It explains that relief displacement occurs due to perspective projection and varies with object elevation relative to the datum. An example problem demonstrates using measured displacement and distance to calculate an object's height given the flying height.
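The relief displacement method described above reduces to a one-line formula; here is a minimal sketch, with illustrative measurements rather than the document's own example values.

```python
def height_from_relief_displacement(d, r, flying_height):
    """Object height from relief displacement on a single vertical photo:
    h = d * H / r, where d is the image displacement between the object's
    top and base, r the radial distance from the principal point to the
    object's top, and H the flying height above the object's base.
    d and r only need to share units; the result takes H's units."""
    return d * flying_height / r

# Illustrative values: 2.0 mm displacement, 80.0 mm radial distance,
# 1,200 m flying height above the base of the object.
print(height_from_relief_displacement(2.0, 80.0, 1200.0))  # 30.0 (meters)
```

Note that objects at the principal point (r near zero) show no relief displacement, so this method only works for objects imaged away from the photo center.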
This week at Oceanology Americas we presented a paper on SLAM and Optimal Sensor Fusion and outlined how we have implemented this within our real-time navigation and 3D reconstruction tool, 3D Recon.
We have just assembled two 4,000m rated 3D Recon systems. One of these systems is currently undergoing pressure cycle testing while the other is undergoing extensive burn-in testing to ensure long term viability.
We expect to have test tank data later in March, so if you'd like to receive some sample data sets please let us know at sales@zupt.com.
GPS stands for Global Positioning System. It is a satellite-based navigation system consisting of three segments: the space segment with 24 satellites, the control segment that monitors and controls the satellites, and the user segment where receivers calculate their position. GPS was developed by the US Department of Defense over 20 years and became fully operational in 1995, allowing civilian use. It is now used widely for navigation in vehicles, outdoor activities, and location-based services on phones.
Correction of Spatial Errors in SMOS Brightness Temperature Images (grssieee)
The document discusses the correction of spatial errors in SMOS brightness temperature images. Spatial errors are modeled as having different "gains" or "offsets" at each grid point. A mask is estimated from analyzing measurements over the ocean to minimize other variations. The mask is applied to measurements as a direction-dependent factor to correct for systematic spatial distortions and reduce residuals, improving salinity retrievals. Results show the mask reduces errors in Level 1B and 1C data.
This document discusses stereoscopic parallax and its use in photogrammetry. Stereoscopic parallax is the apparent shift in position of an object's image between two overlapping photographs taken from different positions. The amount of parallax is directly related to the object's elevation - higher objects have greater parallax. Parallax can be measured directly on the photographs or using a stereoscope. The parallax measurements can then be used in trigonometric equations to calculate the ground coordinates and elevations of points visible in the stereo pair. These parallax methods provide a fundamental way to determine elevations from aerial photographs.
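The parallax-to-height relation mentioned above can be sketched as follows; the flying height, photo base, and parallax figures are illustrative assumptions, not measurements from the document.

```python
def height_from_parallax(flying_height, photo_base, dp):
    """Height of a point above the reference datum from differential
    parallax: h = H * dp / (b + dp), where H is the flying height above
    the reference, b the average photo base (the absolute parallax of a
    point at the reference elevation), and dp the differential parallax.
    b and dp share units (e.g. mm); the result takes H's units."""
    return flying_height * dp / (photo_base + dp)

# Illustrative values: H = 1,500 m, photo base 88.0 mm, dp = 2.2 mm.
print(round(height_from_parallax(1500.0, 88.0, 2.2), 1))
```

The formula makes the text's point directly: dp appears in the numerator, so higher objects (greater differential parallax) yield greater computed heights.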
The GPS consists of 3 segments - the space segment of 24 satellites orbiting Earth, the control segment of ground stations monitoring the satellites, and the user segment of GPS receivers. GPS satellites continuously transmit radio signals that allow receivers to calculate their location on Earth by trilateration, using signals from at least four satellites for a full 3D fix. Originally intended for military use, GPS is now widely used for civilian navigation in vehicles, hiking, boating, and more.
1) The document analyzes ground deformation from the 2010 El Mayor, Mexico earthquake using pre- and post-event satellite images.
2) Through sub-pixel correlation analysis, it finds displacements of approximately 2 meters east-west and 1.5 meters north-south along the fault line.
3) The results are comparable to field survey measurements and indicate a maximum right-lateral strike slip along the fault from the earthquake.
Global Positioning System (GPS) is a satellite-based navigation system consisting of a network of 24 satellites placed into orbit by the U.S. Department of Defense. GPS allows land, sea, and airborne users to determine their exact location, velocity, and time 24 hours a day, in all weather conditions, anywhere in the world. GPS uses trilateration to calculate a user's position by comparing signal travel times from at least four satellites (three would suffice in principle if the receiver's clock were perfectly synchronized), and it provides accuracy to within a few meters. GPS has many applications including navigation, construction, mining, military uses, and everyday uses on phones and in cars.
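The trilateration idea can be sketched in two dimensions. This is a simplified illustration with made-up coordinates, not an actual GPS algorithm: real receivers solve a 3D problem with pseudoranges from at least four satellites to also resolve the receiver clock offset.

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given measured distances r_i to three known
    points p_i. Subtracting pairs of circle equations cancels the
    quadratic terms, leaving a 2x2 linear system in x and y."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a11 * a22 - a12 * a21  # zero when the three points are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A receiver at (1, 2) with exact ranges to three known "satellite" positions.
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist((1.0, 2.0), s) for s in sats]
print(trilaterate_2d(sats[0], ranges[0], sats[1], ranges[1], sats[2], ranges[2]))
```

With exact ranges the recovered position matches the true one; with noisy pseudoranges, real receivers instead solve an overdetermined least-squares problem.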
GPS is a satellite-based navigation system consisting of 24 satellites used worldwide to determine precise locations. It was developed by the U.S. Department of Defense, which launched the first GPS satellite in 1978 for military purposes. GPS works by trilaterating the distance and timing signals from multiple satellites to determine a receiver's position. It has many uses including navigation in vehicles, boats, and planes, as well as applications on smartphones.
Global Positioning Systems (GPS) can be used for navigation, surveying, and precise timekeeping. GPS works by using satellites that transmit encoded signals to receivers, which then use triangulation to determine their location. However, GPS has limitations - it is a one-way system so additional equipment is needed for tracking, signals cannot penetrate buildings or underground, and it only provides location data without navigation directions. The accuracy of GPS locations can be affected by factors like receiver quality, multipath issues, and satellite health.
This slide deck was presented as course work for Electronic Communication Sessional (EEE 356) during Level-3 Term-1, on 13th September 2013, at the DEEE Simulation Lab of Chittagong University of Engineering & Technology (CUET).
The course was supervised by Dr. Muhammad Ahsan Ullah and Mr. Rashed Md. Murad Hasan.
The team members for this presentation were Md. Hasanul Azim, Sakib Reza, Md. Adibul Islam, Rajib Ghose, Irfan Uddin, and Md. Amimul Ehsan (me).
For a better understanding of the slides, it is recommended to watch the videos in the following playlist:
https://www.youtube.com/playlist?list=PL6XcKesbXKlTx_HNIZbA4BEOlHhaE1ZwR
The document summarizes the Global Positioning System (GPS). It describes GPS as a space-based satellite navigation system that provides location and time information anywhere on Earth. It has three main parts: the space segment consisting of 31 active satellites, the control segment of Earth stations that send and receive data via satellite, and the user segment of GPS receivers that detect and process satellite signals to provide location outputs. GPS works by receivers calculating their distance from multiple satellites to determine their precise position.
Land surveying is a method for finding accurate distances, angles, and points on the Earth's surface. These methods are used for making maps and measuring areas. To find accurate points, land surveyors use tools such as GPS, total stations, digital laser levels, and 3D scanners. GPS (Global Positioning System) is the main tool in land surveying. If you are looking for one-time use, or are just starting your company and want to invest in such tools, you can purchase or hire branded used GPS equipment in the UAE from Falcon Geomatics.
The document discusses different types of map projections used to represent the spherical Earth on a flat surface. It begins by explaining that map projections transform 3D global coordinates to 2D planar coordinates, which necessarily distorts properties like distances, angles, or areas. It then covers key projection categories (cylindrical, conic, azimuthal), their characteristic properties and examples. Specific projections discussed include Mercator, UTM, and polar stereographic. The document emphasizes that the appropriate projection depends on the map's intended use and which distortions are least important. It encourages map users to understand basic projection concepts.
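The distortion trade-off described above is easy to see in the spherical Mercator formulas; this is a minimal sketch using a mean spherical Earth radius, not any specific implementation from the document.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean radius; a spherical approximation

def mercator_forward(lat_deg, lon_deg):
    """Spherical Mercator forward projection:
    x = R * lon,  y = R * ln(tan(pi/4 + lat/2)), angles in radians.
    Conformal (angles and local shapes are preserved), but areas inflate
    rapidly toward the poles, and the poles themselves map to infinity."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = EARTH_RADIUS_M * lon
    y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# y grows nonlinearly with latitude: the area distortion the text describes.
for lat in (0.0, 30.0, 60.0):
    print(lat, round(mercator_forward(lat, 0.0)[1]))
```

Equal steps in latitude produce ever-larger steps in projected y, which is why Greenland looks comparable to Africa on a Mercator world map.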
Structure from motion is a range imaging technique: it refers to the process of estimating three-dimensional structure from two-dimensional image sequences, which may be coupled with local motion signals.
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/videantis/embedded-vision-training/videos/pages/may-2015-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Marco Jacobs, Vice President of Marketing at videantis, presents the "3D from 2D: Theory, Implementation, and Applications of Structure from Motion" tutorial at the May 2015 Embedded Vision Summit.
Structure from motion uses a unique combination of algorithms that extract depth information using a single 2D moving camera. Using a calibrated camera, feature detection, and feature tracking, the algorithms calculate an accurate camera pose and a 3D point cloud representing the surrounding scene.
This 3D scene information can be used in many ways, such as for automated car parking, augmented reality, and positioning. Marco introduces the theory behind structure from motion, provides some representative applications that use it, and explores an efficient implementation for embedded applications.
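One core step of structure from motion, recovering a 3D point from rays cast through matched features in two camera poses, can be sketched with midpoint triangulation. The camera centers and ray directions below are made-up values, and this is a simplified stand-in for the full pipeline the talk describes, which also handles calibration, feature tracking, and pose estimation.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-approach (midpoint) triangulation of one feature seen
    from two camera poses: c_i are camera centers, d_i ray directions
    toward the feature. Finds parameters s, t minimizing
    |c1 + s*d1 - (c2 + t*d2)| and returns the segment midpoint."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # near zero when rays are parallel (tiny baseline)
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# A feature at (2, 3, 5) observed from camera centers (0,0,0) and (4,0,0).
p = triangulate_midpoint((0.0, 0.0, 0.0), (2.0, 3.0, 5.0),
                         (4.0, 0.0, 0.0), (-2.0, 3.0, 5.0))
print(p)  # -> (2.0, 3.0, 5.0)
```

With noisy feature tracks the two rays no longer intersect, which is exactly why the midpoint (or a reprojection-error-minimizing variant) is used in practice.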
A new remote sensing methodology for detailed international mapping in the V4... (mirijovsky)
The document proposes a new project to use unmanned aerial vehicles (UAVs) for detailed mapping and monitoring in countries of the Visegrad Group (V4) region. The main goals are to incorporate UAV technology for precise data collection and analysis. Key target areas include lowland regions and floodplains near rivers. The project would study areas after floods, monitor landslides and slope stability, assess biomass, support border demarcation, and document archaeological sites. Two UAV models, Pixy and Hexacopter XL, are presented with specifications for spatial data collection between 1 cm and 30 cm resolution depending on altitude and area coverage. Photogrammetric processing methods like structure from motion and stereophotogrammetry are discussed.
This document provides an introduction to analyzing fault scarps to determine information about past earthquakes. It discusses different types of scarps, how scarp morphology evolves over time, and provides examples of scarps from past earthquakes. It also outlines methods for extracting scarp profiles, relating rupture length to earthquake magnitude, calculating recurrence intervals using erosion rates, and modeling hillslope diffusion of scarps over time.
This document discusses the application of unmanned aerial system (UAS) photogrammetry for assessing flood-driven changes in a montane stream. Key points:
1) UAS photogrammetry was used to acquire high-resolution spatial data before and after a 2013 flood to analyze geomorphological changes, including bank erosion volumes.
2) Optical granulometry using UAS images automatically identified and classified coarse sediments to derive grain size curves characterizing fresh and old fluvial deposits.
3) Results from a case study on the Roklanský and Javoří brooks show up to 2.5m of lateral bank erosion after the 2013 flood, with over 3,000
This document provides an introduction to sequence stratigraphy, including key concepts such as accommodation space, cyclical deposition, parasequences, and Walther's Law. Students are instructed to analyze a sedimentary section by identifying parasequences and their components, measuring bed thicknesses, and interpreting the sedimentation history. The images are provided by Steve Holland of University of Georgia and require separate permission to use outside this learning module.
Photogrammetry is the science of obtaining reliable information about physical objects through analyzing photographic images. It involves recording, measuring, and interpreting photographs and electromagnetic radiation. There are two main types: aerial photogrammetry which uses photographs taken from aircraft, and terrestrial photogrammetry which uses ground-based photos. Photogrammetry is used to produce topographic maps and digital terrain models for purposes like architecture, engineering, archaeology and more.
Photogrammetry Survey - Surveying II, Civil Engineering Students (Pramesh Hada)
Photogrammetry is the science of making measurements from photographs. It involves planning and taking photographs, processing the photographs, and measuring the photographs to produce results like maps and 3D models.
There are two main types of photographs used in photogrammetry - terrestrial and aerial photographs. Terrestrial photographs are taken from ground-based camera stations using a phototheodolite. Aerial photographs are taken from an airborne camera mounted on an aircraft and can be vertical or oblique.
Key applications of photogrammetry include topographic mapping, engineering surveys, geological mapping, and urban and regional planning due to its ability to cover large areas quickly and accurately.
Structure and Motion - 3D Reconstruction of Cameras and Structure (Giovanni Murru)
The document discusses structure from motion reconstruction from multiple images. It provides an overview of the steps to:
1. Estimate camera motion and 3D structure from a sequence of images using a stratified approach, starting with projective reconstruction and refining to affine and metric reconstruction.
2. Reconstruct structure and motion for two datasets - a public dataset and a personal dataset acquired by the student.
3. The key steps are feature detection, matching, estimating the fundamental matrix, triangulating 3D points, identifying the plane at infinity to upgrade from projective to affine reconstruction, and further refinement to metric reconstruction if possible.
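The triangulation step listed above can be illustrated with linear (DLT) triangulation, a standard textbook building block rather than this document's own code. The camera intrinsics and point below are made-up numbers for a synthetic check, assuming only NumPy:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover one 3D point from its projections
    in two views with known 3x4 camera matrices P1 and P2."""
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # the null space of A holds the solution
    X = Vt[-1]
    return X[:3] / X[3]          # back to inhomogeneous coordinates

# Synthetic check: project a known point through two made-up cameras.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated camera
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
print(X_est)  # recovers [0.5, 0.2, 4.0] up to numerical precision
```

With noise-free correspondences the DLT system has an exact null vector, which is why the synthetic check recovers the point; real pipelines follow this with bundle adjustment.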
This document provides an overview of photogrammetry, including a brief history of aerial photography, definitions of key terms, and descriptions of different types of photogrammetry and imaging. It discusses the general photogrammetric process and products that can be created. Specific topics covered include the development of aerial photography from the 1850s onwards, definitions of photogrammetry, close range, terrestrial, aerial, and space photogrammetry, types of aerial images, photogrammetric mapping techniques, and historical photogrammetric plotting instruments.
The document provides an overview of photogrammetry, which is the science and technology of obtaining reliable spatial information about physical objects and the environment through analyzing photographs. It discusses the different types of photogrammetry including aerial/spaceborne photogrammetry and close-range photogrammetry. It also summarizes the key techniques, applications, and products of photogrammetry such as digital terrain models, orthophotos, and 3D models.
Photogrammetry is the science of making measurements from photographs, especially to determine the exact positions of surface points. It involves planning and taking photographs, processing the photographs, and measuring the photographs to produce results like maps. Photogrammetry can be used for topographic surveys, engineering surveys, geological mapping, and urban and regional planning applications. There are two main types of photographs used in photogrammetry: terrestrial photographs taken from fixed positions on the ground using a phototheodolite, and aerial photographs taken from an aerial camera mounted on an aircraft.
Introduction to aerial photography and photogrammetry.ppt (srinivas2036)
Aerial photography and photogrammetry are techniques used in remote sensing. Aerial photography involves taking photographs from aircraft and has been used since the 1850s. Photogrammetry uses photographs to measure and obtain spatial information about the objects and terrain photographed. It allows for the creation of topographic maps, cadastral maps, and large-scale construction plans more quickly and economically than traditional ground-based surveying. While aerial photography and photogrammetry provide advantages over field surveys, some on-site control and verification is still needed.
Photogrammetry is the science of obtaining reliable measurements from photographs. There are three main techniques: aerial, using vertically downward photos from planes or satellites; terrestrial, using horizontal photos on the ground; and industrial, adapting terrestrial techniques to small areas. Aerial photos are used for topographic mapping, cadastral plans, land use maps, and hydrographic charts. Stereo plotters allow precise 3D measurement and analysis from stereo photo pairs. Photogrammetry has many applications beyond traditional surveying, including traffic accident reconstruction, medical imaging, and analysis of surface movement.
This document summarizes the principles of photogrammetry. It discusses the basic elements of photogrammetry including obtaining quantitative information from aerial photographs. It covers topics such as photographic scale, horizontal ground coordinates, relief displacement, exterior orientation of tilted photographs, stereoscopic vision, and the geometry of aerial stereophotographs. The purpose is to provide background information and references to support standards and guidelines for photogrammetric mapping.
This document provides an overview of surveying concepts and techniques. It discusses:
1) The definitions, classifications, instruments, and methods used in surveying like chain surveying, compass surveying, plane table surveying, and total station surveying.
2) The objectives of surveying which include preparing maps, plans and transferring details to mark locations on the ground for engineering projects.
3) The primary divisions of surveying into plane surveying, which ignores the curvature of the earth, and geodetic surveying, which accounts for curvature over large areas.
4) Fundamental surveying principles like working from the whole to parts, and locating new points using two measurements from fixed references.
The document discusses different types of search engines. It describes search engines as programs that use keywords to search websites and return relevant results. It provides examples of popular search engines like Google, Yahoo, and Ask.com. It also explains different types of search engines such as crawler-based, directory-based, specialty, hybrid, and meta search engines. Finally, it discusses how to effectively use search engines through techniques like being specific, using symbols like + and -, and using Boolean searches.
This document presents a summary of a research paper on shape from focus. Shape from focus is a technique that uses differences in focus levels across a series of images to obtain depth information and reconstruct the 3D shape of an object. The paper develops a sum-modified Laplacian (SML) operator to provide local measures of image focus quality. The SML operator is applied to images captured at different focus levels to determine focus measures. A depth estimation algorithm then interpolates the focus measures to obtain accurate depth estimates for each point. Results show the SML operator provides robust focus measures and the overall shape from focus approach can effectively reconstruct shapes, making it suitable for challenging visual inspection problems.
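The modified-Laplacian idea behind the SML operator can be sketched in a few lines. This is a generic reimplementation from the operator's published definition, not the paper's code, and the `step` and `window` parameters are illustrative defaults:

```python
import numpy as np

def sum_modified_laplacian(img, step=1, window=1):
    """Focus measure in the spirit of the sum-modified Laplacian (SML).

    The modified Laplacian takes absolute values of the x and y second
    derivatives so opposite signs cannot cancel; SML then sums this
    measure over a small window around each pixel."""
    img = np.asarray(img, dtype=float)
    s = step
    ml = np.zeros_like(img)
    # ML = |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|
    ml[s:-s, s:-s] = (
        np.abs(2 * img[s:-s, s:-s] - img[:-2 * s, s:-s] - img[2 * s:, s:-s])
        + np.abs(2 * img[s:-s, s:-s] - img[s:-s, :-2 * s] - img[s:-s, 2 * s:])
    )
    # Sum ML over a (2*window+1)^2 neighbourhood of each interior pixel.
    w = window
    h, wd = ml.shape
    out = np.zeros_like(ml)
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            out[w:h - w, w:wd - w] += ml[w + dy:h - w + dy, w + dx:wd - w + dx]
    return out

# A perfectly flat image has no focus signal anywhere.
print(sum_modified_laplacian(np.ones((8, 8))).max())  # 0.0
```

In shape from focus, this measure is computed per pixel across the image stack and the peak over focus settings (often with interpolation, as the paper describes) gives the depth estimate.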
Fundamentals of Remote Sensing - A training module (Nishant Sinha)
This document provides an overview of fundamentals of remote sensing. It begins by defining remote sensing as acquiring information about an object without physical contact. It then discusses various aspects of remote sensing including the basic components of a remote sensing system, examples of early and modern remote sensing applications, different sensor types and resolutions. The document also covers topics such as raster data models, file formats for raster data, imagery types, preprocessing techniques including radiometric and geometric corrections, image enhancement methods, image classification approaches, and principles of image interpretation.
Urban 3D Semantic Modelling Using Stereo Vision, ICRA 2013 (Sunando Sengupta)
1) Given a sequence of stereo images, the pipeline generates a dense 3D semantic model of the urban environment.
2) Depth maps are generated from stereo images and fused into a volumetric representation using camera poses from feature tracking.
3) Semantic segmentation of street view images is done using a CRF model, and labels are projected onto the 3D model faces to generate the semantic model.
4) The semantic model is evaluated by projecting it back to the input images and calculating metrics like recall and intersection over union. Future work includes real-time implementation and combining image and geometric context.
Photogrammetry is the science of obtaining spatial measurements from photographs. This document discusses key concepts in photogrammetry including:
- The differences between maps and aerial photographs in terms of projections, scales, and representations.
- Basic photogrammetry principles including exterior orientation to relate image coordinates to real-world coordinates using ground control points.
- Interior orientation to model the camera geometry and establish relationships between pixel coordinates and image coordinates.
- Calculating scanning resolution for digitizing aerial photographs to achieve a desired ground resolution for orthophotos.
- Photogrammetric triangulation to compute camera station positions and orientations using measured image tie points and ground control points.
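The scanning-resolution bullet above is simple arithmetic: the ground footprint of one scanned pixel is the scanner's pixel size multiplied by the photo scale number. A minimal sketch with made-up numbers (a 1:10,000 photo and a 0.25 m target ground sample distance):

```python
def scan_dpi_for_gsd(scale_number, target_gsd_m):
    """Scanner resolution (dots per inch) needed so one scanned pixel
    covers target_gsd_m metres on the ground, for a 1:scale_number photo."""
    pixel_size_m = target_gsd_m / scale_number  # pixel size on the film
    return 0.0254 / pixel_size_m                # metres per inch / pixel size

# Example: a 1:10,000 photo scanned for a 0.25 m orthophoto GSD.
print(round(scan_dpi_for_gsd(10_000, 0.25)))  # 1016 dpi
```

The same relation runs the other way: a given scanner resolution fixes the finest orthophoto GSD achievable from a photo at a known scale.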
10-Image rectification and restoration.ppt (AJAYMALIK97)
This document discusses digital image processing. It defines digital image processing as the computer-based manipulation and interpretation of digital images. It outlines seven broad types of computer-assisted operations, including image rectification and restoration, image enhancement, image classification, data merging and GIS integration, hyperspectral image analysis, biophysical modeling, and image transmission and compression. It provides details on image rectification and restoration, which involves preprocessing to correct distorted or degraded image data through techniques like geometric distortions, radiometric calibration, and noise elimination.
Satellite image processing is a technique for enhancing raw images received from cameras or sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life in various applications. One such process is creating thematic maps showing the spatial distribution of particular information; these maps are structured by spectral bands, which have constant density and whose densities add where they overlap. Image analysis is performed on multi-scale images to capture comprehensive information about a system for different applications. Examples of themes are soil, vegetation, water depth, and air. Supervising such critical events requires a huge volume of surveillance data and extremely powerful real-time processing infrastructure.
Remote sensing involves acquiring data about objects through measurements made at a distance without direct contact. It uses sensors on platforms like satellites and aircraft to measure electromagnetic radiation reflected or emitted from the Earth's surface. There are various sensor types that measure different portions of the electromagnetic spectrum. Image processing involves enhancing images and extracting information through techniques like pre-processing, classification, and change detection. Pre-processing corrects errors and artifacts in raw images through steps like radiometric correction, geometric correction, and atmospheric correction. Classification involves categorizing pixels into land cover classes using methods like supervised classification, which relies on user-defined training data, and unsupervised classification, which groups pixels automatically.
This document describes a method for exponential contrast restoration of images captured during fog conditions to improve visibility for driving assistance systems. It begins with an introduction to how fog degrades image quality and decreases visibility distance. It then describes Koschmieder's law which models luminance attenuation through fog. The proposed method estimates the atmospheric veil through exponential modeling and uses it to restore contrast. Results show the restored images have higher clarity and more visible edges than other methods. The technique allows real-time enhancement of color and grayscale images captured in homogeneous or heterogeneous fog.
Depth of Field Image Segmentation Using Saliency Map and Energy Mapping Techn... (ijsrd.com)
Depth of field plays a vital role in image processing; it is the space between the nearest and farthest objects in a scene. The objective of this work is to segment the relevant object from an image using low depth of field. Unsupervised segmentation is used to find the low-depth-of-field region: a saliency map and curve-evolution method are created and initialized for the image, and an energy map is employed to produce the desired result. A Lipschitz function is used to generate the mathematical representation, and various iteration methods show the graphical representation of an image. The segmented results demonstrate object detection in an image.
Aerial photography involves taking photographs of the ground from an elevated perspective using cameras mounted on aircraft or drones. The key aspects of aerial photography include:
- Photos have geometric distortions that can be corrected through photogrammetry to allow for accurate measurements, mapping, and 3D modeling.
- Factors like camera tilt, flight height, and relief displacement of objects must be accounted for.
- Aerial photos have specific scales depending on the flight altitude, and these scales are larger than typical maps, showing greater detail of a smaller area.
- Aerial photos find diverse uses in fields like geology, agriculture, land use planning, and environmental monitoring by providing overhead perspectives not available from ground level.
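The scale bullet above follows the standard vertical-photo relation S = f / (H - h). A short sketch with hypothetical numbers (not from this document):

```python
def photo_scale(focal_length_m, flying_height_m, terrain_elev_m=0.0):
    """Scale of a vertical aerial photo: S = f / (H - h),
    where H is flying height above datum and h the terrain elevation."""
    return focal_length_m / (flying_height_m - terrain_elev_m)

# A 152 mm camera flown at 1,520 m above flat terrain gives 1:10,000.
s = photo_scale(0.152, 1520.0)
print(f"1:{round(1 / s)}")  # 1:10000
```

Note that scale varies across a single photo wherever the terrain elevation h varies, which is one reason raw aerial photos need rectification before measurement.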
The document discusses satellite image processing. It begins by defining satellite image processing as a technique to enhance raw images received from cameras or sensors on satellites, space probes, and aircraft. It then discusses satellite imagery in more detail, including spatial, temporal, spectral, radiometric, and view angle resolution. The document also covers remote sensing, methodology for change detection in remote sensing images, image segmentation, satellite image operators, Google Maps, and applications of satellite image processing such as in environmental monitoring and consumer apps.
Satellite image processing is a technique to enhance raw images received from cameras or sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life in various applications.
NetVLAD: CNN architecture for weakly supervised place recognition (Geunhee Cho)
This document proposes NetVLAD, a CNN architecture for place recognition. It introduces a new trainable VLAD layer called NetVLAD that aggregates convolutional features into a compact vector representation. The CNN is trained end-to-end on weakly labeled Street View imagery using a triplet ranking loss to learn representations robust to viewpoint and lighting changes. Evaluation on benchmark datasets shows NetVLAD trained this way outperforms previous local feature and CNN-based methods for place recognition.
This document provides an overview of image matching techniques. It defines image matching as geometrically aligning two images so corresponding pixels represent the same scene region. Key aspects covered include detecting invariant local features, describing features in a scale and rotation invariant way using SIFT, and matching features between images. SIFT is highlighted as an extraordinarily robust technique that can handle various geometric and illumination changes. Feature matching is used in many computer vision applications such as image alignment, 3D reconstruction, and object recognition.
Depth Fusion from RGB and Depth Sensors II (Yu Huang)
This document outlines several methods for fusing depth information from RGB and depth sensors. It begins with an outline listing 14 different depth fusion techniques. It then provides more detailed descriptions of several methods:
1. A noise-aware filter is proposed for real-time depth upsampling that takes into account inherent noise in real-time depth data.
2. Integrating LIDAR into stereo disparity computation to reduce false positives and increase density in textureless regions.
3. A probabilistic fusion method combines sparse LIDAR and dense stereo to provide accurate dense depth maps and uncertainty estimates in real-time.
4. A LIDAR-guided approach generates monocular stixels, supporting more efficient
The document discusses the process of creating a digital elevation model (DEM) through photogrammetry, which involves processing overlapping aerial or satellite images into a stereo pair, establishing ground control points, and extracting terrain data to generate an orthorectified DEM and orthophotos through steps like interior and exterior orientation. Key inputs include stereo image pairs, ground control points, and sensor specifications, while the desired outputs are an accurate georeferenced DEM and orthophotos within specified accuracy standards.
This document describes a laser distance measurement system using a webcam. It consists of a laser transmitter and webcam receiver. The laser pulse is reflected off an object and received by the webcam. Software calculates the distance based on the time of flight. The system achieves high accuracy of ±3cm. It calibrates the system using test measurements to determine the relationship between pixel location of the laser dot and actual distance. This allows accurate distance measurements within a few percent of error out to over 2 meters. Potential improvements discussed are using a laser line instead of dot for more data points and a green laser for better visibility.
Stixel based real time object detection for ADAS using surface normal (TaeKang Woo)
The document discusses using surface normal vectors for real-time object detection in autonomous driving applications. The goals are to:
1. Develop a stixel-based stereo vision module running at 15-30 fps for detecting objects and estimating their 3D positions.
2. Validate hypothesis regions of interest (ROIs) using surface normal vectors to improve precision by 10%.
3. Analyze object geometry features and classify objects using surface normal vectors.
Similar to SfM Research Applications Presentation
This document discusses the concept of significant figures and how to determine the number of significant figures in measurements and calculations. It defines significant figures as the "important digits" that indicate the precision of a measurement. Rules are provided for determining significant figures depending on leading or trailing zeros and whether the number is read from left to right or right to left. Examples demonstrate applying these rules and how to round final answers in calculations like addition, subtraction, multiplication and division based on the least precise measurement used. The key takeaway is that significant figures convey precision and final answers should not be more precise than the least precise input.
This document discusses hypothesis testing. It explains that hypothesis testing is used to determine if data is statistically significant enough to reject or fail to reject the null hypothesis. The key aspects covered are:
- Identifying when hypothesis testing is appropriate
- Distinguishing between the null and alternate hypotheses
- Determining whether to reject or fail to reject the null hypothesis based on comparing a test statistic to a critical value from a distribution table
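The decision rule in the last bullet can be sketched as a one-sample z-test. All numbers below are hypothetical, and the sketch assumes a known population standard deviation and a two-sided 5% significance level:

```python
from math import sqrt

def z_test(sample_mean, mu0, sigma, n, z_crit=1.96):
    """One-sample z-test: compute the test statistic and compare it to the
    two-sided critical value (1.96 for a 5% significance level)."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    return z, abs(z) > z_crit  # True means: reject the null hypothesis

# Hypothetical data: H0 says the population mean is 50.
z, reject = z_test(sample_mean=52.3, mu0=50.0, sigma=8.0, n=64)
print(round(z, 2), reject)  # 2.3 True
```

When sigma is unknown, the same structure holds but the statistic follows a t-distribution and the critical value comes from a t-table with n - 1 degrees of freedom.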
This document discusses how scientists measure the hydrologic cycle. It describes traditional methods like stream gaging stations, groundwater wells, and SNOTEL stations to monitor streams, groundwater levels, and snowpack. It also discusses newer geodetic methods like GPS and GRACE satellites that can measure subtle changes in gravity or ground movement related to water storage and flow. These comprehensive measurements across different reservoirs help scientists better understand the complex global hydrologic cycle.
The document discusses how the coastline of North America during the Cretaceous Period 80 million years ago, when a Western Interior Seaway divided the continent, still influences patterns today. It notes that the fertile soil deposited along this ancient coastline attracted slave plantations, and that after emancipation African American populations remained high in these areas. As a result, modern voting patterns follow the same curve as the long-gone Cretaceous coastline, with counties that have larger African American populations voting predominantly Democratic.
This PowerPoint document provides instructions for an activity to analyze climate and biomes using data on cities from around the world. Students will sort city climate information cards into biome categories, plot locations on a map, and fill out a worksheet characterizing climate and biome for each city. The PowerPoint includes over 50 slides providing detailed climate and location data on cities to support categorizing into biomes.
This document provides instructions for tracking weather systems using maps. Students are asked to print maps showing the location of low pressure centers over time. By examining the date and time stamps, students track one low pressure system as it moves across the United States over several days, recording its location on blank maps. They then connect the locations with a line to show the storm's path. Students also have the option to track additional storms, measure distances traveled between maps to calculate speed, or use software to analyze and animate the map images.
This document provides an overview of traditional and geodetic methods for measuring water resources. It discusses the hydrological cycle and key reservoirs and fluxes. Traditional measurements like gauging stations and SNOTEL stations that measure snowpack are introduced. Geodetic methods using GPS and gravity satellites are presented as newer techniques to measure vertical land motion, snow depth, soil moisture, and groundwater levels. Declining trends in snowpack and streamflow in Montana watersheds are highlighted as impacts of climate change on water resources. Stakeholders in water resources like local residents, industry, and government are identified.
This document defines and compares the three main measures of central tendency: mean, median, and mode. It explains that the mean is calculated by adding all values and dividing by the total number of values, the median is the middle value when the values are arranged in order, and the mode is the most frequently occurring value. The document also notes that outliers can affect the mean more than the median or mode. An example calculation is provided to demonstrate how an outlier impacts each measure. The key takeaway is that the mean, median and mode are important for summarizing large datasets with a single representative value.
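The outlier effect described above is easy to demonstrate with Python's standard-library statistics module; the dataset is a made-up example:

```python
from statistics import mean, median, mode

# A small dataset, then the same dataset with one outlier added.
values = [10, 12, 12, 13, 14]
with_outlier = values + [95]

print(mean(values), median(values), mode(values))
# The outlier drags the mean from 12.2 up to 26, while the
# median only shifts from 12 to 12.5 and the mode stays 12.
print(mean(with_outlier), median(with_outlier), mode(with_outlier))
```

This is why the median is preferred for skewed data such as incomes, where a few extreme values would distort the mean.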
Soils are essential to supporting life and human civilization. As populations grow, pressures on soils increase and maintaining soil health is important. Throughout history, human activities like deforestation, overgrazing, and poor irrigation have led to soil degradation problems like erosion, desertification, and salinization. This has negatively impacted societies by reducing agricultural productivity and sometimes causing civilizations to fail. However, more recent initiatives show people rediscovering the importance of soils and taking steps to promote sustainable land use and soil conservation.
The document discusses soil classification systems and soil surveys. It explains that soil taxonomy is a hierarchical system used to classify soils based on observable properties like color, structure, and chemistry. Soils are grouped into increasingly broader categories from the most specific level of series up to the broadest level of order. Soil surveys involve soil scientists mapping and describing soils in a given area in order to group soils with similar properties. The classifications aim to convey information about soil formation and management needs.
The document discusses nutrient management and soil fertility. It outlines key nutrients needed by plants and their analogous benefits for human health, including nitrogen for growth, potassium for water uptake and disease resistance, and calcium for growth and strong bones. It also addresses how soil pH impacts nutrient availability and describes common nutrient deficiencies like zinc deficiency that causes stunted growth and yellowing.
This document discusses several issues that can negatively impact soil quality including disturbed and degraded soil, desertification, deforestation, salinization, run-off, mineral extraction, and wind erosion. These processes can damage soil structure and reduce fertility.
The document discusses the major biomes of the world and the soils typically found within each one. It describes the key biomes as tropical rainforests, temperate forests, boreal forests, grasslands, tundra, deserts, shrublands, and wetlands. Each biome is defined by its climate, vegetation, and characteristic soil orders that form as a result of the particular environmental conditions within that biome.
This document discusses the physical properties and formation of soil. It describes how soil characteristics like color, texture, structure, and horizons/profiles influence water movement, storage, erosion, and plant growth. Soil formation is influenced by climate, organisms, topography, parent material, and time in a process known as CLORPT. The physical properties of soil determine how quickly water can infiltrate and percolate through different soil types.
This document discusses various natural and human-caused processes that can degrade soils, as well as best management practices to mitigate soil degradation. It covers topics like erosion from water and wind, desertification, acidification, salinization, effects of deforestation, urbanization, construction projects, land application of manures and wastes, and mining reclamation. Sustainable land management and soil conservation techniques aim to renew resources rather than deplete them over time through practices like maintaining vegetative cover, controlling grazing intensity, and properly applying nutrients from wastes.
3. WHAT DO YOU CREATE?
A ~500 points/m2 colored point cloud along a ~1 km section of the 2010 El Mayor-Cucapah earthquake rupture, generated from ~500 photographs captured in 2 hours from a helium blimp
Figure Ed Nissen
4. WHAT CAN WE COMPARE SFM TO?
Figure Johnson 2014
5. MATCHING FEATURES
Step 1
Match corresponding features and measure the distances between them on the camera image plane (d, d').
The Scale-Invariant Feature Transform is key to matching corresponding features despite varying distances.
Figure Ed Nissen
7. SCALE-INVARIANT FEATURE TRANSFORM (SIFT)
• Finds matching features in multiple photographs
• Scale, perspective, and illumination of the feature in the photograph do not affect the algorithm
• Used as the input for calculating camera locations
Figure Ed Nissen
8. FIND CAMERA LOCATIONS
Step 2
When we have the matching locations of multiple points on two or more photos, there is usually just one mathematical solution for where the photos were taken.
Therefore, we can calculate individual camera positions (x, y, z), (x’, y’, z’), orientations i, i’, focal lengths f, f’, and relative positions of corresponding features b, h, in a single step known as “bundle adjustment.”
Figure Ed Nissen
9. MULTI-VIEW STEREO
Step 3
Next, a dense point cloud and 3D surface are determined using the known camera parameters and the sparse point cloud as input.
All pixels in all images are used, so the dense model is similar in resolution to the raw photographs (typically 100s–1000s points/m2). This step is called “multi-view stereo matching” (MVS).
Figure Ed Nissen
10. GEORECTIFICATION
Step 4
Georectification means converting the point cloud from an internal, arbitrary coordinate system into a geographical coordinate system. This can be achieved in one of two ways:
Figure Ed Nissen
11. GEORECTIFICATION
Step 4
Georectification means converting the point cloud from an internal, arbitrary coordinate system into a geographical coordinate system. This can be achieved in one of two ways:
• directly, with knowledge of the camera positions and focal lengths
• indirectly, by incorporating a few ground control points (GCPs) with known coordinates. Typically these would be surveyed using differential GPS, with a roving receiver referenced to a GPS base station.
Figure Ed Nissen
12. PRODUCTS
Optional Step 5
Generate derivative products:
• Digital Surface Model
• Orthomosaic for texture mapping
Figure Ed Nissen
Structure-from-Motion photogrammetry is an emerging technique used to create three-dimensional point clouds with associated color: essentially a three-dimensional model of the area of interest. The key things to notice here are that the camera must keep moving (no two photos taken from the same location) and that the photographs must overlap, so each feature appears in multiple photographs. You can see here that many of the gray dots on the 3D model are captured by multiple images, not just one. Scientists use this for many applications, which you will learn about today. The next slide will show what Structure-from-Motion actually produces.
The model above is a three-dimensional point cloud with associated color. This model is of a fault scarp; the red arrows point toward the location of the scarp. You will see a different view of this figure later in the presentation. Most researchers who use SfM use it to make point clouds like these: 3D models of a field area they can revisit and actually measure in the lab instead of the field. Can you think of anything to model using SfM? [Make a list on the board.] Why would SfM be a good method to use for these applications? [add to list on the board.]
Structure-from-Motion is not the only way to create a georeferenced point cloud like the one on the previous slide. The other option is using something called LiDAR. LiDAR laser scans an area of interest to create a 3D point cloud. Two example platforms for LiDAR are shown. You can fly a plane over an area at an altitude of roughly 1 km to get a point cloud or have a scanner on a tripod as shown in B. LiDAR can be used with many other platforms not seen here. Typically, LiDAR is how scientists are able to collect 3D point clouds for research. We will see some direct comparisons between LiDAR and SfM in later slides. (Kendra Johnson, Geosphere 2014)
Note: GSA Publications allows the reproduction of a single figure or table without permission. http://www.geosociety.org/pubs/copyrt.htm
The first step in actually creating the product we saw on the previous slide is to find the same feature in multiple photographs. You can see [spacebar] the red and purple lines tracing the line of sight from the two camera locations to two different features on the ground. This is done using an algorithm called SIFT: Scale-Invariant Feature Transform.
Each red box shows the same feature in multiple photographs. The SIFT algorithm recognizes these as the same feature.
[Read the slide.]
A pioneering reference for this method is: Lowe, D.G., 1999. Object Recognition from Local Scale-invariant Features. International Conference on Computer Vision, Corfu, Greece, pp. 1150–1157.
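The matching criterion at the heart of Lowe's method, the "ratio test," can be sketched in a few lines. This is a simplified illustration with tiny made-up descriptors (real SIFT descriptors are 128-dimensional), not the production algorithm:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: accept a match only when the nearest descriptor
    in desc_b is much closer than the second-nearest, so ambiguous matches
    between similar-looking features are dropped."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j, k = np.argsort(dists)[:2]                # nearest, second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Tiny 2-D stand-ins for SIFT descriptors: the first feature has an
# unambiguous twin in the second image; the second is ambiguous and dropped.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.05], [5.0, 5.0], [0.0, 1.1], [0.0, 0.9]])
print(ratio_test_matches(a, b))  # → [(0, 0)]
```

In practice a library implementation (e.g. an OpenCV feature matcher) would be used, but the accept/reject logic is exactly this comparison of nearest against second-nearest distances.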
When we have the matching locations of multiple points on two or more photos, there is usually just one mathematical solution for where the photos were taken. [spacebar] This step, bundle adjustment, results in the locations of the cameras and a sparse point cloud—like the one we saw on the earlier slide, but with significantly fewer points.
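The quantity that bundle adjustment minimizes is the reprojection error: the image-plane distance between where a 3D point projects through each estimated camera and where it was actually observed. A deliberately simplified sketch (axis-aligned pinhole cameras, rotation omitted, all numbers made up):

```python
import numpy as np

def project(point_3d, camera_pos, focal):
    """Project a world point through an axis-aligned pinhole camera
    (camera rotation omitted for brevity) onto the image plane."""
    rel = point_3d - camera_pos          # point in camera coordinates
    return focal * rel[:2] / rel[2]      # perspective division

def reprojection_error(point_3d, cameras, observations):
    """Sum of squared image-plane residuals -- the objective that bundle
    adjustment minimizes over all cameras and points simultaneously."""
    err = 0.0
    for (pos, f), obs in zip(cameras, observations):
        err += np.sum((project(point_3d, pos, f) - obs) ** 2)
    return err

point = np.array([0.0, 0.0, 10.0])
cameras = [(np.array([0.0, 0.0, 0.0]), 1.0),
           (np.array([1.0, 0.0, 0.0]), 1.0)]
obs = [project(point, pos, f) for pos, f in cameras]   # perfect observations
print(reprojection_error(point, cameras, obs))         # → 0.0
```

Real solvers perturb the camera positions, orientations, focal lengths, and 3D points together until this error is as small as possible.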
Multi-view stereo is a process that results in the creation of a dense point cloud, like the example on the earlier slide. The MVS algorithm takes the sparse point cloud and camera locations to populate the model with more points. The resultant point cloud may have millions of points.
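Underlying dense stereo matching is the basic geometric relation that depth is inversely proportional to disparity: for two cameras separated by baseline b with focal length f, a feature shifted by d pixels between images lies at depth Z = f·b / d. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation Z = f * b / d: larger pixel shifts
    between the two photos mean the point is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 50 px between photos taken 2 m apart with a
# 1000 px focal length lies 40 m away.
print(depth_from_disparity(1000.0, 2.0, 50.0))  # → 40.0
```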
Georectification means converting the point cloud from an internal, arbitrary coordinate system into a geographical coordinate system. This step is essential; we want to be able to link the model to where it actually is in space. This results in the ability to measure features in the model, because it will be correctly scaled. [spacebar]
The process of georectification can be achieved in two ways: directly or indirectly. Generally, the indirect process is what scientists use. This way, the model is not only correctly scaled but also correctly located geographically. The number of ground control points varies from project to project; a minimum of three is needed, and more than 10 per project is recommended. Find points that are clearly recognizable, or use targets (usually the recommended option).
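Indirect georectification amounts to fitting a similarity transform (scale, rotation, translation) that maps the model's arbitrary coordinates onto the surveyed GCP coordinates. A least-squares sketch using Umeyama's method, shown in 2D with hypothetical point pairs:

```python
import numpy as np

def fit_similarity(model_pts, world_pts):
    """Estimate scale s, rotation R, translation t so that
    world ≈ s * R @ model + t (Umeyama's least-squares method)."""
    mu_m, mu_w = model_pts.mean(axis=0), world_pts.mean(axis=0)
    A, B = model_pts - mu_m, world_pts - mu_w
    cov = B.T @ A / len(model_pts)           # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(cov.shape[0])
    if np.linalg.det(U @ Vt) < 0:            # guard against mirror solutions
        D[-1, -1] = -1.0
    R = U @ D @ Vt
    s = np.sum(S * np.diag(D)) / A.var(axis=0).sum()
    t = mu_w - s * R @ mu_m
    return s, R, t

# Model coordinates (arbitrary SfM frame) and their surveyed GCP positions,
# related here by scale 2, a 90-degree rotation, and a shift.
model_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
world_pts = np.array([[10.0, 5.0], [10.0, 7.0], [8.0, 5.0], [8.0, 7.0]])
s, R, t = fit_similarity(model_pts, world_pts)
print(round(s, 6), np.round(t, 6))
```

With three or more well-spread GCPs the fit is overdetermined, which is why extra control points improve (and let you check) the georeferencing.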
The last step in the production of an SfM model is to create the final products. Generally, people use the topography (digital surface model) or the imagery (orthomosaic). The next slides show some examples.
This photo density map shows the amount of overlap between the photos in a dataset. Each dot is a camera location. You can see that overlap is highest where camera locations cluster, such as the center, but that in this model every spot was seen by at least two photographs.
The orthomosaic is a high-resolution photograph of the area created by associating color with each point in the model. This can be used to view the features on the surface.
DEM – digital elevation model. This one is hillshaded: rendered as if lit by the sun, with partial shadows that emphasize the features. You can see that different features stand out in the DEM compared to the orthomosaic, so the two can be used for different research applications. You can use the DEM to measure features.
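Hillshading a DEM is a standard raster operation: compute slope and aspect from the elevation gradients, then shade each cell according to an assumed sun position. A minimal sketch (the function name and default sun position are illustrative):

```python
import numpy as np

def hillshade(dem, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade a DEM as if lit from a given sun azimuth and altitude,
    returning values in [0, 1] (standard slope/aspect formulation)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass to math convention
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell_size)    # elevation gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# A tilted plane: every cell gets uniform shading from the same sun.
dem = np.outer(np.arange(5), np.ones(5))
print(hillshade(dem).shape)  # → (5, 5)
```

A flat DEM shades uniformly to sin(altitude); slopes facing the sun brighten and slopes facing away darken, which is what makes scarps and channels pop out visually.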
Many platforms exist to assist in capturing SfM data. The figure above shows a range of platforms: a UAS on the far left, a handheld camera on the mid-left, pole photography on the mid-right, and a balloon on the far right. The squares with a black pattern are targets, points to survey in so the model can be linked to geographic coordinates. A handheld platform is best for detail work and some outcrop-scale work, but has limited applications and is not ideal for areas larger than 100–200 square meters. The following slides will go into the details of the other platforms.
One form of ground-based SfM is pole photography. In the photo shown, the pole is used to take photographs of the horizontal outcrop. For scale, the man holding the pole is 1.85 m tall. Poles are useful because they are inexpensive, good for photographing outcrops, easy to use, and lightweight. They can be quite high—although poles higher than around 20 feet generally need a tripod for stability. However, poles are inefficient in comparison to using an aerial system and require a mount for the camera, as most poles are intended for other applications.
Kites are used frequently as a platform. This example is from Amara West, an archaeological site. Kites are inexpensive, but reliant on wind. Kites also require a picavet (shown in the left photo). Kites allow photographs from a useful height for photographing topography, require no helium (unlike balloons), and have no legal complications (unlike UASs) because they use a tether.
This is just an example of one type of balloon. This one is called a heli kite because it has a kite-like tail but is filled with helium. The heli kite carries small cameras only (other balloons may carry larger cameras). Balloons and kites have similar advantages and disadvantages, as well as similar applications. The main difference is that balloons have no weather requirements but do need helium (~$180 per canister). Figure Kate Shervais, photographs taken by UNAVCO
UASs are the final category of SfM platform. The cost is highly variable depending on type; one can spend thousands of dollars. The UAS shown in this photograph was around 1000 dollars. The height, camera position, and flightlines are easily controlled using a UAS. However, UASs require a skilled operator, batteries may limit survey time to ten minutes or less, and they may require a lighter camera than a balloon can carry. The legal landscape is also quite complex and frequently changing, so consult legal counsel before using one.
This video shows the view of a UAS during flight in the Pofadder shear zone between South Africa and Namibia. At the end of the video, you can see the operator on the left side of the frame wearing a baseball cap. This flight was one of many used to create a model of the outcrop.
The blue rectangles show the camera locations for the photos used for the model. The photos line up and show the flight paths taken [spacebar]. As you can see, many flight paths were combined to create the model of the shear zone, not just the one shown in the video.
The first example of a ground-based SfM model is this basalt sample. Photographs were taken at each of the blue rectangles (indicating the calculated camera location).
When the camera locations are removed, you can see the sample a bit better. The black patterns on the cloth underneath the sample are used to scale the image. This model is used for students; they can look at hand samples in a digital form in addition to the hand sample in the classroom.
This model was created to show an outcrop at Beavertail State Park in Rhode Island. The model has been used by students to map the different small-scale folds present in the outcrop. You can see in this side view that the edges of the model have been blurred; the blurring effect is due to low photo density at the edges of the model.
Here is the top view of the Beavertail State Park outcrop model.
In this view, you can see the 3D topography of the area shown in the flight video.
The orthomosaic looks like this. This orthomosaic was used as the base for a geologic map.
The map characterizes the shear zone based on rock type and orientation. Like a typical geologic map, the different colors correspond to different rock types. The white lines correspond to faults or fold axes.
This example of a paleoseismic trench is in Alpine, Utah, and is on the Wasatch Fault. Classically, trenches are dug to show the Quaternary deformation history of a fault. The trench crosses a fault scarp, like the ones shown on earlier slides. The walls of the trench have layers of material displaced by the fault that can be dated to find the age of the slip event. In addition, the layers are mapped. These maps used to take tens of hours to generate, but using SfM, they are relatively efficient to create. However, this model shows one issue – the areas that lack enough photographs result in black areas with no data as circled here. [spacebar]
[spacebar] The photo in the top right shows a heli kite; this is not used for trench data collection but is a cool platform!
The Oquirrh fault scarp is located in western Utah. This fault is a normal fault, like many of the faults in Utah (Basin and Range extension). This survey was conducted using a UAS: a drone. You can clearly see the fault scarp [spacebar] as the break in slope on the hill.
Aerial photographs have been used in the geosciences as a base for mapping and to see structures from another viewpoint. These aerial photos can also be used for SfM. This model was made from only five photographs, but the topography is very clear. These models can then be used as a base map for planning geologic research.
This slide also shows images of a fault scarp. The Landers fault scarp, located in Southern California, resulted from a M7.3 earthquake in 1992. The image above shows examples of the SfM DEM (top) and a DEM generated using TLS (terrestrial laser scanning). TLS requires expensive equipment (hundreds of thousands of dollars) and a higher level of expertise to conduct a survey. As you can see, the point clouds are quite comparable. The TLS point cloud has a lower point density away from the fault scarp.
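One common way to compare point clouds like these SfM and TLS datasets is cloud-to-cloud nearest-neighbor distances. A brute-force sketch (real clouds have millions of points and would need a KD-tree; the points here are made up):

```python
import numpy as np

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """For each point in cloud_a, the distance to its nearest neighbor in
    cloud_b. Brute force via broadcasting: fine for small clouds only."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return d.min(axis=1)

# Two tiny clouds that agree to within 10 cm in one spot and exactly elsewhere.
sfm = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tls = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
print(cloud_to_cloud_distance(sfm, tls))
```

Summarizing these distances (mean, spread, spatial pattern) quantifies how "comparable" the two surveys actually are.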
This view is of the Granby landslide, [spacebar] circled here in red. This slow-moving landslide has been active from 2007 to the present and is frequently monitored by researchers. It is important to study because workers would like to slow the landslide until it is no longer a hazard. A different view of the landslide is here. [spacebar]
The edge of the landslide can be seen here. [spacebar]
This landslide is in Utah, on the Wasatch front. Landslide activity in this area (around Salt Lake City) is linked to activity on the Wasatch Fault.
Another application of SfM is looking at geomorphologic change. In 2013, a major flood occurred in Boulder, Colorado. Prior to the flood, an ALS survey was conducted of the Four Mile Creek area in North Boulder. After the flood, SfM and TLS surveys were conducted. “Change detection” is when you take a dataset from two different times and subtract them—think of it as simple math. At the point on both surfaces with the same GPS location, what’s the difference in elevation? This uses the SfM model and the ALS survey. The color corresponds to how much the elevation has changed in that place. On the next slide we will see a very similar figure produced using TLS and ALS.
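The subtraction described here, often called a DEM of difference, really is simple grid math. A minimal sketch, with a hypothetical noise threshold for masking changes too small to trust:

```python
import numpy as np

def dem_of_difference(dem_after, dem_before, threshold=0.1):
    """Change detection as grid subtraction: positive values indicate
    deposition, negative values erosion. Changes smaller than the noise
    threshold (in meters) are masked out as indistinguishable from error."""
    diff = dem_after - dem_before
    diff[np.abs(diff) < threshold] = np.nan   # treat near-zero change as noise
    return diff

# Hypothetical 2x2 grids: one cell eroded 0.5 m, one aggraded 0.5 m.
before = np.array([[10.0, 10.0], [10.0, 10.0]])
after  = np.array([[10.0,  9.5], [10.5, 10.0]])
print(dem_of_difference(after, before))
```

The colored maps on these slides are exactly this difference grid, with the color ramp mapped to the elevation change at each cell.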
In this figure, we have the same view as the previous slide. The color ramp is the opposite of the previous slide (so the blue on the earlier slide is orange/red here). Both figures show the change in the channel. TLS and SfM can make very similar products.
This survey was conducted using a UAS. The Salton Sea is located in Southern California near the Mexico border and was created when a combination of bad irrigation practices and historic flooding of the Colorado River led to the Salton Sink, one of the lowest elevations in California, filling with water. This part of Southern California is a desert, so the shoreline has fluctuated with time. This survey was done to characterize the geologic evidence for the history of the Salton Sea shoreline. The ground control points used to georeference the survey are shown in the red circles [spacebar]. The oldest shoreline of the Salton Sea is shown by the red dotted line. Two drainages are here [spacebar]. Finally, you can see the tarp used to land the UAS here [spacebar].
This is the DEM created from the model shown on the previous slide. In the DEM, some features appear that were difficult to see with the orthomosaic. For example, you can see mudcracks [spacebar], as well as footprints in the sand [spacebar] and strandlines (lines created by stranded fish carcasses that show the previous furthest extent of the water). [spacebar].
Another application of SfM is archaeology. This example is from the British Museum’s Egypt department. They primarily use kite-based photogrammetry. The setup is shown in the left photo and a photograph from the camera attached to the kite on the right.
Here is a portion of a model created from the kite photography.