Drone-based monitoring and 3D modeling of the National Arboretum in Canberra is allowing for detailed phenotyping of tree growth over time. Drones equipped with RGB and multispectral cameras capture aerial images that are processed using software like Pix4D to generate orthomosaic images, digital elevation models, 3D point clouds, and tree metrics like height and area. The data is helping researchers monitor changes in the young research forest over several years. Advanced visualization tools are being developed to better explore the large, complex datasets.
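The height and area metrics mentioned above are typically computed by differencing the elevation products: subtracting the digital terrain model from the digital surface model yields a canopy height model. A minimal NumPy sketch (the raster values and the 5 cm ground sample distance are hypothetical, assuming co-registered DSM and DTM patches in metres):

```python
import numpy as np

# Hypothetical 3x3 patches of co-registered rasters, values in metres.
dsm = np.array([[412.0, 415.5, 413.2],   # digital surface model (canopy top)
                [411.8, 418.0, 414.1],
                [410.9, 412.3, 411.5]])
dtm = np.array([[410.0, 410.2, 410.1],   # digital terrain model (bare earth)
                [410.1, 410.3, 410.2],
                [410.0, 410.1, 410.1]])

# Canopy height model: per-pixel height above ground; clamp noise below zero.
chm = np.clip(dsm - dtm, 0.0, None)

tree_height = chm.max()                     # tallest point in the patch
canopy_area = (chm > 1.0).sum() * 0.05**2   # m^2 above 1 m, assuming 5 cm pixels
print(round(float(tree_height), 2))         # 7.7
print(round(float(canopy_area), 4))         # 0.02
```

Photogrammetry suites such as Pix4D export these rasters directly; the differencing step itself is this simple once the layers share a grid.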
Advantages and disadvantages of Remote Sensing – Er Abhi Vashi
This document discusses the advantages and disadvantages of remote sensing. Some key advantages include large area coverage allowing regional surveys, repetitive coverage enabling monitoring of dynamic themes, and data being acquired at different scales and resolutions. Disadvantages include remote sensing being expensive for small areas, requiring specialized training, and human errors potentially being introduced. The document provides 15 advantages and 9 disadvantages of remote sensing in detail.
Elevation mapping using stereo vision enabled heterogeneous multi-agent robot... – Aritra Sarkar
This document summarizes a student project on elevation mapping using stereo vision via a heterogeneous multi-agent cloud network. The project aims to develop a swarm of rovers that can collaboratively build 3D elevation maps of terrain using stereo cameras and communicate over a cloud network. Key aspects covered include the motivation, related work on swarm robotics, space robotics, cloud computing and vision sensors. The document outlines the basic system design, program flow, hardware, software, algorithms for stereo vision, odometry and map representation. Results are presented from testing the system on KITTI and IIST datasets.
The presentation was compiled from available online sources. It was discussed during a lecture held on 16-02-2017 at B. M. C. E. T., Surat, for the BE II Civil Engineering students, with the aim of sensitizing students to Remote Sensing and Geographical Information Systems.
Global illumination for PIXAR movie production – Jaehyun Jang
1) PIXAR transitioned from the point-based global illumination techniques it adopted in 2010 to physically-based ray tracing, starting in 2013 with Monsters University, as processing power increased.
2) Physically-based rendering solves the rendering equation using separate integrators for direct lighting, indirect diffuse lighting, and indirect specular lighting working together based on the scene's BRDFs and light sources.
3) PIXAR continues research in sampling techniques like importance sampling and stratified sampling to improve the performance and quality of their physically-based Monte Carlo ray tracing pipeline used for production rendering.
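The importance-sampling idea behind that research can be illustrated on a toy irradiance integral: for constant incoming radiance L, the integral of L cos(theta) over the hemisphere equals pi * L, and cosine-weighted sampling estimates it with zero variance while uniform sampling does not. A hedged Python sketch (an illustrative textbook construction, not PIXAR's actual pipeline):

```python
import math, random

random.seed(0)

def sample_uniform_hemisphere():
    # Uniform direction over the hemisphere: z uniform in [0, 1), pdf = 1/(2*pi).
    cos_theta = random.random()
    return cos_theta, 1.0 / (2.0 * math.pi)

def sample_cosine_hemisphere():
    # Cosine-weighted direction: pdf = cos_theta / pi (importance sampling).
    cos_theta = math.sqrt(1.0 - random.random())   # in (0, 1], avoids pdf = 0
    return cos_theta, cos_theta / math.pi

L = 2.0        # constant incoming radiance (hypothetical value)
N = 10_000

def estimate(sampler):
    # Standard Monte Carlo estimator: average of integrand(x) / pdf(x).
    total = 0.0
    for _ in range(N):
        cos_theta, pdf = sampler()
        total += L * cos_theta / pdf
    return total / N

exact = math.pi * L                          # analytic result for constant L
est_uniform = estimate(sample_uniform_hemisphere)
est_cosine = estimate(sample_cosine_hemisphere)
print(round(est_uniform, 3))   # near 6.283, with visible sampling noise
print(round(est_cosine, 3))    # 6.283: the integrand/pdf ratio is constant
```

When the pdf is proportional to the integrand, every sample contributes the same value, which is why matching sampling distributions to BRDFs and lights pays off so heavily in production rendering.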
Remote Telepresence for Exploring Virtual Worlds – Larry Smarr
The document describes the history and development of remote telepresence and virtual reality technologies over several decades. It outlines key projects and innovations including the NSFnet which connected supercomputers in the 1980s, the development of the CAVE virtual reality system in the early 1990s, and more advanced optical network projects like OptIPuter in the 2000s which enabled high-resolution telepresence and collaboration across global research centers.
2016 ASPRS track: overview and user perspective of USGS 3DEP lidar by John ... – GIS in the Rockies
This document provides an overview of 3D Elevation Program (3DEP) lidar data. It defines key lidar terminology and describes the various products that can be derived from lidar point clouds, including digital surface models (DSMs), digital terrain models (DTMs), intensity images, and canopy height models. It explains the quality levels and availability of 3DEP lidar data across the United States. The document also briefly discusses the latest lidar technologies, such as multispectral, single-photon, and Geiger-mode lidar.
Satellite image processing is a set of techniques for enhancing raw images received from cameras or sensors on satellites, space probes, and aircraft, as well as ordinary photographs used in various applications. One key task is creating thematic maps that show the spatial distribution of particular information; these maps are built from spectral bands, which have constant density and whose densities add where they overlap. Image analysis across multiple scales captures comprehensive information about a system for different applications. Example themes include soil, vegetation, water depth, and air. Monitoring such critical events requires a huge volume of surveillance data and extremely powerful real-time processing infrastructure.
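As a concrete instance of band arithmetic for a thematic map, a common vegetation theme is derived from the red and near-infrared bands via NDVI; the reflectance values and the 0.3 threshold below are hypothetical:

```python
import numpy as np

# Hypothetical red and near-infrared reflectance bands (0..1 scale).
red = np.array([[0.10, 0.40],
                [0.08, 0.35]])
nir = np.array([[0.50, 0.45],
                [0.60, 0.30]])

# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; high values indicate vegetation.
ndvi = (nir - red) / (nir + red)

# Threshold into a two-class thematic map: vegetation vs. everything else.
theme = np.where(ndvi > 0.3, "vegetation", "other")
print(np.round(ndvi, 3))
print(theme)
```

The same per-pixel pattern (band combination followed by classification) underlies most thematic mapping, whether the classifier is a simple threshold or a trained model.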
2017 ASPRS-RMR Big Data Track: Practical Considerations and Uses of USGS 3DEP... – GIS in the Rockies
This document provides an overview of lidar technology and 3DEP lidar data products. It begins with a brief introduction to lidar terminology and data types. It then describes the key 3DEP lidar deliverables, including classified point clouds, DEMs, and metadata. The document outlines how to access and download current 3DEP lidar coverage and products from the National Map website. It also introduces some emerging single-photon and Geiger-mode lidar technologies that could impact future 3DEP data collection.
Landmines cause enormous humanitarian and economic problems around the world. This document proposes using image processing techniques to detect landmines. It discusses using filters like median and weighted median filters to reduce noise in images captured by sensor devices. A thresholding method is also introduced to convert images to binary to aid in landmine detection. The goal is to develop a safe and efficient landmine detection system using these image processing methods.
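A minimal sketch of that pipeline, using a hand-rolled 3x3 median filter and a global threshold on a synthetic image (the image values and the threshold of 100 are illustrative assumptions, not parameters from the document):

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter with edge replication; suppresses salt-and-pepper noise.
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# Hypothetical sensor image: a bright target region plus two impulse-noise pixels.
img = np.full((7, 7), 20.0)
img[2:5, 2:5] = 200.0     # candidate landmine signature
img[0, 6] = 255.0         # salt noise
img[6, 0] = 0.0           # pepper noise

smoothed = median_filter3(img)
binary = (smoothed > 100).astype(np.uint8)   # global threshold to a binary map
print(binary)
```

The median filter removes the isolated impulse pixels before thresholding, so the binary map keeps only the coherent bright region; a weighted median, as mentioned above, would simply repeat selected neighborhood values before taking the median.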
WE2.L10.1: LANDSAT DATA PRODUCTS, FREE AND CLEAR – grssieee
This document discusses the Landsat program and free distribution of Landsat data. It provides details on the large Landsat data archive containing over 2 million scenes. In 2008, the USGS announced that any Landsat scene selected would be processed and distributed free of charge. This led to a large increase in data distribution, from a previous maximum of 20,000 scenes per year to over 1 million free scenes distributed in the first year. The upcoming Landsat Data Continuity Mission launching in 2012 will continue free global data collection and distribution.
The document describes the University of Surrey's proposal for the Selene lunar rover for a lunar robotics challenge. The proposal involves converting a Pioneer 3-AT mobile robot into a tracked lunar rover with a deployable relay nanorover to assist with communications and localization. Key aspects of the rover design include the tracked mobility system, stereoscopic vision for navigation, and a robotic arm for sample collection.
This document discusses remote sensing and geographical information systems in civil engineering. It covers various topics related to remote sensing sensors including optical sensors, thermal scanners, multispectral sensors, passive and active sensors, scanning and non-scanning sensors, imaging and non-imaging sensors, and the different types of resolutions including spatial, spectral, radiometric, and temporal resolution. It provides examples and illustrations of these concepts.
The document discusses strategies for optimizing data collection for the proposed GEO-CAPE mission through intelligent observation studies conducted by NASA's Goddard Space Flight Center team. Key findings include developing an observation operations simulator to examine different instrument and scheduling options, identifying cloud detection algorithms including the value of shortwave infrared bands, and establishing the feasibility of approaches like onboard cloud detection to reduce data handling costs. The simulator could be used to test different "what if" scenarios incorporating actual cloud forecast data and characterize instruments based on scene footprints. Possible future work involves integrating a scheduler with the simulator for live simulations, updating the tool with user feedback, and further analyzing instruments and cloud detection capabilities.
Recent observed environmental changes, as well as projections in the fourth assessment report of the Intergovernmental Panel on Climate Change, point to likely dramatic consequences of a changing mountain cryosphere under climate change. Some very destructive geological processes are triggered or intensified, influencing the stability of slopes and possibly inducing landslides. Unfortunately, the interaction between these complex processes is poorly understood. This project addresses the key issues in responding to such changing conditions: monitoring and warning systems for the spatial and temporal detection of newly forming hazards, as well as extending the quantitative understanding of these changing natural systems and our predictive capabilities.
This thesis proposal aims to develop a system called Eureka to efficiently discover training data for visual machine learning tasks. Eureka combines early discard filters, just-in-time machine learning, and the ability to create more accurate filters without writing new code. The goal is to reduce the manual effort required of domain experts to find and label rare phenomena in large unlabeled visual datasets. The proposal outlines research thrusts to apply Eureka in different computing environments like edge, cloud, and smart storage, as well as different problem domains including images, videos, and other multidimensional data. Initial experiments show Eureka can discover more true positives per unit time compared to naive hand-labeling.
Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+,
and EO-1 ALI sensors
Gyanesh Chander a,*, Brian L. Markham b, Dennis L. Helder c
a SGT, Inc., contractor to the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198-0001, USA
b National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), Greenbelt, MD 20771, USA
c South Dakota State University (SDSU), Brookings, SD 57007, USA
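The coefficients such papers tabulate feed the standard linear DN-to-radiance conversion; a sketch with placeholder Lmin/Lmax values (not the published coefficients for any actual band):

```python
# Converting quantized digital numbers (DN) to at-sensor spectral radiance via
# the linear model that radiometric calibration papers tabulate coefficients for:
#   L_lambda = ((Lmax - Lmin) / (Qcalmax - Qcalmin)) * (Qcal - Qcalmin) + Lmin
# The Lmin/Lmax values below are illustrative placeholders.

def dn_to_radiance(qcal, lmin, lmax, qcalmin=1, qcalmax=255):
    gain = (lmax - lmin) / (qcalmax - qcalmin)   # rescaling gain, G_rescale
    return gain * (qcal - qcalmin) + lmin        # offset folded in via Lmin

# Example: DN of 128 with placeholder Lmin = -1.52, Lmax = 193.0 (W/(m^2 sr um)).
print(round(dn_to_radiance(128, -1.52, 193.0), 3))   # 95.74
```

An atmospheric or reflectance product then builds on this radiance, which is why the gain/bias tables in the calibration literature matter for any quantitative time-series work.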
Chronological Calibration Methods for Landsat Satellite Images – iosrjce
This document describes methods for chronologically calibrating Landsat satellite images to account for differences when images are taken days apart. It discusses correcting ETM+ images for scan line failures and converting digital numbers to reflectance. Two methods are proposed to remove phenological effects between Landsat 7 and 8 images taken 8 days apart: linear regression and cross-correlation. Image classification using the visible red and near infrared bands is used to validate the correction methods by comparing land cover detection in study area images.
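The linear-regression variant can be sketched on synthetic data: fit the day-8 reflectance against the day-0 reflectance for matching pixels, then invert the fit to normalize the later image (the 1.1 gain, 0.02 offset, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reflectance for the same 500 pixels on two acquisition dates;
# the later date differs by an invented phenological gain/offset plus noise.
ref_day0 = rng.uniform(0.05, 0.45, size=500)
ref_day8 = 1.1 * ref_day0 + 0.02 + rng.normal(0.0, 0.005, size=500)

# Fit day8 = a * day0 + b, then invert the fit to normalize day8 back to day0.
a, b = np.polyfit(ref_day0, ref_day8, 1)
normalized = (ref_day8 - b) / a

resid = float(np.abs(normalized - ref_day0).mean())
print(round(a, 2), round(b, 3))   # recovers roughly the 1.1 gain, 0.02 offset
print(round(resid, 4))
```

After normalization, classification results from the two dates can be compared directly, which is how the correction methods are validated in the document.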
TraitCapture: NextGen Monitoring and Visualization from seed to ecosystem – TimeScience
This document discusses next generation software and hardware for plant phenotyping and ecosystem monitoring. It outlines challenges such as processing and managing large datasets, and optimizing open data sharing. Emerging tools discussed for high resolution field phenotyping include gigapixel imaging, drones, LiDAR, virtual/augmented reality, and sensor networks. A case study is presented of a sensor array installed at the Australian National Arboretum to monitor the environment, tree growth phenotypes, and genotypes over time at high precision across the landscape. The goal is to address fundamental ecological questions by capturing data at finer spatial and temporal resolutions than previously possible.
From gigapixel timelapse cameras to unmanned aerial vehicles to smartphones: ... – TimeScience
This document discusses emerging remote sensing technologies that can help address limitations in ecosystem monitoring. It describes technologies like phenocam networks, gigapixel timelapse cameras, unmanned aerial vehicles, kite and balloon photography, and harnessing citizen science photos. These near remote sensing tools can capture data at intermediate scales between satellite imagery and ground-based sampling to improve spatial and temporal resolution. They have the potential to exponentially increase rates of data collection and enable new types of collaborative, data-driven ecosystem science.
This document provides information about the Environmental Remote Sensing course GEOG 2021. It introduces the structure and content of the course, including lectures, practical sessions, assessment, and reading materials. The course is split into two halves, with the first introducing remote sensing concepts and the second focusing on a practical example. Lectures are on Mondays and practical sessions on Thursdays. Assessment consists of an exam and a coursework write-up. Relevant reading materials and online resources are also listed.
Roelof Pieters (Overstory) – Tackling Forest Fires and Deforestation with Sat... – Codiax
This document provides an overview of Overstory, a company that uses satellite data and AI to monitor forests and tackle issues like deforestation and wildfires. It discusses how Overstory uses machine learning on high-resolution satellite imagery to create segmentation maps and monitor changes in forests over time. It also describes Overstory's infrastructure including its use of JupyterHub, Dask, and Papermill to enable large-scale distributed processing of satellite data and training of deep learning models.
Running a GPU burst for Multi-Messenger Astrophysics with IceCube across all ... – Igor Sfiligoi
- IceCube is a neutrino observatory that detects high-energy neutrinos from astrophysical sources to study violent cosmic events, using over 5,000 optical sensors buried in Antarctic ice.
- A cloud burst was performed using over 50,000 GPUs across multiple cloud providers worldwide to simulate photon propagation through ice for IceCube data analysis. This was the largest cloud simulation to date and demonstrated the ability to burst at exascale.
- The simulation helped improve IceCube's neutrino detection and pointing resolution to identify the first known source of high-energy neutrinos, a blazar, demonstrating IceCube's potential for multi-messenger astrophysics.
Running a GPU burst for Multi-Messenger Astrophysics with IceCube across all ... – Frank Wuerthwein
- The document describes running a GPU burst simulation for IceCube astrophysics research across 50,000 NVIDIA GPUs in multiple cloud platforms globally, achieving 350 petaflops for 2 hours.
- IceCube detects high-energy neutrinos to study violent astrophysical events by observing the interactions of neutrinos within a cubic kilometer of Antarctic ice instrumented with sensors.
- The GPU burst simulation campaign helped improve IceCube's ability to reconstruct neutrino direction and energy and identify astrophysical sources through multi-messenger astrophysics.
2015-08-13 ESA: NextGen tools for scaling from seeds to traits to ecosystems – TimeScience
This document discusses using new technologies to monitor ecosystems and plants at high resolution over time. It proposes collecting detailed data on individual plants and trees in fields and forests through methods like:
- Automated imaging of plant growth in controlled lab conditions
- Sensor networks and remote sensing to generate 3D models of field sites from aerial drones and ground-based laser scanning
- Genotyping every plant to correlate phenotypes with genetics
The goal is to generate massive, multilayer datasets that track environmental and genetic factors over time at plant and ecosystem scales, analogous to advances in high-throughput genomics and phenomics. This would transform ecological understanding and address global challenges around food security and climate change.
VISION / AMBITION
- Australia the first drone-sensed nation (cm-scale)
- Pre-competitive data release for industry, environmental management, education & research
- Conventional survey & remote sensing techniques at ultra-high resolution and flexibility (time-series, rapid response, etc.)
- Next-gen “UNDERCOVER” techniques (minerals and water resources)
This is the keynote talk fkw gave at cloudnet 2020. It covers all three cloudbursts we did. As of early 2021, slides 26ff is still the most detailed documentation of the 3rd cloudburst. This material will be covered in a future conference paper.
Garuda Robotics x DataScience SG Meetup (Sep 2015)Eugene Yan Ziyou
What exactly goes on in the commercial drone/UAV industry in Singapore and globally? Behind the hype of consumer “selfie” drones lies a vast number of interesting commercial applications, where drones become an enabler for enterprises to gain new aerial perspectives of their facilities and estates, to make intelligent decisions incorporating this additional dimension of data.
In this presentation, we will look at one such drones-at-work application to reveal some of the behind-the-scene processes and technologies employed. Specifically, we will dive into the precision agriculture domain and share some of the computer vision problems we face, and take a look at various potential solutions to these challenges.
Gigapixel imaging, ESA Australia, Dec 2012TimeScience
This document discusses using gigapixel resolution time-lapse imaging to monitor phenological changes in plants over large landscapes. A new camera system called Gigavision was created that can capture daily 1.5 billion pixel panoramas over 7 hectares from a single vantage point using solar power. Over two growing seasons, the camera captured over 184,000 images totaling 6 terabytes of data. The high-resolution images allow identifying over 500 individual plants and monitoring their growth and changes over time. Such long-term, high-resolution ecological data collection could improve understanding of ecosystem dynamics.
Rescuing the Honey Bee with Kinetica, NVIDIA, and MicrosoftKinetica
Explore how technologies from Kinetica, NVIDIA, and Microsoft are being used to save the honey bee.
Watch the webinar to learn more https://www.kinetica.com/events/kinetica-nvidia-and-microsoft-join-forces-to-help-rescue-the-honey-bee/
Accelerating Science with Cloud Technologies in the ABoVE Science CloudGlobus
This document summarizes the use of the ABoVE Science Cloud (ASC) to support research for the Arctic-Boreal Vulnerability Experiment (ABoVE). The ASC provides researchers with large datasets, computing resources, and tools to process and analyze remote sensing and model data related to Alaska and northern Canada. Several examples are given of projects using the ASC, including analyzing satellite imagery to map forest structure, tracking surface water changes over time, characterizing fire history, and modeling future forest composition under climate change. The ASC aims to facilitate collaboration by allowing scientists to access common datasets and run computationally-intensive processes in the cloud without having to directly transfer large amounts of data.
"Building and running the cloud GPU vacuum cleaner"Frank Wuerthwein
This talk, describing the "Largest Cloud Simulation in History" (Jensen Huang at SC19), was given at the MAGIC meeting on Dec. 4th 2019. MAGIC stands for "Middleware and Grid Interagency Cooperation", and is a group within NITRD. Current federal agencies that are members of MAGIC include DOC, DOD, DOE, HHS, NASA, and NSF.
Scalable Deep Learning in ExtremeEarth-phiweek19ExtremeEarth
This document summarizes a presentation about scalable deep learning techniques for analyzing Copernicus Earth observation data using the Hopsworks platform. The presentation discusses Hopsworks' end-to-end machine learning pipelines, feature engineering capabilities, distributed deep learning techniques like data parallel training, and applications of these techniques to challenges in classifying satellite imagery like sea ice mapping. Deep learning architectures, preprocessing steps, and distributed training methods are highlighted as areas of ongoing work and improvement for analyzing large volumes of remote sensing data on Hopsworks.
TraitCapture:Open source tools for DIY high throughput Phenomics and NextGen ...TimeScience
This document summarizes TraitCapture, an open-source phenotyping pipeline created by the Borevitz Lab. TraitCapture aims to streamline hardware control, data management, experiment tracking, data visualization, and analysis for high-throughput plant phenotyping. It controls cameras to capture plant images from growth chambers and uses automated image analysis to extract phenotypes from over 150,000 images per day. TraitCapture also manages sensor data and environmental conditions from field sites. The goal is to make high-throughput phenotyping more accessible and efficient for researchers.
1. The document describes the concept for a future generation of intelligent Earth observing satellites that would provide real-time satellite imagery and data to end users.
2. Key elements of the proposed system include a network of low Earth orbit satellites linked to geostationary satellites, with on-board processing and high-speed data transmission to allow direct downlinking of imagery and data to users.
3. The system is designed around user needs, with components like handheld receivers and mobile antennas allowing real-time access to satellite data, as well as user software to process and display the data.
Similar to From pixels to point clouds - Using drones,game engines and virtual reality to model and map the National Arboretum in Canberra (20)
Presentation by Dr Steve McEachern, ADA, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Hugo Leroux and Liming Zhu, CSIRO, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
The document summarizes plans by the Australian Government to establish new legislation and institutions to streamline access to and use of public sector data. Key points include:
- A new Commonwealth Data Sharing and Release Act will be introduced in 2019 to provide consistent rules for sharing data and establish a National Data Commissioner to oversee the system.
- The National Data Commissioner will ensure transparency, accountability, security, and appropriate risk management in data sharing.
- New rules will focus on enabling data to be shared for purposes like research and policy-making, while protecting privacy and building public trust in data use.
- The government will continue consulting stakeholders on the legislation to address concerns and help the public understand the reforms.
Presentation by Prof Chris Rowe, ADNet, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Investigator-initiated clinical trials: a community perspectiveARDC
Presentation by Miranda Cumpston, ACTA, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Merran Smith, PHRN, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
International perspective for sharing publicly funded medical research dataARDC
Presentation by Olivier Salvado, CSIRO, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Prof Lisa Askie, ANZCTR, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Davina Ghersi, NHMRC, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
Presentation by Dr Adrian Burton, ARDC, to the 'Unlocking value from publicly funded Clinical Research Data' workshop, cohosted by ARDC and CSIRO at ANU on 6 March 2019.
FAIR for the future: embracing all things dataARDC
FAIR for the future: embracing all things data - Natasha Simons, Keith Russell and Liz Stokes, presented at Taylor & Francis Scholarly Summits in Sydney 11 Feb 2019 and Melbourne 14 Feb 2019.
How to make your data count webinar, 26 Nov 2018ARDC
This document outlines the Make Data Count (MDC) initiative to standardize and promote the tracking of research data usage metrics. MDC has developed a Code of Practice for data usage logs, built an open hub to aggregate standardized usage data, and implemented tracking and display of usage metrics at their own repositories. They encourage other repositories to follow five simple steps to Make Their Data Count: 1) Read the Code of Practice, 2) Process usage logs, 3) Send logs to the hub, 4) Pull usage metrics from the hub, and 5) Display metrics. Future work includes outreach, iteration on implementations, and expanding metrics beyond DOIs.
Emerging Earth Observation methods for monitoring sustainable food productionCIFOR-ICRAF
Presented by Daniela Requena Suarez, Helmholtz GeoResearch Center Potsdam (GFZ) at "Side event 60th sessions of the UNFCCC Subsidiary Bodies - Sustainable Bites: Innovating Low Emission Food Systems One Country at a Time" on 13 June 2024
Exploring low emissions development opportunities in food systemsCIFOR-ICRAF
Presented by Christopher Martius (CIFOR-ICRAF) at "Side event 60th sessions of the UNFCCC Subsidiary Bodies - Sustainable Bites: Innovating Low Emission Food Systems One Country at a Time" on 13 June 2024
Monitor indicators of genetic diversity from space using Earth Observation dataSpatial Genetics
Genetic diversity within and among populations is essential for species persistence. While targets and indicators for genetic diversity are captured in the Kunming-Montreal Global Biodiversity Framework, assessing genetic diversity across many species at national and regional scales remains challenging. Parties to the Convention on Biological Diversity (CBD) need accessible tools for reliable and efficient monitoring at relevant scales. Here, we describe how Earth Observation satellites (EO) make essential contributions to enable, accelerate, and improve genetic diversity monitoring and preservation. Specifically, we introduce a workflow integrating EO into existing genetic diversity monitoring strategies and present a set of examples where EO data is or can be integrated to improve assessment, monitoring, and conservation. We describe how available EO data can be integrated in innovative ways to support calculation of the genetic diversity indicators of the GBF monitoring framework and to inform management and monitoring decisions, especially in areas with limited research infrastructure or access. We also describe novel, integrative approaches to improve the indicators that can be implemented with the coming generation of EO data, and new capabilities that will provide unprecedented detail to characterize the changes to Earth’s surface and their implications for biodiversity, on a global scale.
From pixels to point clouds - Using drones,game engines and virtual reality to model and map the National Arboretum in Canberra
1. From pixels to point clouds - Using drones, game engines and virtual
reality to model and map the National Arboretum in Canberra
Tim Brown, PhD
Director, Australian Plant Phenomics Facility (APPF), ANU
Research Fellow: ARC Centre for Plant Energy Biology, ANU
Ellen Levingston; Gareth Dunstone; Argyl McGhie, Aly Weirman, Zac Hatfield-Dodds,
Justin Borevitz – Borevitz Lab, CoE PEB, ANU Research School of Biology
Darrell Burkey - ProUav
2. Drones (UAVs/UAS) are a key part of the emerging toolset for “NextGen”
monitoring of the environment
1. UAVs (drones)
2. New Sensors: Mesh-networks, Hyperspectral, thermal, micron-resolution
dendrometers; Raspberry Pi-based phenocams & sensors
3. Gigapixel imaging – billion pixel resolution panoramic timelapse images
4. LiDAR – aerial and ground based
Measure the E (Environment) in the field: Genetics × Environment = Phenotype
3. • Quad/Hexrotor with RGB imaging
• ~$1,200 – $8,000 AUD
• 15–25 min flight time, 0.25 – 10 kg payload
• Typically 12MP 4K camera
• Fixed wing (30-60min flight time)
• TurnKey: eBee ($25-30K USD) ; <0.5kg payload
• DIY: <$1000
• Huge variations from small foam craft to multi-kilo gas plane
• NEW: VTOL systems – combines benefits of fixed wing and ‘copter
• Ideal platform for digital agriculture?
• <$5K
• Vertical takeoff/landing (no runway)
• Gas powered flight after takeoff /quad rotor backup for safety
• 6kg payload / likely ~1hr forward flight
UAV Hardware
DJI Phantom 4 (2016)
DJI Mavic Pro (2017)
4. Camera options
• RGB
• Many options from onboard (Phantom – 12MP) -> GoPro’s -> DIY Raspberry pi -> DSLR’s
• Not just any camera will do – watch out for rolling shutter
• Multi-spectral (NDVI): MicaSense Sequoia ($3,500USD)/ RedEdge ($5900)
Other sensors (Expensive & produce more challenging data)
• Hyperspectral:
• Headwall Nano ~ $53K AUD (~$5-$8K quadcopter to fly it)
• Line scanner (~800 bands)
• Spectra: ~400 – 1000nm
• Thermal: ~7,500 – 13,000nm (7.5 – 13µm)
Sequoia NDVI
5. 3D reconstruction software
Typical outputs:
• Orthomosaic images /map layers (google maps, mapbox, etc)
• DEM/Geotiffs
• (semi) classified outputs (i.e. ground removal for trees)
• RGB, multispectral and NDVI indices
• 3D point clouds
• Meshed 3D models
DJI Inspire @ National Arboretum Reconstructed 3D view
6. 3D reconstruction software options
• Desktop processing typically requires min. ~ $1500 AUD PC
• Pix4dMapper Pro: ~$2,000USD (academic); $8,700USD commercial (or $3500/yr)
• Academic ~$1,000E/yr support license required after first year (2016 cost)
• Windows only
• Pro license required to automate or run on linux server or cloud (e.g. amazon)
• Agisoft photoscan ($60 - $550 USD - Academic)
• Windows, Mac OS X, Debian/Ubuntu
• (Python) scripting possible in Pro version and cloud
• Mosaic Mill RapidTerrain (~ € 4,500 [2013 pricing])
• VisualSFM – Free; Win/Mac/Linux; Scriptable
Online Options (Great for testing; smaller areas or occasional flights; low costs for single flights; fast processing)
• https://www.skycatch.com/
• https://dronemapper.com/
• https://www.dronedeploy.com
DJI Inspire @ National Arboretum Reconstructed 3D view
Disclaimer
• This is not a complete list!
• Prices vary by features & change frequently – check vendors for current pricing
(more on 3D reconstruction later)
7. Putting all the tools together:
National Arboretum Phenomic & Environmental Sensor Array
National Arboretum, Canberra, Australia
Collaboration with Cris Brack, Albert Van Dijk (ANU Fenner school); Borevitz Lab
• Ideal location
• 5km from ANU (64 Mbps Wi-Fi) and near many research institutions
• Great site for testing experimental monitoring systems prior to more remote
deployments
• Forest is only ~7 yrs. old
• Chance to monitor it from birth into the future!
8. National Arboretum Sensor Array
• 20-node Wireless mesh sensor network (10min sample interval)
• Temp, Humidity
• Sunlight (PAR)
• Soil Temp and moisture @ 20cm depth
• µm-resolution dendrometers on 20 trees
• Campbell weather stations (baseline data for verification)
• Two Gigapixel timelapse cameras:
• Leaf/growth phenology for > 1,000 trees
• LIDAR: DWEL / Zebedee
• UAV overflights (~monthly)
• Georectified image layers
• High resolution DEM
• 3D point cloud of site in time-series
• Sequence tree genomes
(Diagram: Genetics × Environment = Phenotype)
2017: Hyperspectral and Thermal PTZ
10. • Goals
• Test and develop time-series drone monitoring program
Outputs:
• Time-series of
• Georectified image layers
• High resolution 3D point cloud
• Phenotypes:
• Tree height
• “Area”
• Color data
• Forest stats:
• 12 forests of Spotted Gum (Corymbia maculata) &
Ironbark (Eucalyptus tricarpa)
• Planted ~2012
• ~4 hectares
ANU research forest drone monitoring
11. We’ve come a long way
April 2013
Cell phone duct-taped to my homemade drone
March 2017
Darrell Burkey (ProUAV)
Matrice 600 Pro
16MP camera & Sequoia multispec camera
12. Workflow – Pix4D
1. Detect aerial position of every image
2. Calculate matching “control points” in overlapping images
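Pix4D’s matching internals are proprietary, but the idea behind these two steps can be sketched in plain Python: use the aerial positions from step 1 to shortlist image pairs that could plausibly overlap, before doing any expensive keypoint/tie-point matching for step 2. Everything below – the names, coordinates, and 30 m threshold – is illustrative, not Pix4D’s actual API.

```python
from math import hypot

def overlapping_pairs(positions, max_dist=30.0):
    """Given {image_name: (east_m, north_m)} camera positions,
    return pairs of images close enough to plausibly overlap.
    A real pipeline would then match keypoints ("control points")
    only within these candidate pairs."""
    names = sorted(positions)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = positions[a]
            bx, by = positions[b]
            if hypot(ax - bx, ay - by) <= max_dist:
                pairs.append((a, b))
    return pairs

# Example: three images on a 20 m flight line plus one far away
shots = {"img1.jpg": (0, 0), "img2.jpg": (20, 0),
         "img3.jpg": (40, 0), "img9.jpg": (500, 500)}
print(overlapping_pairs(shots))
# → [('img1.jpg', 'img2.jpg'), ('img2.jpg', 'img3.jpg')]
```

With ~200 images per flight, pruning the candidate pairs this way is what keeps matching tractable; exhaustive all-pairs matching grows quadratically with image count.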
16. Forestutils – Point clouds to trees
20min drone flight (~200 images) + 3hrs processing (pix4d) => 3D forest
3D Point clouds online: http://traitcapture.org/pointclouds
• Forestutils software (Tim Brown, Zac Hatfield-Dodds)
https://pypi.python.org/pypi/forestutils
• Group pixels per tree
• Assumes open canopy (until tree locations are known)
• Output:
• Tree height, top-down area, location, RGB colors, “point count”
• CSV of tree data -> google maps online
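The real forestutils package is linked above; purely as an illustration of the open-canopy assumption, here is a hypothetical, much-simplified sketch of the same idea – group ground-removed (x, y, z) points into trees by horizontal proximity, then report height, top-down “area” and point count per tree. The function names and parameters are invented for this sketch, not the forestutils API.

```python
def group_trees(points, gap=1.0):
    """Greedy single-linkage clustering of ground-removed (x, y, z)
    points: a point joins a tree if it is within `gap` metres
    (horizontally) of any point already in that tree. This only
    works for open canopies where crowns do not touch."""
    trees = []  # each tree is a list of points
    for p in points:
        home = None
        for tree in trees:
            if any((p[0]-q[0])**2 + (p[1]-q[1])**2 <= gap*gap for q in tree):
                if home is None:
                    home = tree
                    tree.append(p)
                else:            # p bridges two clusters: merge them
                    home.extend(tree)
                    tree.clear()
        trees = [t for t in trees if t]
        if home is None:
            trees.append([p])
    return trees

def tree_metrics(tree, cell=0.5):
    """Height = max z; top-down 'area' = occupied grid cells x cell²."""
    height = max(p[2] for p in tree)
    cells = {(int(p[0] // cell), int(p[1] // cell)) for p in tree}
    return {"height": height, "area": len(cells) * cell * cell,
            "point_count": len(tree)}

# Two small crowns ~7 m apart
pts = [(0.0, 0.0, 0.5), (0.3, 0.0, 2.0), (0.1, 0.2, 1.0),
       (5.0, 5.0, 0.4), (5.2, 5.0, 3.0)]
trees = group_trees(pts)
print(len(trees))                        # → 2
print(tree_metrics(trees[1])["height"])  # → 3.0
```

Per-tree dicts like these are what would be written out as the CSV of tree data for the online map.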
17. Data management
• Decide on your data management plan BEFORE you start
surveys
• Consider the entire workflow
• Who will do the surveys?
• If more than one person or airframe, how does the data get to
you? What then?
• How do you name things?
• If people are uploading data, how do you ensure it all made it?
• If you have to duplicate projects (i.e. moving to a processing
computer or SSD) – how do you track this and name things?
• Best to enforce rigorous note-taking – shared gdocs and
Notepad++ are handy
18. yyyy-mm-dd-location-site-whocaptured-status
• Nac = National Arboretum Canberra
• anuf = ANU Forest plot
• Tim = I captured it
• DU = Done and uploaded
But these filenames are too long for Windows!
Other options
• Readme files – but you need to maintain them
• Shared google docs are good
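One low-effort way to enforce a date-stamped naming scheme like the one above is to validate every filename at upload time with a stdlib regex, so typos are caught immediately rather than months later. The exact pattern and field rules below are assumptions based on the slide’s example, not an established standard.

```python
import re
from datetime import datetime

# Fields follow the slide's example: date, location code (e.g. nac),
# site (e.g. anuf), who captured it, and a status flag (e.g. DU).
PATTERN = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2})-"
    r"(?P<location>[a-z]+)-(?P<site>[a-z]+)-"
    r"(?P<who>[A-Za-z]+)-(?P<status>[A-Z]{2})$"
)

def parse_survey_name(name):
    """Return the name's fields as a dict, or raise ValueError."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"bad survey name: {name!r}")
    fields = m.groupdict()
    datetime.strptime(fields["date"], "%Y-%m-%d")  # reject 2017-13-99 etc.
    return fields

print(parse_survey_name("2017-03-12-nac-anuf-Tim-DU"))
# {'date': '2017-03-12', 'location': 'nac', 'site': 'anuf', ...}
```

Running this check in the upload script means malformed names never enter the archive, which also keeps the “did it all make it?” audit simple.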
19. Data Management
• Don’t panic! – this is not easy for anyone
• You won’t get things right the first time, just keep working at it,
but don’t wait too long to fix things
(Image: data management – the vision vs. the reality)
20. Some challenges with drone data
• Processing settings and ground truthing
• Much of this data has never been collected before – how do we
know what we are measuring, and how do we tie data values to
biologically meaningful things?
• How do you ground-truth a 3D point cloud of a tree?
• Different settings produce very different outputs – what do they
mean?
• And outputs change with each software update!
Table: Outputs from various Pix4D settings using the same initial images

Run   Densification time   3D densified points   Avg. density (per m³)
1     16m:11s              11,754,319            1000.17
2     16m:12s              11,742,976             993.56
3     16m:10s              11,805,987            1022.85
4     16m:36s              11,780,055            1020.60
5     05m:42s               2,826,949             311.48
6     05m:45s               3,032,111             318.51
7     14m:59s               8,736,144             999.33
8     05m:25s               2,824,205             301.44
9     55m:08s              44,070,407            2016.22
10    43m:15s              44,011,684            1967.43
11    42m:38s              43,718,764            1818.92
12    43m:33s              43,932,956            1879.37
13    14m:14s              11,727,714             953.39
21. Choosing the right tool for the job
• Drones (particularly for point clouds) are best suited for smaller
areas (<20 hectares).
• Example – Full arboretum survey - 250 Hectares
• Many weeks of planning
• 4 flight days
• Requires multiple staff on site and 5-10 volunteers
• 8,800 RGB images
• >12,000 Sequoia images
• Outputs
• 2 months of manual processing (the dataset was too large to run on our machines)
• Point cloud: 584 million points
• Consider weather, distance to site, accessibility, time of day, etc.
• Sometimes a camera is a better option; or satellite!
• Also, if you have a lot of sites it may be cheaper to hire a
helicopter or plane…
22. The full arboretum point cloud
• Download here: https://traitcapture.org/pointclouds/by-
id/59363e0fb43fdd50b9c08395 (or go to
traitcapture.org/pointclouds)
23. Choose the right tool for the job
Unmanned aircraft systems in wildlife research: current and future applications of a
transformative technology. Front Ecol Environ 2016; 14(5): 241–251, doi:10.1002/fee.1281

Table: Manned aerial surveys vs. UAS surveys¹
Purpose of surveys
• Manned: Estimate the abundance of Steller sea lions in the inner Aleutians
• UAS: Estimate the abundance of Steller sea lions in the outer Aleutians
Cost per day
• Manned: $4700 per day including fuel and pilot, or $400 per site
• UAS: $3000 per day based on the cost of vessel support, or $1700 per site
Type of aircraft
• Manned: NOAA Twin Otter
• UAS: APH-22 hexacopter
Distance/area surveyed
• Manned: 2500 km of coastline, including the Gulf of Alaska and part of the
Aleutians; 210 sites surveyed
• UAS: 400 km of coastline along the western Aleutian chain, 30 sites surveyed;
maximum distance from the vessel was 634 m, longest flight was 16 minutes
Notes
1. Surveys were conducted by the National Oceanic and Atmospheric Administration.
24. The future…
• Self driving drone swarms – fly your forest or field daily
• All data uploaded and processed in the cloud
• Near realtime analytics and reporting to the end user
• Xaircraft.cn and revolutionag.com.au
26. Developing a drone program
Drones and sensors are a crucial component for field monitoring and phenotyping but launching a good drone
program is hard
• Lots of technologies available so it is important to focus on deliverables:
• Who are your stakeholders / “customers”
• What do they need to know? - What would be “actionable information” for them
• Working back from the deliverables, what is required?
• Make sure to do full costing: Hardware is pretty cheap but this isn’t the full cost
• How big is the area to cover? If multiple areas, how far apart are they?
• How frequently do you need to survey?
• If using custom or expensive hardware – does it work? What are the processing pipelines? Is it insured? Repairable?
• What is the weather like; what other access or similar issues might you have
• Don’t forget the details: Basics like disk space (both in the field and lab) and battery charging are very important
• Start small and get your pipeline working and documented before doing larger projects
• Make checklists and data flows
• Do some dry runs before scaling up
• UAV hardware
• Is a UAV the correct tech? If so, which one?
• Do you buy your own hardware or just subcontract?
• Processing data:
• Outsource or do it yourself?
• If you need to develop novel tools, can you collaborate with groups? Share it with others?
• Data management
• Don’t underestimate how hard this is
• Have a data management plan in place at the start
27. Future Vision
• Data standards and open data are very important
• Imagine what we could do with a searchable system that
pulled from every drone flight, field trial and data layer
we had – and then all those available internationally
• Cross-global integrated search
(https://search.descarteslabs.com/)
• This is for image patterns, but it could just as well be for any
G×E×P indices we can come up with
• This requires:
• Open data
• Open standards and open code
• Very aggressive pursuit of collaborations and building networks with
other national-scale monitoring networks to develop open data
standards and interoperable web-services globally
28. The Future – NexGen visualization
“OK, so we have all this data, now how do we see it?”
• Our current tools for data visualization and management (e.g.
Excel, Matlab, R, GIS) aren’t sufficient for Phenomics datasets
• Phenomics / HTP data is
• Time-series
• Geospatial and 3D
• Highly varied: pixels, numeric timeseries, point clouds, hyperspectral,
genetics
• Highly dense: 100s – 1000s of
data layers per pixel / leaf
• Human insight is amazing but only if
your data is in a format that lets you
think about it clearly
• Many of the current challenges in HTP
derive from a lack of tools that let us
make the best use of this data
29. Point cloud viewing tools in development
Ajay Limaye (developer of Drishti) @ NCI VisLab:
• Timelapse co-located pointcloud viewer: Windows and in VR
• Visualize time-series point clouds of any type in any resolution
• Co-visualization of many datasets if GPS is correct
• Adam Steer et al – Realtime Point cloud viewing online via
OGC and open source software stack
See: Steer, Adam, et al. "An open, interoperable,
transdisciplinary approach to a point cloud
data service using OGC standards and
open source software." EGU Vol. 19. 2017.
http://pointclouds.nci.org.au/pointwps.howto.html
30. EcoVR: Virtual 3D Ecosystems Project
GIS for 3D “time-series data”
Goal: Use modern gaming software to explore new methods for visualizing time-
series environmental data
• Collaboration with
• ANU Computer Science Dept. TechLauncher students
• Stuart Ramsden & Ajay Limaye, ANU VISlab
• Historic and real-time data layers integrated into persistent 3D
model of the national arboretum in the Unreal gaming engine
• Visualize any field site with GPS and time-series data
31. Future-Tech
Virtual Reality (VR) and Augmented Reality (AR)
• Virtual Reality (VR) and Augmented Reality (AR) will radically
change how we interact with our data
• VR (Oculus, VIVE, etc): View your field site virtually
• AR/MR (Hololens, Magic Leap): Add virtual content to the real
world = view your field site on your desk & view data on your plants
in the field
• Magic Leap:
• Est. value 2015: $500M USD; 2016: $4.5 billion; 2017: $8 Billion
Search: “Magic Leap Wired”
32. Gaming and animation software for data visualization
• Hollywood and the gaming industry have spent billions of dollars
creating tools for generating realistic environments
• We can use these tools for data visualization
• Houdini software:
34. This is just the beginning
Atari 2600 “Adventure” circa 1980
“Skyrim” circa 2011
We are at the “ATARI” stage in VR
Our ability to measure the world
continuously in 3D has just started
In 10 years, VR/AR will be
indistinguishable from reality.
What will we do with these tools?
How will we create the new
interfaces that enable the next
generation of ecosystem research?
35. Thanks and Contacts
Justin Borevitz – Lab Leader Lab web page: http://borevitzlab.anu.edu.au
• Funding:
• APPF / NCRIS
• ARC Centre of Excellence in Plant Energy Biology | ARC Linkage 2014
• Arboretum ANU Major Equipment Grant: 2014, 2016
• TraitCapture:
• Gareth Dunstone
• Sarah Namin
• Mohammad Esmaeilzadeh
• Chuong Nguyen; Joel Granados; Kevin Murray; Jiri Fajkus
• Jordan Braiuka
• Pip Wilson; Keng Rugrat; Borevitz Lab
• Arboretum
• http://bit.ly/PESA2014
• Ellen Levingston (drone data processing)
• Cris Brack, Albert VanDijk, Justin Borevitz (PESA Project PI’s)
• UAV data: Darrell Burkey, ProUAV
• 3D site modelling: Pix4D.com / Zac Hatfield Dodds / ANUVR team
• Dendrometers & site infrastructure: Darius Culvenor, Environmental Sensing Systems
• Mesh sensors: EnviroStatus, Alberta, CA
• Gigavision: Jack Adamson
• ANUVR Team
• Yuhao Lui, Zhuoqi Qui, Abishek Kookana, Andrew Kock, Thomas Urwin [2016/17 Team]
• Zena Wolba; Alex Jansons; Isobel Stobo; David Wai [2015/16 Team]
• Contact me:
• tim.brown@anu.edu.au
• http://bit.ly/Tim_ANU
Code: http://github.com/borevitzlab