This document discusses the basic principles of photogrammetry. It defines photogrammetry as obtaining spatial measurements and geometrically reliable products from photographs. It describes the different types of analysis procedures and photogrammetric operations used, from simple to sophisticated digital techniques. It outlines common photogrammetric activities like producing maps, determining heights and elevations, and preparing flight plans. It also details the geometric characteristics of aerial photographs, elements, scales, distortions like relief displacement and parallax.
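The scale and relief-displacement relations mentioned above can be sketched numerically. This is an illustrative sketch, not code from the slides; the 152 mm focal length, flying height, and object height are hypothetical example values.

```python
def photo_scale(focal_length_m, flying_height_above_ground_m):
    """Scale of a vertical photo over flat terrain: S = f / H'."""
    return focal_length_m / flying_height_above_ground_m

def relief_displacement(radial_distance_mm, object_height_m, flying_height_above_ground_m):
    """Relief displacement d = r * h / H': how far the top of an object is
    shifted radially outward from the nadir point on the photo."""
    return radial_distance_mm * object_height_m / flying_height_above_ground_m

# Hypothetical example: 152 mm lens flown 1520 m above the ground
scale = photo_scale(0.152, 1520.0)                 # 1/10,000 photo scale
shift = relief_displacement(80.0, 100.0, 1520.0)   # displacement in mm on the photo
```

The same relations run in reverse in practice: measuring the displacement on the photo lets you estimate the object's height.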
The document discusses orthorectification and triangulation. It defines orthorectification as the process of removing geometric errors from aerial photographs to produce orthophotos that have consistent scale and orthographic projection like a map but also have photographic detail. Triangulation is defined as determining the location of a point by measuring angles to it from other known points, rather than direct measurements. It can be used to orient aerial photographs and produce 3D point measurements. The document provides details on producing orthophotos using DEM data and forward or backward projection methods. It also gives an example of using triangulation to align a block of aerial images.
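The triangulation idea described above, locating an unknown point from known stations by measuring angles rather than distances, can be sketched as a ray intersection. This is a minimal 2D illustration with hypothetical coordinates, not the document's own procedure.

```python
import math

def triangulate(p1, az1, p2, az2):
    """Locate an unknown point by intersecting two rays from known stations.
    p1, p2: (east, north) coordinates of the stations.
    az1, az2: azimuths in radians, measured clockwise from north."""
    x1, y1 = p1
    x2, y2 = p2
    # Unit direction vectors: east component = sin(az), north component = cos(az)
    d1 = (math.sin(az1), math.cos(az1))
    d2 = (math.sin(az2), math.cos(az2))
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule)
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two stations 100 m apart sight the same target at +45° and -45°
target = triangulate((0.0, 0.0), math.pi / 4, (100.0, 0.0), -math.pi / 4)
```

Photogrammetric triangulation generalizes this to 3D rays through many overlapping images, solved simultaneously in a bundle adjustment.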
This document discusses the different types of aerial cameras used for photogrammetry. It describes single-lens frame cameras which have a fixed lens and film and are classified by their angular field of view. It also covers multi-lens frame cameras which use two or more lenses to simultaneously expose the same area on different films. Strip cameras are described as using a single or two lenses to continuously photograph a film passing over a narrow slit. Finally, panoramic cameras are outlined as providing a horizontal strip of terrain from horizon to horizon by laterally scanning from one side to the other.
This document discusses depth of field and shutter speeds in photography. It defines depth of field as the area of an image that appears acceptably sharp, and explains that the three factors controlling it are subject distance, f-stop, and focal length. Shorter focal lengths and smaller apertures (larger f-numbers) yield greater depth of field, while closer subject distances yield shallower depth of field. Shutter speed controls how long light is allowed to enter the lens, with faster speeds freezing motion and slower speeds allowing blur. Camera and subject movement can both cause unsharp images, so shutter speed must be set appropriately based on lighting and motion.
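The depth-of-field behavior summarized above can be made concrete with the standard hyperfocal-distance approximations. This is an illustrative sketch, not from the document; the 0.03 mm circle of confusion is an assumed full-frame value.

```python
def hyperfocal(f_mm, n_stop, coc_mm=0.03):
    """Hyperfocal distance (mm): H = f^2 / (N * c) + f."""
    return f_mm ** 2 / (n_stop * coc_mm) + f_mm

def dof_limits(f_mm, n_stop, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a subject distance,
    using the common thin-lens approximations."""
    h = hyperfocal(f_mm, n_stop, coc_mm)
    near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
    # Beyond the hyperfocal distance, everything to infinity is sharp
    far = float("inf") if subject_mm >= h else subject_mm * (h - f_mm) / (h - subject_mm)
    return near, far

# 50 mm lens, subject at 5 m: stopping down from f/8 to f/16 widens the zone
limits_f8 = dof_limits(50.0, 8.0, 5000.0)
limits_f16 = dof_limits(50.0, 16.0, 5000.0)
```

Comparing the two results shows numerically why a smaller aperture (larger f-number) produces greater depth of field.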
The SPOT satellite system includes several satellites (SPOT 1-5) operated by France and Belgium to observe and monitor Earth. Key specifications of the SPOT satellites include their launch dates, between 1986 and 2002 on Ariane rockets, orbital parameters, onboard instruments including high-resolution visible and infrared cameras, recording and transmission capabilities, and the inclusion of vegetation-monitoring instruments on some satellites. The SPOT satellites provide high-resolution optical imagery of Earth to study resources, climate, human activities and natural phenomena.
Spatial resolution refers to the ability to distinguish between two close objects or fine detail in an image. It depends on properties of the imaging system, not just pixel count. Higher spatial resolution means finer details can be distinguished. Pixel count alone does not determine spatial resolution, as color images require interpolation between sensor pixels. Spatial resolution is measured differently for various media like film, digital cameras, microscopes, and more. It affects the ability to distinguish fine detail like gaps in a fence as distance increases.
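For aerial and satellite imagery, the system property most often quoted for spatial resolution is the ground sample distance (GSD). As a rough sketch (not from the document, with hypothetical sensor values):

```python
def ground_sample_distance(pixel_pitch_um, focal_length_mm, flying_height_m):
    """Ground footprint of one pixel (m): GSD = pixel_pitch * H / f.
    Smaller GSD means finer detail can be distinguished on the ground."""
    return (pixel_pitch_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

# Hypothetical example: 5 um pixels, 100 mm lens, flown at 2000 m
gsd = ground_sample_distance(5.0, 100.0, 2000.0)  # metres per pixel
```

Note this captures only the sampling geometry; as the text says, optics, motion blur, and color interpolation also limit the detail actually resolved.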
Diabetic retinopathy is a leading cause of blindness that can be detected through automated analysis of fundus images. The document proposes using support vector machines to build a model that can robustly detect four key features of diabetic retinopathy - hard exudates, soft exudates, microaneurysms, and hemorrhages. The model is trained on a standardized set of fundus images and achieves over 95% accuracy on classification, providing an affordable solution to diagnose a disease affecting many people.
This document discusses how Interferometric Synthetic Aperture Radar (InSAR) works to measure ground deformation. It explains that InSAR uses the phase difference between two SAR images of the same area taken at different times to detect millimeter-scale changes in the distance to ground targets. It provides examples of how InSAR has been used to measure subsidence from earthquakes and other natural hazards. The document also notes some limitations of InSAR related to decorrelation from changes on the ground surface and in the atmosphere between image acquisitions.
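The phase-to-displacement conversion at the heart of InSAR can be sketched in a few lines. This is an illustrative sketch, not from the document; it ignores sign conventions and assumes an already-unwrapped phase.

```python
import math

def los_displacement(delta_phase_rad, wavelength_m):
    """Line-of-sight ground displacement from an interferometric phase
    change: d = lambda * dphi / (4*pi). The factor 4*pi (not 2*pi)
    accounts for the two-way radar path."""
    return wavelength_m * delta_phase_rad / (4.0 * math.pi)

# One full interferogram fringe (2*pi) at C-band (lambda ~ 5.6 cm)
# corresponds to half a wavelength of line-of-sight motion
fringe = los_displacement(2.0 * math.pi, 0.056)  # 0.028 m
```

This half-wavelength-per-fringe sensitivity is what makes the millimeter-scale measurements mentioned above possible.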
Drone flight planning - Principles and Practices (Dany Laksono)
This document discusses principles of drone flight planning. It explains that flight planning is necessary to ensure drones capture images in the right places and fly safely, especially over large areas. Key aspects of flight planning include the area of interest, desired accuracy, flight height and path, ground control points, image overlap, and drone type. Modern software can automate flight planning by designing paths based on user inputs and calculating flight times and images. Such software can also import map data and avoid obstacles. Overall, proper flight planning is important for safety and obtaining high quality results from drone missions.
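The flight-line and photo-count calculation that such planning software automates can be sketched simply. This is a hypothetical illustration, not the document's software; the area, footprint, and overlap values are example inputs.

```python
import math

def flight_plan(area_width_m, area_length_m, footprint_w_m, footprint_l_m,
                forward_overlap=0.8, side_overlap=0.6):
    """Estimate flight lines and photos per line for a rectangular area,
    given one image's ground footprint and the desired overlaps."""
    line_spacing = footprint_w_m * (1.0 - side_overlap)   # between adjacent lines
    photo_base = footprint_l_m * (1.0 - forward_overlap)  # between exposures
    n_lines = math.ceil(area_width_m / line_spacing) + 1
    photos_per_line = math.ceil(area_length_m / photo_base) + 1
    return n_lines, photos_per_line

# 1 km x 2 km block, 200 m x 150 m image footprint, 80/60 overlap
plan = flight_plan(1000.0, 2000.0, 200.0, 150.0)
```

Multiplying the two numbers gives the total image count, from which flight time and storage needs follow.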
The document discusses properties of Landsat satellites and remote sensing data. It provides details on:
- The history and timeline of Landsat satellites and their sensors from Landsat 1 to Landsat 7.
- How Landsat data is processed to convert digital numbers to radiances and reflectances and apply atmospheric corrections.
- How different surface features like vegetation, soil and water absorb electromagnetic radiation differently, enabling their identification in remote sensing imagery.
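The DN-to-radiance-to-reflectance processing chain in the bullets above can be sketched with the standard conversion formulas. This is an illustrative sketch; the gain, offset, and ESUN values below are hypothetical, not calibration coefficients from the document.

```python
import math

def dn_to_radiance(dn, gain, offset):
    """Convert a digital number to at-sensor radiance: L = gain * DN + offset."""
    return gain * dn + offset

def radiance_to_toa_reflectance(radiance, earth_sun_dist_au, esun, sun_elev_deg):
    """Top-of-atmosphere reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    where theta_s is the solar zenith angle (90 deg minus sun elevation)."""
    theta_s = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * earth_sun_dist_au ** 2 / (esun * math.cos(theta_s))

# Hypothetical band coefficients for illustration
radiance = dn_to_radiance(100, 0.5, 1.0)
rho = radiance_to_toa_reflectance(radiance, 1.0, 1533.0, 60.0)
```

Atmospheric correction then adjusts this top-of-atmosphere value toward a surface reflectance, enabling the feature discrimination described above.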
In this presentation I show my exploration into the use of a drone, or Small Unmanned Aerial System (sUAS), to perform remote sensing. At a high level I introduce the concepts of leveraging this platform to generate spatial data for use in a variety of disciplines. I discuss the process of collecting and processing data, display the data generated, cover a few advantages, and give my perspective on where this platform can be utilized.
This document provides an overview of aerial photogrammetry and discusses ground control points (GCPs) and flight planning. It can be summarized as follows:
1) GCPs are points on the ground with known coordinates that are used to georeference aerial photographs. GCPs increase the overall accuracy of maps produced from aerial photos.
2) Flight planning involves determining the optimal altitude, overlaps between photos, and interval between exposures to ensure full coverage of the target area. Factors like wind and drift must be considered.
3) GCPs can be established before or after photography, and both horizontal and vertical control is needed, with an ideal distribution across the project area of at least 3-4 GCPs.
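The exposure-interval calculation in point 2 above can be sketched directly. This is an illustrative example with hypothetical values, not the document's worked numbers.

```python
def air_base(footprint_length_m, forward_overlap):
    """Ground distance between exposure stations: B = L * (1 - overlap)."""
    return footprint_length_m * (1.0 - forward_overlap)

def exposure_interval(air_base_m, ground_speed_mps):
    """Seconds between exposures so consecutive photos are one air base apart."""
    return air_base_m / ground_speed_mps

# 1500 m image footprint, 60% forward overlap, aircraft at 60 m/s
base = air_base(1500.0, 0.6)            # 600 m between exposures
interval = exposure_interval(base, 60.0)  # 10 s intervalometer setting
```

Wind and drift change the ground speed, which is why the summary notes they must be factored into the plan.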
Coordinate system and map projection.pdf (samuelzewdu3)
This document discusses coordinate systems and map projections. It defines projection as representing the curved Earth on a flat surface, which inevitably causes distortions. It describes geographic and projection coordinate systems, and how Universal Transverse Mercator (UTM) divides the world into zones to allow for linear measurements. Datums define precise starting points for coordinate systems and projections.
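The UTM zone scheme mentioned above follows a simple rule: 60 zones of 6 degrees of longitude each, numbered eastward from 180°W. A minimal sketch (not from the document):

```python
def utm_zone(lon_deg):
    """UTM zone number (1-60) for a longitude in degrees east (-180..180).
    Zone 1 starts at 180W; each zone spans 6 degrees of longitude."""
    return int((lon_deg + 180.0) // 6.0) % 60 + 1

# Greenwich (0 deg) falls in zone 31
zone = utm_zone(0.5)
```

Within each zone a transverse Mercator projection keeps distortion small enough for practical linear measurement, which is the point the summary makes.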
This document discusses satellite radar altimetry and its use in measuring sea level. Satellite radar altimetry works by measuring the time it takes for a radar pulse to travel from the satellite to the ocean surface and back. This allows the satellite to calculate the distance to the sea surface and determine sea level when combined with information about the satellite's orbit and position. Multiple satellite missions since 1992 have collected sea level measurements globally every 10 days and found that global mean sea level has been rising over the past few decades. Resources for further information on satellite altimetry and sea level are also provided.
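The core range calculation of radar altimetry described above is a two-way travel-time measurement. As an illustrative sketch (hypothetical numbers, not mission data):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_travel_time(two_way_time_s):
    """Satellite-to-surface range from the round-trip pulse time: R = c*t/2."""
    return SPEED_OF_LIGHT * two_way_time_s / 2.0

def sea_surface_height(satellite_altitude_m, range_m):
    """Sea surface height relative to the reference ellipsoid:
    the satellite's known orbital altitude minus the measured range."""
    return satellite_altitude_m - range_m
```

Precise orbit determination supplies the altitude term; the tiny residual differences between repeat passes are what reveal the sea-level trends mentioned above.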
Introduction to Maps, Coordinate System and Projection System (NAXA-Developers)
This document discusses key concepts in GIS including maps, coordinate systems, map projections, and their application in Nepal. It defines analog and digital maps, and explains that the earth is an ellipsoid rather than a perfect sphere. It introduces geographic and rectangular coordinate systems, and defines map projections as methods to represent the curved earth on a flat surface. The document outlines the Everest ellipsoid and UTM/MUTM projection systems used in Nepal.
IMAGE INTERPRETATION
The act of examining images to identify objects and judge their significance.
An information extraction process from images.
An interpreter is a specialist trained in the study of photography or imagery, in addition to his or her own discipline.
Aerial photographs and remote sensing images employ electromagnetic energy as the means of detecting and measuring target characteristics.
Interpretation involves a considerable amount of subjective judgment.
It is highly dependent on the mind's capability to generalize.
It takes place at different levels of complexity.
Photogrammetry is the science of obtaining reliable information about physical objects through analyzing photographic images. It involves recording, measuring, and interpreting photographs and electromagnetic radiation. There are two main types: aerial photogrammetry which uses photographs taken from aircraft, and terrestrial photogrammetry which uses ground-based photos. Photogrammetry is used to produce topographic maps and digital terrain models for purposes like architecture, engineering, archaeology and more.
This document provides an overview of remote sensing and image interpretation. It discusses key topics such as the use of maps as models to represent features on Earth, different types of map scales and spatial referencing systems, and how computers are used in map production. It also outlines the process of image interpretation, including levels of interpretation keys and basic elements to examine like size, shape, shadow, tone, color, and texture. Software programs used in map production like ArcGIS and types of data products from remote sensing are also reviewed.
NISAR: NASA-ISRO Synthetic Aperture Radar (NISAR) Mission Science Users Handbook, NASA (National Aeronautics and Space Administration), by Dr. Pankaj Dhussa
This document discusses stereoscopic vision and its use in aerial photo interpretation. Stereoscopic vision involves using binocular vision to view overlapping photos from two camera positions to perceive 3D depth. Various stereoscopes can be used, like lens stereoscopes suitable for field use. Key measurements for determining object heights from stereo pairs include the average photo base length and differential parallax. Precise stereoplotters and software can digitally recreate stereo models for mapping. Orthophotos rectify photos to show objects in true planimetric positions.
The history of GIS began in 1854 when Dr. John Snow created the first disease map to track a cholera outbreak in London. This marked the beginning of linking data to locations. Modern GIS emerged in the 1960s as computers advanced and allowed storage and analysis of spatial data. The first GIS was created by Roger Tomlinson for the Canada Geographic Information System in 1963. Esri was founded in 1969 and released the first commercial GIS software, ARC/INFO, in 1981, allowing widespread adoption of GIS technology. Today, GIS is widely used across many fields to analyze spatial relationships and make informed decisions.
Digital elevation models (DEMs) represent the bare earth topographic surface digitally. The document discusses DEMs, including what they are, common data sources like SRTM and NED with their resolutions, and applications like estimating elevation and slope, determining drainage networks and watersheds, and using DEMs in hydrological modeling. DEMs are frequently used in terrain analysis, hydrology modeling, creating relief maps and contour maps, and 3D visualization and flight planning.
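Slope estimation, one of the DEM applications listed above, reduces to finite differences on the elevation grid. A minimal sketch (hypothetical grid, simplified 4-neighbour variant rather than a full Horn kernel):

```python
import math

def slope_deg(dem, i, j, cell_size_m):
    """Slope in degrees at interior cell (i, j) of a 2D DEM (list of rows),
    from central differences of elevation in x and y."""
    dz_dx = (dem[i][j + 1] - dem[i][j - 1]) / (2.0 * cell_size_m)
    dz_dy = (dem[i + 1][j] - dem[i - 1][j]) / (2.0 * cell_size_m)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

# A plane rising 10 m per 10 m cell in one direction has a 45-degree slope
dem = [[10.0 * i for _ in range(3)] for i in range(3)]
s = slope_deg(dem, 1, 1, 10.0)
```

GIS packages apply the same idea with slightly larger kernels over every cell to produce full slope rasters for the hydrological uses mentioned above.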
The document discusses various concepts relating to vertical aerial photographs, including:
- Focal length is the distance from the focal plane to the center of the camera lens, and the angle of coverage increases as focal length decreases.
- Fiducial marks define the coordinate axes and geometric center of an individual photograph. The principal point lies at the intersection of lines joining opposite fiducial marks.
- There are three important photo centers: the principal point, nadir, and isocenter. Different types of distortion and displacement radiate from each center.
- Distortion alters the perspective of images while displacement does not. Lens distortion and tilt displacement are examples discussed in more detail.
SAR is a type of radar that uses an antenna and receiver with radio waves to create two- or three-dimensional representations of objects. A synthetic-aperture radar is an imaging radar mounted on a moving platform. SAR provides high-resolution data and operates around the clock.
A short introduction to reproducible research, reproducibility with R, Docker, and all together for reproducible research using R and Docker containers. Includes demos of Rocker and containerit.
This document provides a summary of MapReduce algorithms. It begins with background on the author's experience blogging about MapReduce algorithms in academic papers. It then provides an overview of MapReduce concepts including the mapper and reducer functions. Several examples of recently published MapReduce algorithms are described for tasks like machine learning, finance, and software engineering. One algorithm is examined in depth for building a low-latency key-value store. Finally, recommendations are provided for designing MapReduce algorithms including patterns, performance, and cost/maintainability considerations. An appendix lists additional MapReduce algorithms from academic papers in areas such as AI, biology, machine learning, and mathematics.
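The mapper and reducer functions mentioned above can be sketched with the canonical word-count example, simulating the shuffle phase in-process. This is an illustrative sketch, not an algorithm from the document.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    """Mapper: emit a (word, 1) pair for each word in the input line."""
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reducer: sum all counts emitted for one word."""
    return word, sum(counts)

def mapreduce(lines):
    """Run map, simulate the shuffle (sort + group by key), then reduce."""
    pairs = sorted(kv for line in lines for kv in map_fn(line))
    return dict(reduce_fn(key, (v for _, v in group))
                for key, group in groupby(pairs, key=itemgetter(0)))
```

Real frameworks distribute the same three phases across machines; the design recommendations in the document (patterns, performance, cost) concern how to fit harder problems into this shape.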
Hadoop and HBase experiences in perf log project (Mao Geng)
This document discusses experiences using Hadoop and HBase in the Perf-Log project. It provides an overview of the Perf-Log data format and architecture, describes how Hadoop and HBase were configured, and gives examples of using MapReduce jobs and HBase APIs like Put and Scan to analyze log data. Key aspects covered include matching Hadoop and HBase versions, running MapReduce jobs, using column families in HBase, and filtering Scan results.
Greg Hogan – To Petascale and Beyond: Apache Flink in the Clouds (Flink Forward)
http://flink-forward.org/kb_sessions/to-petascale-and-beyond-apache-flink-in-the-clouds/
Apache Flink performs with low latency but can also scale to great heights. Gelly is Flink’s laboratory for building and tuning scalable graph algorithms and analytics. In this talk we’ll discuss writing algorithms optimized for the Flink architecture, assembling and configuring a cloud compute cluster, and boosting performance through benchmarking and system profiling. This talk will cover recent developments in the Gelly library to include scalable graph generators and a mixed collection of modular algorithms written with native Flink operators. We’ll think like a data stream, keep a cool cache, and send the garbage collector on holiday. To this we’ll add a lightweight benchmarking harness to stress and validate core Flink and to identify and refactor hot code with aplomb.
Software engineering research often requires analyzing multiple revisions of several software projects, be it to make and test predictions or to observe and identify patterns in how software evolves. However, code analysis tools are almost exclusively designed for the analysis of one specific version of the code, and the time and resource requirements grow linearly with each additional revision to be analyzed. Thus, code studies often observe a relatively small number of revisions and projects. Furthermore, each programming ecosystem provides dedicated tools, hence researchers typically only analyze code of one language, even when researching topics that should generalize to other ecosystems. To alleviate these issues, frameworks and models have been developed to combine analysis tools or automate the analysis of multiple revisions, but little research has gone into actually removing redundancies in multi-revision, multi-language code analysis. We present a novel end-to-end approach that systematically avoids redundancies every step of the way: when reading sources from version control, during parsing, in the internal code representation, and during the actual analysis. We evaluate our open-source implementation, LISA, on the full history of 300 projects, written in 3 different programming languages, computing basic code metrics for over 1.1 million program revisions. When analyzing many revisions, LISA requires less than a second on average to compute basic code metrics for all files in a single revision, even for projects consisting of millions of lines of code.
R is an open-source statistical programming language that can be used for data analysis and visualization. The document provides an introduction to R, including how to install R, create variables, import and assemble data, perform basic statistical analyses like t-tests and linear regression, and create plots and graphs. Key functions and concepts introduced include using c() to combine values into vectors, reading in data from CSV files, using lm() for linear regression, and the basic plot() function.
The document discusses Intel Threading Building Blocks (TBB), a C++ template library for parallel programming. TBB provides features like parallel_for to simplify parallelizing loops across CPU cores without managing threads directly. It uses generic programming principles and provides common parallel algorithms, concurrent data structures, and synchronization primitives to make parallel programming more accessible. TBB aims to improve both correctness through avoiding race conditions and performance through efficient hardware utilization.
The document discusses Intel Threading Building Blocks (TBB), a C++ template library for parallel programming. TBB provides features like parallel_for to simplify parallelizing loops across CPU cores without needing expertise in threads. It uses generic programming principles and provides common parallel algorithms, concurrent data structures, and task scheduling to make parallel programming more accessible and scalable. The example shows converting a serial velocity update loop to parallel using TBB.
These are the slides to the webinar about Custom Pregel algorithms in ArangoDB https://youtu.be/DWJ-nWUxsO8. It provides a brief introduction to the capabilities and use cases for Pregel.
Standardizing on a single N-dimensional array API for Python (Ralf Gommers)
MXNet workshop Dec 2020 presentation on the array API standardization effort ongoing in the Consortium for Python Data API Standards - see data-apis.org
Reproducible Computational Research in R (Samuel Bosch)
A short presentation with pointers on getting started with reproducible computational research in R. Some of the topics include git, R package development, document generation with R markdown, saving plots, saving tables and using packrat.
Real-time applications with Elixir and GraphQL (Moritz Flucht)
We present the experience we gathered over 15 months of running Elixir and GraphQL in production after the relaunch of a large job listings portal.
Elixir gives us the ability to develop a highly available backend with extremely low response times. It is consumed by several frontends via a GraphQL interface.
Elixir is a young functional programming language, first presented in 2011. However, it builds on the Erlang ecosystem, which over more than 32 years has become an extremely stable foundation for developing applications.
A common use case for Elixir is real-time applications, for example chat applications, bots, and IoT applications. Combined with GraphQL subscriptions, it is easy to inform clients about status updates from the server. We show what Elixir can be used for, how GraphQL offers an easy entry into data exploration, and why the combination of the two is worth considering.
This document summarizes a presentation on writing image processing algorithms using Python raster functions in ArcGIS. It introduces the concept of raster functions and raster models as a way to chain raster functions together. It then discusses how to build raster functions using Python by implementing various callback methods to interact with raster data. The presentation demonstrates applying a Compound Topographic Index (CTI) raster function and a site suitability analysis raster model to image layers in ArcGIS. It also provides additional considerations for optimizing performance, publishing functions, and collaborating on GitHub.
Utah Code Camp, Spring 2016. http://utahcodecamp.com In this presentation I describe modern C++. Modern C++ assumes features introduced in the C++11/14 standards. An overview of the new features is given, along with some idioms for modern C++ based on those features.
The document discusses various methods for reading data into R from different sources:
- CSV files can be read using read.csv()
- Excel files can be read using the readxl package
- SAS, Stata, and SPSS files can be imported using the haven package functions read_sas(), read_dta(), and read_sav() respectively
- SAS files with the .sas7bdat extension can also be read using the sas7bdat package
Code is not text! How graph technologies can help us to understand our code b... (Andreas Dewes)
Today, we almost exclusively think of code in software projects as a collection of text files. The tools that we use (version control systems, IDEs, code analyzers) also use text as the primary storage format for code. In fact, the belief that “code is text” is so deeply ingrained in our heads that we never question its validity or even become aware of the fact that there are other ways to look at code.
In my talk I will explain why treating code as text is a very bad idea which actively holds back our understanding and creates a range of problems in large software projects. I will then show how we can overcome (some of) these problems by treating and storing code as data, and more specifically as a graph. I will show specific examples of how we can use this approach to improve our understanding of large code bases, increase code quality and automate certain aspects of software development.
Finally, I will outline my personal vision of the future of programming, which is a future where we no longer primarily interact with code bases using simple text editors. I will also give some ideas on how we might get to that future.
This document provides an overview of using R and high performance computers (HPC). It discusses why HPC is useful when data becomes too large for a local machine, and strategies like moving to more powerful hardware, using parallel packages, or rewriting code. It also covers topics like accessing HPC resources through batch jobs, setting up the R environment, profiling code, and using packages like purrr and foreach to parallelize workflows. The overall message is that HPC can scale up R analyses, but developers must adapt their code for parallel and distributed processing.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... (Leonel Morgado)
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
The binding of cosmological structures by massless topological defects (Sérgio Sacani)
Assuming spherical symmetry and a weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or a modified gravity theory is mitigated, at least in part.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Authoring a personal GPT for your research and practice: How we created the Q... (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on the 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of the Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Immersive Learning That Works: Research Grounding and Paths Forward (Leonel Morgado)
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosome segregation in cell division.
1. Pronto Raster: A C++ library for Map Algebra
Alex Hagen-Zanker
a.hagen-zanker@surrey.ac.uk
2. This presentation
• Motivation
• Map Algebra
• Library requirements
• Main design elements
• Raster / Range concepts
• Expression Templates
• Elementary spatial transforms
• Generic moving window analysis
• Use of the library
• Applications
https://github.com/ahhz/raster
3. Motivation: Map Algebra
Local operations | Focal operations | Zonal operations
[Figure: worked examples of the three operation types. Local: cell-by-cell addition of two rasters (3 + 4 = 7). Focal: summing the cells in a window around each cell (e.g. 1 + 3 + 2 + 5 + 2 = 13). Zonal: summing cell values per zone, here zones A and B with sums 17 and 13.]
4. Motivation: Existing tools
• Geodata management: GDAL, Proj, GeoTIFF, etc.
• Image processing: OpenCV, GIL, etc.
• Linear algebra: Eigen, Blitz, UBLAS, etc.
• Scripting languages: ArcGIS Python, PC Raster, R-raster, Geotrellis
• GIS software: Raster Calculator and specific tools in QGIS, SAGA, GRASS, ArcGIS, etc.
All of these facilitate Map Algebra operations, but none meets all (my) requirements
5. Motivation: User requirements
Requirement: Key challenge
• Customizable: apply custom local, focal and zonal functions on any number of layers
• Efficient: minimal creation of temporary datasets for intermediate results
• Usable: idiomatic interface, minimal use of boilerplate code
• Scalable: work with large raster datasets that exceed available memory
• Flexible: work with a wide range of data formats without preprocessing
• Compatible: usable in conjunction with other libraries (e.g. for optimization, simulation control, etc.)
Target user: a researcher / consultant working in the geocomputational domain who does not like being constrained by built-in methods
6. Design: Range / Raster concepts
Range*
Gives access to a sequence of values through iterator access. Unlike a Container, it does not have to store its own values.
Range View
A Range that can be copied in O(1) and is meant to be passed to functions by value.
Raster
A Range with a known number of rows and columns. Additionally, it must give access to sub-rasters.
Raster View
A Raster that is also a View.
*Official proposal to the C++ standard; see also https://ericniebler.github.io/range-v3/
8. Design: Range / Raster concepts
#include <pronto/raster/io.h>
namespace pr = pronto::raster;
int main()
{
  auto ras = pr::open<int>("my_file.tif");
  int sum = 0;
  for(auto&& v : ras)
  {
    sum += v;
  }
}
Use GDAL to open a raster dataset as a model of the Raster View concept; use a range-based for-loop to iterate over all values in the raster.
9. Design: Range / Raster concepts
#include <pronto/raster/io.h>
namespace pr = pronto::raster;
int main()
{
  auto ras = pr::open<int>("my_file.tif");
  int sum = 0;
  for(auto&& v : ras.sub_raster(2,3,10,20))
  {
    sum += v;
  }
}
Iterate over a subset of the raster: first cell at (2,3), then 10 rows and 20 columns.
10. Design: Expression Templates
• Local operations take Raster Views as input and produce an expression template (ET) that is also a Raster View.
• Input and output conform to the same concept: highly composable
• Cell values of the ET are only calculated once they are iterated over
• No need to allocate memory
• A generic function, transform, is implemented; other local functions can be implemented in terms of transform
11. Design: Expression Templates
#include <pronto/raster/io.h>
#include <pronto/raster/transform_raster_view.h>
#include <functional>
namespace pr = pronto::raster;
int main()
{
  auto a = pr::open<int>("a.tif");
  auto b = pr::open<int>("b.tif");
  auto sum = pr::transform(std::plus<int>{}, a, b);
  for(auto&& v : sum.sub_raster(2,3,10,20))
  {
    // do something
  }
}
Open two datasets and apply std::plus to each cell of both. Iterating over a sub-raster of the expression raster calculates only the required values.
12. Design: Expression Templates
auto a = pr::open<int>("a.tif");
auto b = pr::open<int>("b.tif");
auto diff = pr::transform(std::minus<int>{}, a, b);
auto sqr_lambda = [](int x){return x*x;};
auto sqr_diff = pr::transform(sqr_lambda, diff);
int sum_of_squared_diff = 0;
for(auto&& v : sqr_diff)
{
  sum_of_squared_diff += v;
}
Compose a transform in multiple steps. Iterate over the values without allocating extra memory or creating a temporary dataset.
13. Design: Expression Templates
#include <pronto/raster/io.h>
#include <pronto/raster/raster_algebra_operators.h>
#include <pronto/raster/raster_algebra_wrapper.h>
namespace pr = pronto::raster;
int main()
{
  auto a = pr::raster_algebra_wrap(pr::open<int>("a.tif"));
  auto b = pr::raster_algebra_wrap(pr::open<int>("b.tif"));
  auto square_diff = (a - b) * (a - b);
  for(auto&& v : square_diff)
  {
    // do something
  }
}
Wrap a and b in raster_algebra_wrapper to allow the use of overloaded operators, then iterate over the values.
14. Design: Elementary spatial operators
Function: Effect
• pad: adds const-valued leading and trailing rows and columns with an immutable value
• sub_raster: as required by the Raster concept
• offset: returns the value found at a fixed spatial offset in the input Raster View
• h_edge / v_edge: iterate over all horizontally / vertically adjacent cell pairs
15. Design: Generalized moving windows
• Moving window analysis is a common type of focal operation
• Each cell obtains the value of a summary statistic calculated for the cells in a surrounding window
• Efficient implementations exploit the overlap in the windows of adjacent cells
• The moving window indicator is itself a Raster View
16. Design: Generalized moving windows
[Figure: four window types: circular window of cells, circular window of edges, square window of cells, square window of edges]
Complexity: O(r × n) (r = radius, n = raster size), reduced to O(n) by exploiting the overlap between windows.
Hagen-Zanker, A.H. (2016). A computational framework for generalized moving windows and its application to landscape pattern analysis. International Journal of Applied Earth Observation and Geoinformation.
17. Design: Generalized moving windows
• Generalized windows based on Visitor design
pattern:
• add(element, weight)
• subtract(element, weight)
• add(subtotal, weight)
• subtract(subtotal, weight)
• extract()
• Can be used with different moving window methods
• Can also be used for zonal statistics
18. Design: Generalized moving windows
#include <pronto/raster/moving_window_indicator.h>
#include <pronto/raster/indicator/mean.h>
namespace pr = pronto::raster;
int main()
{
auto in = pr::open<int>("my_file.tif");
auto window = pr::circle(2);
auto indicator = pr::mean_generator<int>{};
auto out = pr::moving_window_indicator(in, window, indicator);
for(auto&& v : out)
{
// do something
}
}
Calculate the mean indicator for a circular window with radius 2. "out" is a Raster View and is used as such.
19. Design: Assign to force evaluation
#include <pronto/raster/assign.h>
#include <pronto/raster/io.h>
#include <cmath>
namespace pr = pronto::raster;
int main()
{
  auto in = pr::open<int>("my_file.tif");
  auto root = pr::transform([](int x){return std::sqrt(x);}, in);
  auto out = pr::create_from_model<float>("sqrt.tif", in);
  pr::assign(out, root);
}
Cheap: create the ET for the square root of the values in "my_file.tif". Expensive: create "sqrt.tif" and write the values.
20. Design: use std::optional for nodata values
#include <pronto/raster/io.h>
#include <pronto/raster/nodata_transform.h>
#include <pronto/raster/plot_raster.h>
namespace pr = pronto::raster;
int main()
{
  auto in = pr::open<int>("demo.tif");
  auto nodata = pr::nodata_to_optional(in, 6);
  auto un_nodata = pr::optional_to_nodata(nodata, -99);
}
Value type: int
0 3 6 2 5
1 4 0 3 6
2 5 1 4 0
3 6 2 5 1
Value type: std::optional<int>
0 3 - 2 5
1 4 0 3 -
2 5 1 4 0
3 - 2 5 1
Value type: int
0 3 -99 2 5
1 4 0 3 -99
2 5 1 4 0
3 -99 2 5 1
Providing an idiomatic way for functions to skip over missing values or to leave cells unspecified.
21. Design: Runtime polymorphism through type erasure
• Runtime polymorphism is sometimes required:
• operations specified by the user (e.g. Raster Calculator)
• value type of raster dataset unknown at compile time
Type erased class: Holds / Is a Raster View?
• any_raster<T>: holds a Raster View object with value type T / Yes
• any_blind_raster: holds any_raster<T>, where T is a supported type / Has rows(), cols(), sub_raster(...); lacks begin(), end()
22. Design: Runtime polymorphism through type erasure
pr::any_raster<int> plus_or_minus(bool do_plus)
{
  auto a = pr::open<int>("a.tif");
  auto b = pr::open<int>("b.tif");
  if(do_plus) {
    auto x = pr::transform(std::plus<int>{}, a, b);
    return pr::make_any_raster(x);
  } else {
    auto y = pr::transform(std::minus<int>{}, a, b);
    return pr::make_any_raster(y);
  }
}
Runtime decision: x and y are different types, but both are Raster Views with int as value type.
23. Design: Runtime polymorphism through type erasure
pr::any_blind_raster a_in = pr::open_any("a.tif");
pr::any_blind_raster b_in = pr::open_any("b.tif");
auto a = pr::raster_algebra_wrap(a_in);
auto b = pr::raster_algebra_wrap(b_in);
auto c = (b - a) * (b - a);
pr::export_any("out.tif", c.unwrap());
Opened as the appropriate value type; wrapped in raster_algebra_wrapper so that raster algebra works with any_blind_raster; unwrapped from raster_algebra_wrapper and exported to the appropriate value type.
24. Current state of the library
• Still relatively new
• Consolidating existing functionality before extending
• Documentation
• Majority of library is documented
• Including stand-alone examples for most functions
• Testing
• Using Google Test
• Still poor coverage, but solid core and expanding
• Benchmarks
• Still minimal
• Comparing against scripting languages (good)
• Comparing against direct C++ implementation (reasonable, depending on the type of access; optimal is read-only, forward-only, and compile-time polymorphism)
25. Future music: wish list
• Transpose function
• Vertical / Horizontal flip function
• Mosaic function
• Parallel processing
• Sparse rasters?
• Many more users and an active community!
26. Future music: Parallel processing
• The sub_raster() requirement of Raster Views, combined with the Expression Template design, allows doing the calculation for only a part of the raster
• For local operations and generalized moving windows, only the assign function needs to be parallelized:
• Split the rasters into sub-raster blocks
• Assign all blocks asynchronously
• And... the rest of the library needs to be thread safe (esp. raster data access)
28. Conclusions
Requirement: Key challenge / Solution
• Customizable: apply custom local, focal and zonal functions on any number of layers / Transform function; Indicator concept; Generalized moving windows
• Efficient: minimal creation of temporary datasets for intermediary results / Expression Templates; non-copying spatial operations; potential for parallelization
• Usable: idiomatic interface, minimal use of boilerplate code / Range-based for-loop; std::optional for nodata values
• Scalable: work with large raster datasets that exceed available memory / GDAL buffering and LRU
• Flexible: work with a wide range of data formats without preprocessing / GDAL for data access; RAII
• Compatible: usable in conjunction with other libraries (e.g. for optimization, simulation control, etc.) / C++; simple concepts with standard interfaces