Tutorial presentation giving an overview of automatically extracting geospatial features from scanned historic maps using Python, OpenCV, and PostGIS.
This document contains several short poems and stories that practice the use of different letters and sounds of the Spanish alphabet. The texts describe children, animals, and other creatures and their everyday activities in a simple, entertaining way to build early literacy in beginning students.
The cat Glutão loved to eat, and his owners wanted to be rid of him. He ate a pig, a chimney sweep, a bride and groom with their wedding guests, and the moon, claiming he could still eat more. When he tried to eat the sun, his belly burst from being too full.
This document contains several short poems about animals, plants, and aspects of nature. The poems describe the traits and behaviour of butterflies, ladybirds, bees, earthworms, and other living things in a playful, poetic way.
This document contains the lyrics of popular Portuguese-language children's songs about animals, farewells, and greetings. It includes songs about butterflies, chicks, dogs, and ants, as well as songs for saying hello and goodbye.
This document presents a whole-word method called HABLA-M for teaching students the letter "ll". It describes 11 steps that cover the letter's shape, words that contain it, sentences with pictures, reading, writing, upper and lower case, syllables, graphemes, and assessment. The method centres on the student's preferences and learning pace.
This document presents an adaptation of the folk tale "Ricitos de Oro y los tres osos" (Goldilocks and the Three Bears). It tells how Goldilocks, while playing in the forest, finds the three bears' house and, without permission, tastes their soup, sits in their chairs, and sleeps in their beds. When the bears return, they discover what happened and frighten Goldilocks, who runs away and never comes back to the three bears' house.
The European Union has agreed an oil embargo against Russia in response to its invasion of Ukraine. The embargo will ban most Russian oil imports into the EU and will be phased in over the next six months. It will form part of a sixth EU sanctions package against Russia intended to increase economic pressure on Putin's government.
Do Cosmos a Terra: Usando Python para desvendar os mistérios do Universo, by Eduardo S. Pereira
This document discusses using Python to explore the mysteries of the universe. It introduces cosmological and astrophysical concepts like the cosmic star formation rate and supermassive black holes. It presents the PyCosmicStar code for modeling the cosmic star formation rate using different dark matter halo mass functions. Wavelet coherence analysis is also demonstrated for studying connections between signals like the sun and Earth.
Introduction to OpenCV with python (at taichung.py), by Max Lai
The document introduces OpenCV with Python. It discusses OpenCV, an open source computer vision library that supports languages such as C, C++, Python, and Java, offers over 2,500 algorithms, and is cross-platform. The document then covers image processing concepts like image I/O, smoothing, edge detection, and histogram equalization using OpenCV. It also discusses face detection in images using OpenCV APIs and provides references for further reading.
This manual describes how to build OpenCV with OpenCL support for Android.
If you only want to use OpenCL from OpenCV, please see:
http://github.com/noritsuna/OpenCVwithOpenCL4AndroidNDKSample
This document provides instructions for setting up OpenCV 3.1.0 to work with Visual Studio 2015 on Windows 10. It describes downloading and installing OpenCV, configuring system environment variables to include OpenCV's bin and lib directories, creating a sample OpenCV project in Visual Studio, and adding necessary include directories and library dependencies to display an image. Following these steps allows one to begin developing OpenCV applications using Visual Studio.
This document provides instructions for installing OpenCV 2.0 and Dev C++ and configuring the compiler settings and directories in Dev C++ to allow OpenCV code to compile. Key steps include installing OpenCV 2.0 and Dev C++, adding OpenCV compiler options, and configuring the binaries, libraries, and include directories to point to the OpenCV install locations. The document also provides instructions for a sample cvTest.cpp program to display an image using OpenCV.
This document discusses mining smartphone sensor data using Python. It describes how smartphones have various sensors like accelerometers and gyroscopes that can provide data on movement and location. The talk focuses on collecting accelerometer data from a smartphone, examining raw data samples, extracting features from segmented data windows, and using those features to classify activities through machine learning. The goal is to demonstrate how to work with smartphone sensor data and classify activities like walking, running and using stairs.
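The feature-extraction step described in that abstract (slicing a raw accelerometer stream into fixed windows and computing statistics per window) can be sketched as follows. The window length, the synthetic "walking" signal, and the choice of features are assumptions for illustration.

```python
import numpy as np

def window_features(signal, window=50):
    """Return mean, std, and peak-to-peak amplitude for each full window."""
    n = len(signal) // window
    feats = []
    for i in range(n):
        w = signal[i * window:(i + 1) * window]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Synthetic accelerometer trace: periodic "steps" plus sensor noise.
rng = np.random.default_rng(0)
walking = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.standard_normal(500)

features = window_features(walking)
print(features.shape)  # one row of features per window
```

In the full workflow each feature row would be labelled with an activity (walking, running, stairs) and fed to a classifier.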
OpenCV 3.0 plans to focus on API changes to improve the C++ interface and deprecate the C API. It will add new functionality and modules while maintaining backwards compatibility. The roadmap includes alpha and beta releases in late 2013 and early 2014 with a final 3.0 release. Acceleration through hardware abstraction and optimized code for platforms like mobile CUDA and OpenCL is a priority.
The document provides a history of mobile phones and smartphones from 1973 to present day. It discusses key milestones like the first mobile call in 1973, the introduction of SMS messaging, and the rise of smartphones. It then summarizes the development of open source mapping applications like Geopaparazzi for Android, including its features for notes, photos, GPS logs and spatial data. Examples of projects using Geopaparazzi are also mentioned.
Face Recognition with OpenCV and scikit-learn, by Shiqiao Du
A lightweight implementation of a face recognition system with Python, OpenCV, and scikit-learn.
An implementation of a simple face recognition system using Python, OpenCV, and scikit-learn, presented at Tokyo.Scipy5.
The document discusses an OpenCV C++ workshop presented by Lentin Joseph. It provides an overview of OpenCV, including that it is an open source computer vision library started in 1999. It then covers installing OpenCV from source or Ubuntu packages, setting it up in Eclipse, and various OpenCV modules and applications like gesture recognition, segmentation, and face detection. Examples are provided of OpenCV APIs for reading images and video, image processing techniques, and contour detection.
Open Source Computer Vision (OpenCV) is a BSD-licensed open source library for computer vision and image processing. The document outlines OpenCV's capabilities including image enhancement, object classification and tracking, and face detection and recognition. It provides examples of using OpenCV in C++ and Python to load and display images, detect faces, and enhance images. The document concludes that OpenCV is a cross-platform library with over 2,000 algorithms for computer vision and image processing tasks.
Text analytics in Python and R with examples from Tobacco Control, by Ben Healey
This document discusses text analytics techniques for summarizing and analyzing unstructured text documents, with examples from analyzing documents related to tobacco control. It covers data cleaning and standardization steps like removing punctuation, stopwords, stemming, and deduplication. It also discusses frequency analysis using document-term matrices, topic modeling using LDA, and unsupervised and supervised classification techniques. The document provides examples analyzing posts from new users versus highly active users on an online forum, identifying topics and comparing topic distributions between different user groups.
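The LDA topic-modeling step mentioned in that abstract can be sketched with scikit-learn: build a document-term matrix, fit the model, and read off each document's topic distribution. The toy corpus and the topic count of 2 are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: two cessation-themed posts and two forum-mechanics posts.
docs = [
    "smoking cessation support quit smoking",
    "quit support cessation programme",
    "forum post reply thread moderator",
    "thread forum reply post moderator",
]

dtm = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
doc_topics = lda.transform(dtm)  # per-document topic distribution, rows sum to 1
print(doc_topics.shape)
```

Comparing `doc_topics` rows between user groups is exactly the kind of topic-distribution comparison the abstract describes.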
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2015-embedded-vision-summit-opencv
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Gary Bradski, President and CEO of the OpenCV Foundation, presents the "OpenCV Open Source Computer Vision Library: Latest Developments" tutorial at the May 2015 Embedded Vision Summit.
OpenCV is an enormously popular open source computer vision library, with over 9 million downloads. Originally used mainly for research and prototyping, in recent years OpenCV has increasingly been used in deployed products on a wide range of platforms from cloud to mobile.
The latest version, OpenCV 3.0 is currently in beta, and is a major overhaul, bringing OpenCV up to modern C++ standards and incorporating expanded support for 3D vision. The new release also introduces a modular “contrib” facility that enables independently developed modules to be quickly integrated with OpenCV as needed, providing a flexible mechanism to allow developers to experiment with new techniques before they are officially integrated into the library.
In this talk, Gary Bradski, head of the OpenCV Foundation, provides an insider’s perspective on the new version of OpenCV and how developers can utilize it to maximum advantage for vision research, prototyping, and product development.
Image processing with OpenCV allows various techniques to manipulate digital images. Some key techniques include smoothing to remove noise, erosion and dilation to diminish or accentuate features, and edge detection algorithms like Sobel, Laplace, and Canny to find edges. The core OpenCV module provides functions for accessing pixel values, adjusting contrast and brightness, and drawing shapes. Feature detection identifies keypoints like edges, corners, and blobs, then describes the details around them for later matching against other images. Common algorithms include SURF, SIFT, and BRIEF for feature extraction and description and FLANN and BruteForce for feature matching.
The document discusses computer vision and deep learning using OpenCV. It introduces video content analysis applications in various domains like entertainment, healthcare, retail, etc. It describes techniques for motion detection, tracking, identification and behavior analysis in video. It also discusses frameworks for action and event detection. Finally, it recommends resources for deep learning using OpenCV and describes large datasets like Open Images and YouTube-8M that are useful for training computer vision and deep learning models on images and video.
Text Classification in Python – using Pandas, scikit-learn, IPython Notebook ..., by Jimmy Lai
Big data analysis relies on a variety of handy tools to gain insight from data easily. In this talk, the speaker demonstrates a data mining flow for text classification using many Python tools. The flow consists of feature extraction/selection, model training/tuning, and evaluation. Tools used in the flow include Pandas for feature processing, scikit-learn for classification, IPython Notebook for fast sketching, and matplotlib for visualization.
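The extract-train-predict flow described above collapses nicely into a scikit-learn pipeline. This is a minimal sketch on a four-document toy dataset; the texts and labels are made up for illustration, and real use would add a train/test split and tuning.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["free prize winner", "cheap prize offer",
         "meeting at noon", "lunch at noon"]
labels = ["spam", "spam", "ham", "ham"]

# Feature extraction (TF-IDF) and model training in one pipeline object.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["prize offer winner"]))
```

The pipeline object keeps the vectorizer's vocabulary and the model's weights together, so evaluation and deployment use identical feature processing.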
The document summarizes an OpenCV based image processing attendance system. It discusses using OpenCV to detect faces in images and recognize faces by comparing features to a database. The key steps are face detection using Viola-Jones detection, face recognition using eigenfaces generated by principal component analysis to project faces into "face space", and measuring similarity by distance between projections.
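The eigenface recognition step described above (PCA projection into "face space", then nearest-neighbour by distance) can be sketched in plain NumPy. The random 16x16 "faces" and the choice of 4 components are assumptions standing in for a real face database.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((8, 256))          # 8 flattened 16x16 "face" images
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components (the eigenfaces) via SVD of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                   # keep the top 4 components

def project(img):
    """Project an image into face space."""
    return eigenfaces @ (img - mean_face)

# Recognition: nearest neighbour in face space.
probe = faces[3] + 0.01 * rng.random(256)   # noisy copy of face 3
dists = [np.linalg.norm(project(probe) - project(f)) for f in faces]
print(int(np.argmin(dists)))  # → 3
```

In the full system the probe would first pass through Viola-Jones detection to crop and normalize the face region before projection.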
This document provides steps to set up OpenCV 3.2.0 with CodeBlocks on Windows. It details downloading and installing OpenCV, tdm-gcc, CodeBlocks, and CMake. It then walks through configuring the tools in CodeBlocks to use the OpenCV libraries and build a sample OpenCV application to read and display an image.
The document discusses OpenCV, an open source computer vision and machine learning software library. It provides instructions for compiling OpenCV 3.2 on Windows 10 with Visual Studio 2015, an overview of OpenCV modules for tasks like image processing, video analysis, and machine learning, and examples of how to set up a basic OpenCV project in Visual Studio and write a simple program to read and display an image.
The Matrix: connecting and re-using digital records of archaeological investi..., by Keith.May
The document describes a presentation on the Matrix project, which aims to improve the reusability of stratigraphic data from archaeological investigations for Bayesian chronological modeling. It discusses issues with consistency and standards in digital archives of stratigraphic data. It presents a prototype tool called Phaser that was developed to explore converting legacy stratigraphic matrix data to JSON format. The presentation covers topics like process modeling for stratigraphic analysis, use of linked open data vocabularies, and how the prototype tool could be used for phasing records and Bayesian modeling. The overall goal of the project is to increase the findability, accessibility, interoperability and reusability of archaeological data.
This document provides a whirlwind tour of GIS concepts in 25 slides. It defines GIS as geographical information science and discusses data capture methods like surveys and remote sensing. It explores analysis and visualization techniques, different GIS platforms, common spatial phenomena modeled in GIS, and modeling approaches. The document also covers GIS history, software, data types, attributes, overlay operations, coordinate reference systems, common file formats, data storage, open source GIS, web GIS, and potential future directions for GIS including location-based services and cloud computing.
This document provides a whirlwind tour of GIS concepts in 25 slides. It defines GIS as geographical information science and discusses data capture techniques including remote sensing and sensor networks. It explores analysis and visualization of spatial data in 2D and 3D maps and how visualization can enable further analysis. The document also briefly outlines the history of GIS software and formats, as well as concepts like spatial data types, attributes, modeling frameworks, coordinate reference systems, and industry standard and open source GIS tools. It concludes with discussions of future directions for GIS including location-based services, sensors, cloud computing, and social implications.
This document provides a whirlwind tour of GIS concepts in 25 slides. It defines GIS as geographical information science and discusses data capture methods like remote sensing and GPS. It explains how spatial data can be analyzed and visualized in 2D and 3D maps. Common data types in GIS like vector and raster data are introduced along with concepts like attributes, overlay operations, and coordinate reference systems. Popular GIS software like ArcGIS and open source options are overviewed. The document concludes by discussing emerging areas in GIS like web mapping, mobile apps, sensor networks, and cloud computing.
This document provides a whirlwind tour of GIS concepts in 25 slides. It defines GIS as geographical information science and discusses data capture methods like surveys and remote sensing. It explains how GIS allows for analysis and visualization of spatial data in 2D and 3D maps. Key aspects of GIS covered include its history, common data types of vector and raster, attributes, modeling frameworks, data storage, open source options, and future directions such as location-based services and cloud computing. The document aims to quickly introduce fundamental GIS concepts.
The document provides an introduction to Geographic Information Systems (GIS) and the open-source GIS software QGIS. It discusses John Snow's 1854 map of a cholera outbreak in London and how it helped establish epidemiology. It then defines GIS and describes common components like data input/output, data models, and editing tools. The document also demonstrates how to perform tasks in QGIS like adding vector and raster layers, importing GPX files, editing shapefiles, creating new layers, merging shapefiles, and filtering/separating data.
Jerry Clough presents techniques for analyzing OpenStreetMap data using QGIS. He discusses using OSM data to simulate the European Urban Atlas project and mapping retail locations. Case studies include analyzing pub density in Britain, simulating land use classification, and tracking street light mapping. Challenges with OSM data like polygon overlaps and tagging variations are also covered.
Jerry Clough presents techniques for analyzing OpenStreetMap data using QGIS. He discusses using OSM data to simulate the European Urban Atlas project and mapping retail locations. Case studies include analyzing pub density in Britain, simulating land use classification, and tracking street light and retail mappings. Challenges with OSM data like polygon overlaps and tagging variations are also covered.
CensusGIV - Geographic Information Visualisation of Census Data, by CASA, UCL
The document discusses the development of CensusGIV, a prototype for providing innovative geographic visualization of UK census small area statistics. It aims to develop an interactive web-based mapping application using open source technologies to allow users to easily explore and analyze census data through dynamic choropleth maps and other visualization techniques. The document outlines the objectives, design considerations, system architecture, and timeline for the CensusGIV prototype. Key aspects discussed include data access, map creation, color theory, and a modular client-server architecture.
An introduction to GIS Data Types. Strengths and weaknesses of raster and vector data are discussed. Also covered is the importance of topology. Concludes with a discussion of the vector-based format of OpenStreetMap data.
This document provides an introduction to Geographic Information Systems (GIS) and geographic data sources. It defines what GIS is, explains why GIS is unique in its ability to handle spatial data and show connections between spatially proximate activities. It describes how GIS works using the layer approach to integrate spatial and attribute data. The document outlines different types of spatial data like raster and vector formats. It discusses various GIS functions such as measurement, analysis of single and multiple objects, and creating new objects through operations like overlay and buffering. Finally, it provides examples of sources of spatial data, attributes, and an exercise for students to find appropriate online geographic datasets for research topics.
This document discusses geospatial analytics and spatial capabilities on big data systems. It covers analyzing movement data through techniques like trajectory analysis and discretization. It discusses operational requirements for analyzing telematics data at large scales. It proposes using Apache Spark and geospatial libraries on Hadoop for distributed processing and storage. Key analytical challenges discussed include snap-to-road matching, trajectory clustering, and traffic event detection. Machine learning techniques like kernel methods and sequence analysis are proposed for solving these challenges.
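The discretization technique mentioned above can be sketched simply: quantize GPS points to grid cells so a trajectory becomes a compact cell sequence that is cheap to group, deduplicate, and compare at scale. The cell size and sample trip are assumptions; production systems typically use road-network map matching or geohashes instead of a plain grid.

```python
def discretize(points, cell=0.01):
    """Map (lon, lat) points to a sequence of grid cells, collapsing repeats."""
    cells = []
    for lon, lat in points:
        c = (int(lon // cell), int(lat // cell))
        if not cells or cells[-1] != c:   # drop consecutive duplicates
            cells.append(c)
    return cells

# A short hypothetical trip: two nearby fixes, then a jump to the next cell.
trip = [(13.4015, 52.5203), (13.4017, 52.5204), (13.4150, 52.5230)]
print(discretize(trip))
```

Cell sequences like this are a natural key for distributed grouping in Spark or Hadoop, since each cell ID partitions cleanly.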
Astronomical Data Processing on the LSST Scale with Apache SparkDatabricks
The next decade promises to be exciting for both astronomy and computer science with a number of large-scale astronomical surveys in preparation. One of the most important ones is Large Scale Survey Telescope, or LSST. LSST will produce the first ‘video’ of the deep sky in history by continually scanning the visible sky and taking one 3.2 giga-pixel image every 20 seconds. In this talk we will describe LSST’s unique design and how its image processing pipeline produces catalogs of astronomical objects. To process and quickly cross-match catalog data we built AXS (Astronomy Extensions for Spark), a system based on Apache Spark. We will explain its design and what is behind its great cross-matching performance.
Spatial decision support and analytics on a campus scale: bringing GIS, CAD, ...Safe Software
1. The document discusses how spatial decision support and analytics can be applied at the campus scale by integrating various data sources such as GIS, CAD, BIM, and Tableau.
2. A key challenge is that a campus is a complex system with different processes and specialized data silos. The presentation explores using GIS as an enabling technology to create a comprehensive spatial model and dissolve these silos.
3. Examples of spatial decision problems on campus include optimal space assignment and indoor routing. Solutions involve building spatial databases and networks from CAD floor plans to support optimization and scenario analysis.
Euro30 2019 - Benchmarking tree approaches on street dataFabion Kauker
By examining the use of algorithms to solve the Prize Collecting Steiner Tree (PCST) problem we consider the facets which determine effectiveness. Specifically, by measuring a number of solution approaches and comparing them based on metrics. In order to understand the solution approach we must asses why it is useful. Our goal is to determine the effectiveness of Mixed Integer Programming (MIP) and heuristic methods. Utilizing freely available street and address data a base graph representation is created and then computed on. Such that a tree connects every address utilizing the minimum total length of edges from the street network. This is the basis of many approaches used to solve infrastructure problems including telecommunications network design and costing. The analysis is conducted on methods developed by Hegde et al. 2015, Ljubić et al. 2006, and Teitz et al. 1963. We present a data processing architecture, as well as a concise set of results and a framework for assessing the facets and trade-offs for a given approach. In this case the heuristic approaches are proven to have advantages in the simplistic case but fail when more complex requirements are added. This is where the MIP approach is able to capitalize, whilst detrimentally limiting the flexibility due to the strictness and specificity in modelling.
Moving Objects and Spatial Data ComputingKwang Woo NAM
This document summarizes a presentation on moving objects and spatial data computing. It discusses spatial data types including location data from GPS, images, videos, sensors and more. Spatial data is increasingly integrated with multimedia content. Technologies like Google Street View, drones, autonomous vehicles, and black boxes in cars generate large amounts of spatial and multimedia data. Hadoop and Spatial databases like PostGIS are important for analyzing spatial big data from social media.
Accumulo Summit 2016: GeoMesa: Using Accumulo for Optimized Spatio-Temporal P...Accumulo Summit
LocationTech GeoMesa is a project that builds on open-source, distributed databases like Accumulo, HBase, and Cassandra to scale up indexing, querying, and analyzing billions of spatio-temporal data points. GeoMesa uses space-filling curves to index multi-dimensional data in Accumulo, and we'll discuss recent improvements for non-point geometries. Over the two and a half years GeoMesa has been an open-source project, GeoMesa's Accumulo schemas have evolved and our team has had a chance to work through creating and optimizing custom Accumulo iterators. These custom iterators allow for better query performance and interesting aggregations. GeoMesa provides support for distributed processing in Spark via MapReduce input and output formats that extend their Accumulo counterparts. We will discuss the performance benefit gained by reducing the number of default map/Spark tasks created for complex query patterns. The talk will conclude with updates about GeoMesa's integration with Jupyter notebook and improvements to GeoMesa's Spark integration.
– Speaker –
Dr. James Hughes
Mathematician, Commonwealth Computer Research, Inc (CCRi)
Dr. James Hughes is a mathematician at Commonwealth Computer Research, Inc. in Charlottesville, Virginia. He is a core committer for GeoMesa which leverages Accumulo and other distributed database systems to provide distributed computation and query engines. He is a LocationTech committer for GeoMesa, SFCurve, and GeoBench. He serves on the LocationTech Project Management Committee and Steering Committee. Through work with LocationTech and OSGeo projects like GeoTools and GeoServer, he works to build end-to-end solutions for big spatio-temporal problems. He holds a PhD in algebraic topology from the University of Virginia.
— More Information —
For more information see http://www.accumulosummit.com/
Using R to Visualize Spatial Data: R as GIS - Guy LansleyGuy Lansley
This talk demonstrates some of the benefits of using R to visualize spatial data efficiently and clearly.
It was originally presented by Guy Lansley (UCL and the Consumer Data Research Centre) to the GIS for Social Data and Crisis Mapping Workshop at the University of Kent.
This document discusses automated schematization and its application to creating schematic maps from geospatial data. It provides background on map generalization techniques like simplification, amalgamation, elimination, typification, exaggeration and displacement. The document then describes an optimization framework developed by the Centre for Geospatial Science to automate schematization using techniques like hillclimbing, simulated annealing and genetic algorithms. It demonstrates how this framework can simplify geospatial features while enforcing topological and geometric constraints to produce schematic maps from original geospatial datasets.
Similar to Looking into the past - feature extraction from historic maps using Python, OpenCV and PostGIS (20)
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You...Aggregage
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
The Building Blocks of QuestDB, a Time Series Databasejavier ramirez
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Analysis insight about a Flyball dog competition team's performanceroli9797
Insight of my analysis about a Flyball dog competition team's last year performance. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Learn SQL from basic queries to Advance queriesmanishkhaire30
Dive into the world of data analysis with our comprehensive guide on mastering SQL! This presentation offers a practical approach to learning SQL, focusing on real-world applications and hands-on practice. Whether you're a beginner or looking to sharpen your skills, this guide provides the tools you need to extract, analyze, and interpret data effectively.
Key Highlights:
Foundations of SQL: Understand the basics of SQL, including data retrieval, filtering, and aggregation.
Advanced Queries: Learn to craft complex queries to uncover deep insights from your data.
Data Trends and Patterns: Discover how to identify and interpret trends and patterns in your datasets.
Practical Examples: Follow step-by-step examples to apply SQL techniques in real-world scenarios.
Actionable Insights: Gain the skills to derive actionable insights that drive informed decision-making.
Join us on this journey to enhance your data analysis capabilities and unlock the full potential of SQL. Perfect for data enthusiasts, analysts, and anyone eager to harness the power of data!
#DataAnalysis #SQL #LearningSQL #DataInsights #DataScience #Analytics
Open Source Contributions to Postgres: The Basics POSETTE 2024ElizabethGarrettChri
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
Open Source Contributions to Postgres: The Basics POSETTE 2024
Looking into the past - feature extraction from historic maps using Python, OpenCV and PostGIS
1. Looking into the past - feature extraction from historic maps using Python, OpenCV and PostGIS.
2. ESRC ADRC-S
• Administrative Data Research Centre – Scotland (ADRC-S)
• Part of the Administrative Data Research Network (ADRN)
• An ESRC Data Investment
• 12 ADRC-S Work Packages
• EDINA working on WP5 – Provision of Geocoding and Georeferencing tools
3. What and Why?
• Prof(s) Chris Dibben and Jamie Pearce from UoE GeoSciences
• Effects of past environmental conditions on (longitudinal) population cohorts
• Trains – where (and which populations) did they run alongside in the past and bring their air pollution?
• Urban – did past populations live in predominantly urban or rural locales, and were these same populations experiencing urbanisation?
• Industry – where were particular types of (polluting) industry located?
• Greenspace and Bluespace – e.g. parks and water
4. Historic Maps – a record of past landscapes
• ADRC's remit is (all of) Scotland.
• Manual capture (digitising) of features from historic maps is not going to scale given the resources available.
• Chris and Jamie's challenge to EDINA – is it possible to automagically capture features from historic maps?
• Historic maps in Digimap Historic
• For the purpose of this work we are using (higher quality) full-colour scans of historic maps provided by Chris Fleet @ NLS
• Mainly been looking at 2 map series provided by NLS:
• http://maps.nls.uk/geo/explore/#zoom=15&lat=55.9757&lon=-3.1799&layers=168
• http://maps.nls.uk/geo/explore/#zoom=15&lat=55.9757&lon=-3.1799&layers=10
5. Environment
• Linux (Ubuntu)
• Python (3)
• Virtualenv – isolated Python environments
• PyCharm Python IDE (Community Edition)
• OpenCV – Computer Vision / Image Processing / Image Analysis
• PostgreSQL - Datastore
• PostGIS – Spatial query (analysis) engine
• QGIS – Desktop GIS / PostGIS data viewer
• (a bit of) ArcGIS for ArcScan (Line vectorization)
6. OpenCV
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision.
7. Python Libraries used
• numpy – numpy (array) data structures are central to all the other libraries where we are manipulating image/raster datasets via Python
• cv2 – Python interface to OpenCV
• Shapely – (GEOS-based) package for manipulation and analysis of planar geometric objects
• Fiona – (F)ile (i)nput (o)utput (n)o (a)nalysis. An alternative API to OGR for reading and writing vector GIS datasets, e.g. Shapefiles / GeoJSON
• Rasterio – Raster (i)nput (o)utput. Rasterio is to raster GIS datasets as Fiona is to vector GIS datasets
• Snaql – keeps (templated) SQL query blocks separate from Python code and renders (with context) the query block when needed
Assuming PostGIS, if you add in a map renderer like Mapnik, this lot gives you everything needed to do geospatial data analysis (raster and vector), data conversion, data management and map automation.
8. Python OpenCV Demo
• Load image
• Changing colourspaces – convert the colour image to greyscale
• Threshold image – partition the greyscale image into bilevel foreground (white) and background (black) regions to simplify things
• Finding image contours – contour (lines) separate foreground regions from background regions. Having traced contours we can describe the shape/size etc. of foreground regions and the relationships between regions
• Finding patterns / classifying features
9. Apply similar processes to historic maps to extract geographic features
(1) Water features (Bluespace)
(2) Railways
(3) Urban Form / Change
10. #15759 – extract 'bluespace'
(1) Water features (Bluespace)
Rivers / canals / inland water are shown as blue lines or stippled blue areas.
Threshold to isolate blue pixels.
Find contours – each stipple mark / line forms a contour.
Contours form a hierarchy. Parents that hold child contours are water regions.
11. Method 2
The process breaks down when water regions are not entirely bound by blue lines or are broken by other features (bridges).
So, as an alternative method, find every individual stipple and then form groups of these to give water regions.
Either of these methods of capturing blue stippled regions can be applied to other stippled regions, e.g. green stippled regions (parks – greenspace).
12. Change – old Edinburgh quarries change to shopping centres, or from bluespace to greenspace!
14. In QGIS we digitised polygons covering groups of features of interest so we can explore the RGB values of the underlying pixels and use them to inform the colour separation processing.
15. Load the training polygons and the NLS 3-band raster into PostGIS and do spatial analysis to find the pixel values within each polygon.
Calculate aggregate min/max values of RGB (BGR in OpenCV!) across each feature group and use these in the OpenCV Python algorithm to do colour separation on the source 25k image. More pre/post processing needed.
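A sketch of the per-band aggregate query this step implies, using PostGIS raster functions (ST_Clip, ST_SummaryStats). The table and column names (training_polygons, nls_raster, feature_group) are hypothetical; in the real pipeline the SQL would live in a Snaql template, here it is inlined for illustration:

```python
# Hypothetical PostGIS query: per feature group, min/max pixel value
# for one raster band, clipped to the training polygons.
QUERY = """
SELECT p.feature_group,
       MIN((ST_SummaryStats(ST_Clip(r.rast, p.geom), {band})).min) AS band_min,
       MAX((ST_SummaryStats(ST_Clip(r.rast, p.geom), {band})).max) AS band_max
  FROM training_polygons p
  JOIN nls_raster r ON ST_Intersects(r.rast, p.geom)
 GROUP BY p.feature_group;
"""

def render_band_query(band):
    """Render the query for one band (1..3). Remember OpenCV holds
    pixels in BGR order, so map band numbers to channels carefully."""
    return QUERY.format(band=band)

for band in (1, 2, 3):
    print(render_band_query(band))
```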
19. (2) Extracting Railways
From the source 1:25,000 NLS historic map, "black" pixels are extracted after running the colour separation process. This isolates the dashes in railway lines (but also text/buildings).
20. From dashes to (railway) lines
So do contour tracing and apply size/shape constraints to isolate only the dashes in the railway lines. Then join up neighbouring dash candidates to form railway lines.
21. Complications…
The process needs to be refined to cope with noisier, more complicated regions of the map. It is not helped that some small buildings exhibit similar size/shape characteristics to the dashes in railway lines.
A refinement might be to introduce a look-ahead constraint that minimises the change in line direction as candidates are grouped, since railway lines don't make sharp 90 degree turns.
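The proposed look-ahead constraint could be sketched as an azimuth check; the function names and the 30-degree tolerance are hypothetical choices, not part of the deck's actual implementation:

```python
import math

def azimuth(p, q):
    """Bearing of segment p->q in degrees (0 = north, clockwise)."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360

def accept_candidate(prev_pt, cur_pt, cand_pt, max_turn=30.0):
    """Accept the next dash candidate only if extending the line
    changes direction by less than the tolerance."""
    turn = abs(azimuth(prev_pt, cur_pt) - azimuth(cur_pt, cand_pt)) % 360
    turn = min(turn, 360 - turn)
    return turn <= max_turn

print(accept_candidate((0, 0), (10, 1), (20, 2)))   # gentle continuation
print(accept_candidate((0, 0), (10, 1), (10, 12)))  # sharp turn, rejected
```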
23. New vs Old (Buildings)
Current building footprints are held in OS MasterMap. Lines from the historic map selected as corresponding to hatched building areas are overlain against the OSMM building footprints.
24. All change
Examples of change in Edinburgh between ca1900 and today: the locale of the Fort public housing project; West Bowling Green Street & Bowling Green Street.
26. From hatch lines to buildings
1. All lines pulled from the NLS historic map sheet. No intelligence about what each line represents. Spaghetti!
2. Form groups of hatch lines. Criteria for group membership: spatial proximity; direction (azimuth); lines are spatially disjoint; lines are parallel to one another.
3. Final set of line groups. These correspond to building footprints. Other lines from the historic map did not meet the group membership criteria and thus make no further contribution to the analysis.
4. Derive a pseudo building polygon for each group. Could place an MBR around them but instead...
5. ...form a Convex Hull around the lines to provide a polygon for the group. For the historic maps this is the equivalent of the building footprint provided by the OS MasterMap data.
6. Repeat the % Building analysis for the complete set of convex hull polygons formed from all groups of hatch lines.
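The hull step is straightforward with Shapely (one of the libraries the deck lists); a minimal sketch with illustrative coordinates:

```python
from shapely.geometry import MultiLineString

# A group of parallel, spatially proximate hatch lines
# (coordinates are illustrative map units, not real map data).
hatch_group = MultiLineString([
    [(0, 0), (10, 10)],
    [(3, 0), (13, 10)],
    [(6, 0), (16, 10)],
])

# The convex hull of the group stands in for the building footprint.
footprint = hatch_group.convex_hull
print(footprint.wkt)
```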
27. Output data products
The end product would be a grid describing % building (built-up) across each 100m x 100m standard grid square in ca1900. Data could be aggregated upwards, e.g. to produce a 1km x 1km grid. Using the same sampling grid we could compute the same measure for modern data (I've used OS MasterMap but other OS OpenData could be used). We could then calculate +/- change between ca1900 and today, or other time periods for which historic maps are available.
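The % built-up measure per grid square can be sketched as a polygon intersection; the footprint coordinates and the small 2x2 grid extent are illustrative:

```python
from shapely.geometry import Polygon, box

# One pseudo building footprint (e.g. a hull from a hatch-line group).
building = Polygon([(30, 30), (80, 30), (80, 120), (30, 120)])

# A 100m x 100m sampling grid over a 200m x 200m extent.
cell_size = 100
for i in range(2):
    for j in range(2):
        cell = box(i * cell_size, j * cell_size,
                   (i + 1) * cell_size, (j + 1) * cell_size)
        pct = 100.0 * cell.intersection(building).area / cell.area
        print(f"cell ({i},{j}): {pct:.1f}% built-up")
```

The same loop applied to OS MasterMap footprints on the identical grid gives the modern measure, so per-cell change is a simple subtraction.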
28. Scaling up
Process repeated for the whole of Edinburgh using all 19 NLS map sheets – the urban form of Edinburgh ca1900.
29. The same 100m x 100m grid across Edinburgh as a whole in ca1900.