Relational approach with LiDAR data with PostgreSQL
1. Manage LiDAR data with PostgreSQL: is it possible?
2ndQuadrant Italia – Giuseppe Broccolo – giuseppe.broccolo@2ndquadrant.it
FOSS4G.NA 2016, Raleigh, NC – 4th May 2016
www.2ndquadrant.com
2. Is the relational approach valid for LiDAR?
• The relational approach to the data:
  – data is organized in tuples
  – tuples are part of tables
  – tables are related to each other through constraints (PK, FK, etc.)
• If the number of tuples grows:
  – indexes reduce the complexity of a search to ~O(log N)...
  – ...but they must fit in RAM!
  – OTHERWISE: the relational approach starts to fail...
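The ~O(log N) search cost above can be sketched with a plain B-tree index; table and column names here are hypothetical, not from the slides:

```sql
-- Hypothetical table: a B-tree index turns a range search
-- from a sequential scan into an ~O(log N) index lookup.
CREATE TABLE survey_points (
    id bigserial PRIMARY KEY,  -- PK constraint = implicit unique index
    x  double precision,
    y  double precision,
    z  double precision
);

CREATE INDEX survey_points_z_idx ON survey_points USING btree (z);

-- EXPLAIN shows whether the planner actually uses the index:
EXPLAIN SELECT * FROM survey_points WHERE z BETWEEN 10.0 AND 20.0;
```

As the next slides show, this only pays off while the index fits in RAM.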
3. PostgreSQL & LiDAR data
• PostgreSQL is a relational DBMS with an extension for LiDAR data:
  pg_pointcloud
• Part of the OpenGeo suite, completely compatible with PostGIS
  – two new datatypes: pcpoint, pcpatch (a compressed set of points)
• N points (with all attributes from the survey) → 1 patch → 1 record
  – compatible with PDAL drivers to import data directly from .las
[point layout diagram: 32b | 32b | 32b | 16b]
https://github.com/pgpointcloud/pointcloud
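A minimal sketch of the two datatypes in use, assuming a point schema with pcid = 1 has already been registered in the pointcloud_formats table, and a hypothetical staging table raw_points:

```sql
CREATE EXTENSION pointcloud;

CREATE TABLE patches (
    id    serial PRIMARY KEY,
    patch pcpatch(1)          -- a compressed set of pcpoints
);

-- N points → 1 patch → 1 record:
INSERT INTO patches (patch)
SELECT PC_Patch(PC_MakePoint(1, ARRAY[x, y, z, intensity]))
  FROM raw_points;            -- hypothetical staging table

-- unpack a patch back into its points:
SELECT PC_AsText(PC_Explode(patch)) FROM patches WHERE id = 1;
```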
4. Relational approach to LiDAR data with PG
• GiST indexing in PostGIS
• GiST performance:
  – hardware: 1TB RAID1 storage, 16GB RAM, 8 CPUs @3.3GHz, PostgreSQL 9.3
  – index size ~O(table size)
  – the index was used:
    • up to ~300M points in bbox inclusion searches
    • up to ~10M points in kNN searches
• LiDAR dataset size: ~O(10⁹÷10¹¹) points → only a few % can be properly indexed!
http://www.slideshare.net/GiuseppeBroccolo/gbroccolo-foss4-geugeodbindex
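The GiST index behind these measurements is not shown on the slide; a sketch of how such an index can be built on patch envelopes, via the pointcloud_postgis cast (table and column names as used later in the deck, envelope coordinates purely illustrative):

```sql
CREATE EXTENSION pointcloud_postgis;  -- pcpatch ↔ geometry glue

-- GiST index over the 2D envelope of each patch:
CREATE INDEX ahn2_patch_gist ON ahn2 USING gist (Geometry(patch));

-- bbox inclusion search that can use the index (EPSG:28992):
SELECT count(*)
  FROM ahn2
 WHERE Geometry(patch) && ST_MakeEnvelope(120000, 480000, 121000, 481000, 28992);
```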
5. A new index in PG: Block Range INdexing
[diagram: table pages grouped in 8kB blocks]
• GiST: index node → row
• BRIN: index node → block
  – less specific than GiST!
  – really small!
• data must be physically sorted on disk!
(S. Riggs, A. Herrera)
6. BRIN support for PostGIS datatypes
• G. Broccolo, J. Rouhaud, R. Dunklau
• presented 3rd May at the PGDay @ FOSS4G.NA
• NO kNN SUPPORT!!
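A sketch of creating such a BRIN index on the patch envelopes (names are illustrative; pages_per_range is shown at its default value):

```sql
-- BRIN index over the patch envelopes: one summary tuple per
-- range of heap pages, hence the tiny index size.
CREATE INDEX ahn2_patch_brin
    ON ahn2
 USING brin (Geometry(patch))
  WITH (pages_per_range = 128);  -- the default range size
```

Note that the BRIN operator classes for PostGIS geometry landed in PostGIS 2.3; the index only helps if rows that are close in space are also close on disk.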
7. The LiDAR dataset: the ahn2 project
• 3D point cloud, coverage: almost the whole Netherlands
  – EPSG:28992, ~8 points/m²
• 1.6TB, ~250G points in ~560M patches (compression: ~10x)
  – PDAL driver – filter.chipper
• available RAM: 16GB
• the point structure:

  field:  X    Y    Z    scan  LAS  time  RGB  chipper
  size:   32b  32b  32b  40b   16b  64b   48b  32b

  the "indexed" part (X, Y, Z) can be converted to a PostGIS datatype
(thanks to Tom Van Tilburg)
8. Typical searches on ahn2 - x3d_viewer
• intersection with a polygon (PostGIS)
  – the part that lasts longest – indexes are needed here!
• patch "explosion" + NN sorting (pg_pointcloud + PostGIS)
• constrained Delaunay Triangulation (SFCGAL)
9. All just in the DB...
WITH patches AS (
  SELECT patches FROM ahn2
   WHERE patches && ST_GeomFromText('POLYGON(...)')
), points AS (
  SELECT ST_Explode(patches) AS points
    FROM patches
), sorted_points AS (
  SELECT points,
         (ST_DumpPoints(ST_GeomFromText('POLYGON(...)'))).geom AS poly_pt
    FROM points ORDER BY points <#> poly_pt LIMIT 1
), sel AS (
  SELECT points FROM sorted_points
   WHERE points && ST_GeomFromText('POLYGON(...)')
)
SELECT ST_Dump(ST_Triangulate2DZ(ST_Collect(points))) FROM sel;
10. ...and with just one query!
(the same query as on the previous slide)
11. patches && polygons - GiST performance
index building: ~2 d       index size: ~26GB
searches based on GiST:

  polygon size  timing
  ~O(m)         ~40ms
  ~O(km)        ~50s
  ~O(10km)      hours

the index is not contained in RAM anymore (~5G points → ~3% of the dataset)
12. patches && polygons - BRIN performance
index building: ~4 h       index size: ~15MB
searches based on BRIN:

  polygon size  timing
  ~O(m)         ~150s
13. patches && polygons - BRIN performance
(same BRIN figures as the previous slide)
How data was inserted:
  .las files → chipper (PDAL driver, 6 parallel processes) → ahn2
14. patches && polygons - BRIN performance
CREATE INDEX patch_geohash ON ahn2_subset
  USING btree (ST_GeoHash(ST_Transform(Geometry(patch), 4326), 20));
CLUSTER ahn2_subset USING patch_geohash;

~150s → ~800ms [radius ~O(m)]
• x20 slower than GiST searches
• x200 faster than Seq searches
  – (x1000 faster in ~O(100m) searches)
(http://geohash.org/)
15. Is the drop in performance acceptable?
BRIN searches                 = x20 GiST searches
patch explosion + NN sorting  = x10÷x100 GiST searches
constrained triangulation     = x10÷x100 GiST searches
16. Is the drop in performance acceptable?
(repeat of the previous slide)
17. Conclusions
Can the relational approach to LiDAR data be valid with PostgreSQL?
– Yes!
• GiST indexes are fast, but can manage only a very small portion of the dataset
• BRIN indexes are considerably slower, but generally do not represent a real bottleneck
  – make sure that the data on disk keeps the same ordering as the .las files
18. ~$ whoami
Giuseppe Broccolo, PhD
PostgreSQL & PostGIS consultant
@giubro  gbroccolo7
giuseppe.broccolo@2ndquadrant.it
gbroccolo  gemini__81