Big Data and geopositioning
or
it’s bigger on the inside
Jorge López-Malla Matute
Senior Data Engineer
1. Presentation
2. What does “Key Value” mean and why does it matter so much?
3. Why do we need Geopositioning analytics?
4. How can we merge these two worlds?
5. Q&A
Index
Presentation
SKILLS
JORGE LÓPEZ-MALLA
@jorgelopezmalla
Big Data architect, Spark certificate
number 13, from La Rioja and near-sighted.
After years of trying to solve modern
problems with traditional technologies,
I tried Big Data and saw that it
solved them!
What we do
Geoblink is the ultimate Location
Intelligence solution that helps companies
of any size make strategic, location-related
decisions on an easy-to-use platform
COLLECTING
DATA
We combine our
client’s internal data
with external data and
Geoblink’s proprietary
location data
TRANSFORMING
DATA
We process and analyze
data using advanced
analytics (big data) and
artificial intelligence
techniques
PROVIDING
INSIGHTS
We present insights on a
user-friendly platform to
help companies make
powerful, data-driven
decisions
How we do it
What does “Key Value” mean
and why does it matter so much?
● Big Data was born in the early 2000s
● Data is no longer small enough to fit in a single commodity
machine
● Data grows exponentially
● Vertical scaling is both dangerous and expensive
A little bit of history
● Solutions?
[Diagram: data volumes (3G, 1G, 15G, 15G, 6G, 12G, 12G, 12G) distributed across several commodity machines]
Processing & Storing
● Choosing a proper key is critical not only in a storage system but
also in distributed processing frameworks
● Spark, probably the most important distributed processing
framework right now, is no exception
● Keys matter in both streaming and batch processing
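The effect of key choice on a distributed framework can be sketched without Spark at all. The snippet below (plain Python, a hypothetical hash partitioner standing in for Spark's) shows how a coarse, skewed key sends almost all records to one worker, leaving the rest idle:

```python
# Minimal sketch of why key choice matters in distributed processing:
# records are routed to workers by hash(key) % n_workers, so a skewed
# key creates a "hot partition" while other workers sit idle.
from collections import Counter

N_WORKERS = 4

def partition_of(key: str) -> int:
    """Route a key to a worker, as a hash partitioner would."""
    # Deterministic toy hash so the sketch is reproducible.
    return sum(ord(c) for c in key) % N_WORKERS

# 900 records keyed "ES" and 100 keyed "DE": country is too coarse a key.
records = [("ES", i) for i in range(900)] + [("DE", i) for i in range(100)]

load = Counter(partition_of(key) for key, _ in records)
# One worker gets 900 records, another 100, and two get nothing;
# a finer-grained key would spread the load evenly.
print(load)
```

The same skew problem reappears later with geospatial keys: a key that maps too many points to the same value defeats the partitioner.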
Why do we need Geopositioning
analytics?
The Five Ws are questions whose answers are considered basic in
information gathering or problem solving
● Who was involved?
● What happened?
● Why did that happen?
● When did it take place?
● Where did it take place?
Five W
● Digital society needs immediate reactions
● Slow responses are not useful anymore
● Big Data allows us to answer 4 of the 5 W questions
● The geospatial problem is not just an enterprise problem
The where matters
Real world
Business world
How can we merge these two worlds?
● Knowing both the problem to solve and the technology should be
enough
● Obtaining the proper key is the “key” in every Big Data project
● In geospatial projects it is fundamental to obtain the results
exactly where we want them
● With this in mind, we should find the key for each record of our
dataset. Easy … or not?
Merging worlds
It’s bigger on the inside
The real problem
● Remember: we should assign a key to a value using as little
logic as possible
● All geospatial logic must be understandable by humans
● The intuitive behaviour is to assign each point to a known
geospatial area
The real problem
Intersection
● Each coordinate is not relevant by itself
● To assign each coordinate to a recognizable area we need both
geometries
● So we need to intersect the coordinates with the areas
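The geometric test behind "intersect the coordinates with the areas" is point-in-polygon. As a hedged, framework-free illustration, here is the classic ray-casting version: a horizontal ray from the point crosses the polygon boundary an odd number of times iff the point is inside.

```python
# Ray-casting point-in-polygon: count boundary crossings of a
# horizontal ray going right from (x, y); odd count means inside.
def point_in_polygon(x, y, polygon):
    """polygon: list of (x, y) vertices in order; returns True if inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True: inside
print(point_in_polygon(5, 2, square))  # False: outside
```

Running this per point, per candidate area, is exactly the high-cost operation the next slides try to minimize.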
Intersection
● The intersect operation has a high computational cost
● We need to perform this operation only in cases where an
intersection is probable
● We need to find a key that reduces the operation cost
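One simple way to run the intersection only where it is probable is to key both the points and the areas' bounding boxes by a coarse grid cell, and test only pairs that share a cell. A sketch of the idea, where the 1.0-degree cell size is an assumption:

```python
# Grid-cell pruning: intersect a point with an area only if the point's
# cell is one of the cells touched by the area's bounding box.
CELL = 1.0  # assumed cell size, in degrees

def cell_key(x, y):
    """Coarse grid cell a point falls into; this is the candidate key."""
    return (int(x // CELL), int(y // CELL))

def candidate_cells(bbox):
    """All grid cells touched by an area bounding box (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    return {(cx, cy)
            for cx in range(int(xmin // CELL), int(xmax // CELL) + 1)
            for cy in range(int(ymin // CELL), int(ymax // CELL) + 1)}

# An area spanning cells (0,0)..(1,1); a faraway point is pruned
# without ever running the costly exact intersect.
area_cells = candidate_cells((0.0, 0.0, 1.5, 1.5))
print(cell_key(0.7, 0.3) in area_cells)  # True: worth intersecting
print(cell_key(5.5, 5.5) in area_cells)  # False: pruned cheaply
```

Points sharing a cell with an area still need the exact point-in-polygon test; the grid key only filters out the pairs that cannot possibly intersect.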
● First of all, there is no silver bullet
● The “key” problem is worse in the Geospatial world
● Both storing and processing technologies have similar problems
● Geospatial indexes help a lot
Finding a proper Key
Spatial partitioner
● Some geospatial technologies have been grouped by Eclipse under
LocationTech
● GeoSpark and Magellan are spatial modules for Spark
● Although we only talk about Spark, other processing engines
have this functionality
● We have only tested processing engines, but have also researched
storage technologies
Big Data initiatives
Processing engines
● Both Magellan and GeoSpark offer geospatial functionality
powered by Apache Spark
● Both allow us to use SparkSQL for geospatial queries
● Both optimize the queries in Spark
● GeoSpark’s documentation is better than Magellan’s
Processing engines
GeoSpark optimization
● Spatial joins allow us to assign several geometries to a
geometry
● Remember: intersect operations come with a high cost
● In most use cases you only want a 1:1 mapping
● You can use broadcast variables!
Do you really need a join?
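The broadcast idea can be sketched without a cluster. Below, plain Python stands in for Spark's broadcast variables: when the geometry set is small, ship it whole to every executor and resolve each point with a local lookup instead of a distributed spatial join. Region names and bounding boxes are made-up examples.

```python
# Broadcast-style 1:1 assignment: a small region dataset is copied to
# every worker; each point is resolved with a local containment check.
regions = {  # small broadcast dataset: name -> bbox (xmin, ymin, xmax, ymax)
    "Madrid":  (-4.6, 39.9, -3.0, 41.2),
    "Logroño": (-2.6, 42.3, -2.3, 42.6),
}

def assign_region(x, y, broadcast_regions):
    """Map one point to the first region whose box contains it (1:1)."""
    for name, (xmin, ymin, xmax, ymax) in broadcast_regions.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None  # point falls outside every broadcast region

# Each executor would run this map over its own partition of points:
points = [(-3.70, 40.42), (-2.45, 42.47), (10.0, 50.0)]
print([assign_region(x, y, regions) for x, y in points])
# ['Madrid', 'Logroño', None]
```

This avoids the shuffle a join would trigger, at the price of every worker holding the full region set in memory, which is only viable while that set stays small.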
GeoMesa: Big Data storage
● GeoMesa is an open-source project that allows performing
geospatial operations against several datasources and
processing engines
● It has connectors to visualization tools (like GeoServer)
● We have only tested GeoMesa with HBase, and only as a POC (so far)
[Diagram: Spark Executor-1 and Executor-2 write user records such as (U1, Madrid, Point(x1, y1)), (U2, Logroño, Point(x2, y2)), …, (U2, Logroño, Point(x8, y8)) through GeoMesa into an HBase cluster (Master plus Region Server-1/2/3), where each row is stored keyed by its point, e.g. Point(x1, y1), [U1, Madrid]]
[Diagram: a Java client (Client.java) issues ECLQuery.toCQL(“people between 1000, 200”); GeoMesa translates the query and pushes it down to the HBase cluster (Master plus Region Server-1/2/3), which scans the stored Point(xi, yi), [Ui, city] rows]
[Diagram: a Spark driver reads the same HBase-backed feature through GeoMesa’s Spark integration, with the query served by the HBase Master and Region Server-1/2/3:]

val dataFrame = sparkSession.read
  .format("geomesa")
  .options(dsParams)
  .option("geomesa.feature", "spain")
  .load()
Takeaways
● We really need to deliver the insights at the proper location
● Big Data requires finding a key suitable to our problem
● When dealing with large amounts of data we have to aggregate it
● Spatial indexes are adequate keys, but they are not perfect
● If you only need to assign one geometry to another, a spatial
join is not a good idea
Q&A
★ Job offers:
○ https://www.geoblink.com/work-with-us/
★ Contact:
○ jobs@geoblink.com
○ jlmalla@geoblink.com
Geoposicionamiento Big Data o It's bigger on the inside Commit conf 2018
