Repository for data crawled from multiple social networks
Konstantinos Christofilos
OK. What is the problem?
In order to get data from each service, you have to speak its language (API).
What can we do about that?
We can create a repository of mixed service data and query it to produce more complex results.
How are we going to do that?
Step 1 – Generate endpoints
Step 2 – Load data
Step 1 – Generate endpoints
Importer interface
Example (Facebook page):
The command that generates endpoints takes as input a text file with one name per line.
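The slides do not include the importer's code, so the following is only a minimal sketch of what such an Importer interface and the endpoint-generation command could look like; the class, method and file names are illustrative assumptions, not the project's actual API.

```python
# Minimal sketch (hypothetical names) of Step 1: read one name per line and
# ask every registered importer (Facebook, Twitter, Instagram) for matching
# account endpoints.
from abc import ABC, abstractmethod


class Importer(ABC):
    """One implementation per social network."""

    @abstractmethod
    def endpoints_for(self, name: str) -> list[str]:
        """Return the API endpoints of accounts matching the given name."""


def generate_endpoints(names_file: str, importers: list[Importer]) -> list[str]:
    endpoints: list[str] = []
    with open(names_file, encoding="utf-8") as fh:
        for line in fh:
            name = line.strip()
            if not name:
                continue  # skip blank lines
            for importer in importers:
                endpoints.extend(importer.endpoints_for(name))
    return endpoints
```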
Step 2 – Load data
Input
Output
Step 2 – Load data (Input)
Example (Facebook page)
Step 2 – Load data (Output)
Neo4j
Apache Jena
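The deck does not show the loader's code either, so here is a rough sketch of how the Output side could be abstracted: one common interface, one implementation per storage engine. The Apache Jena backend below assumes a Fuseki server exposing a SPARQL Update endpoint at a made-up local URL; a Neo4j-backed implementation would plug into the same interface (a property graph sketch appears later in the deck).

```python
# Rough sketch (not the project's code) of a pluggable Output: every storage
# engine implements the same small interface, so new outputs can be added
# without touching the rest of the pipeline.
from abc import ABC, abstractmethod

import requests  # used to talk to a hypothetical Fuseki SPARQL Update endpoint


class Output(ABC):
    @abstractmethod
    def save_mention(self, entity: str, category: str, source_url: str) -> None:
        """Persist one recognized entity mention."""


class JenaOutput(Output):
    """Writes triples to Apache Jena through a Fuseki SPARQL Update endpoint."""

    def __init__(self, update_url: str = "http://localhost:3030/crawl/update"):
        self.update_url = update_url  # assumed Fuseki dataset named 'crawl'

    def save_mention(self, entity, category, source_url):
        # Naive string building for brevity; real code should escape the values.
        update = (
            'PREFIX ex: <http://example.org/> '
            'INSERT DATA { <%s> ex:mentions [ ex:name "%s" ; ex:category "%s" ] }'
        ) % (source_url, entity, category)
        requests.post(self.update_url, data={"update": update}).raise_for_status()
```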
Example
A simple example run on three names: Kostas Christofilos, Iannis Kotidis, Vasilis Spiropoulos
Example
Neo4j database size: 240 KB
Apache Jena database size: 75.7 KB
Scaling?
In order to be easily scaled, the application is designed to handle Inputs and Outputs as APIs.
That approach enables horizontal scaling.
Process
Gather Data
• We took the names of the world’s greatest leaders from Fortune magazine:
http://fortune.com/worlds-greatest-leaders/
• The application queried the APIs for accounts related to these names
• It built a list of endpoints for those names
• Data was fetched from those endpoints
• Entities were recognized
• Data was saved into two different databases (Apache Jena, Neo4j); a rough sketch of this flow follows the list
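To make the flow above explicit, here is a runnable sketch of the pipeline; every helper is a stub standing in for the real API clients, the Stanford NER step and the database writers, and none of the names come from the project's code.

```python
# Stubbed sketch of the crawl pipeline: names -> endpoints -> data -> entities -> storage.
from typing import Iterable


def read_names(path: str) -> list[str]:
    with open(path, encoding="utf-8") as fh:
        return [line.strip() for line in fh if line.strip()]


def find_endpoints(name: str) -> list[str]:
    # Stand-in for querying the Facebook/Twitter/Instagram APIs for accounts.
    return [f"https://api.example.com/accounts?q={name}"]


def fetch(endpoint: str) -> list[str]:
    # Stand-in for fetching posts/tweets/captions from an endpoint.
    return [f"dummy post fetched from {endpoint}"]


def recognize_entities(text: str) -> list[tuple[str, str]]:
    # Stand-in for the Stanford NER step (PERSON / ORGANIZATION / LOCATION).
    return []


def save(text: str, entities: Iterable[tuple[str, str]]) -> None:
    # Stand-in for writing to both Neo4j and Apache Jena.
    pass


def run_pipeline(names_file: str) -> None:
    for name in read_names(names_file):                  # 1. read the name list
        for endpoint in find_endpoints(name):            # 2. generate endpoints
            for document in fetch(endpoint):             # 3. fetch data
                entities = recognize_entities(document)  # 4. recognize entities
                save(document, entities)                 # 5. store in both databases
```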
Process
The names that were used are the following:
Jeff Bezos, Angela Merkel, Aung San Suu Kyi, Pope Francis, Tim Cook,
John Legend, Christina Figueres, Paul Ryan, Ruth Bader Ginsburg,
Sheikh Hasina, Nick Saban, Huateng "Pony" Ma, Sergio Moro, Bono,
Stephen Curry, Steve Kerr, Bryan Stevenson, Nikki Haley, Lin-Manuel
Miranda, Marvin Ellison, Reshma Saujani, Larry Fink, Scott Kelly,
Mikhail Kornienko, David Miliband, Anna Maria Chavez, Carla Hayden,
Maurizio Macri, Alicia Garza, Patrisse Cullors, Opal Tometi, Chai Jing,
Moncef Slaoui, John Oliver, Marc Edwards, Arthur Brooks, Rosie Batty,
Kristen Griest, Shaye Haver, Denis Mukwege, Christine Lagarde, Marc
Benioff, Gina Raimondo, Amina Mohammed, Domenico Lucano, Melinda
Gates, Susan Desmond-Hellman, Arvind Kejriwal, Jorge Ramos, Michael
Froman, Mina Guli, Ramon Mendez, Bright Simons, Justin Trudeau,
Clare Rewcastle Brown, Tshering Tobgay
Process
After the application parsed the previous list of names, it produced more than 11,000 endpoints on Facebook, Instagram and Twitter, with the following distribution.
Names   Facebook Page Endpoints   Twitter Endpoints   Instagram Endpoints   Total Endpoints
56      437                       866                 9903                  11206
Process
Parsing those endpoints on a single workstation* took about 14 days, covering the period 2016-06-04 to 2016-06-18.
Most of the time was consumed by the entity recognition process.
*Workstation specs: AMD FX 2-core CPU, 4 GB RAM, 120 GB SSD, Linux OS, with 5 concurrent processes of the application running
Results
The application ran a single pass over the generated endpoints, which took about 14 days on a single workstation and generated 126,395 nodes (126,395 nodes / 14 days / 1 machine ≈ 9,028 nodes per day per machine).
Endpoints   Machines   Generated Nodes   Time      Average nodes/day/machine
11206       1          126395            14 days   9028
Results
[Chart: Data import time distribution (empty database) – share of processing time spent on data import (%) vs. NER (%) for Neo4j and Apache Jena]
Results
[Chart: Data import time distribution (100,000+ nodes) – share of processing time spent on data import (%) vs. NER (%) for Neo4j and Apache Jena]
Results
[Chart: Person mentions – most frequently mentioned persons: Jesus, Lebron, Stephen, Steph, Curry (scale 0–25,000 mentions)]
Results
[Chart: Organization mentions – most frequently mentioned organizations: Padre, GSW, Santo, Cavs, NBA (scale 0–8,000 mentions)]
Results
[Chart: Location mentions – most frequently mentioned locations: Jordan, America, Hermosa, Venezuela, Cleveland (scale 0–4,000 mentions)]
Conclusion
• Data are generated in vast amounts every moment.
• We created an approach for linking heterogeneous data in a single repository.
• The generated data can be analyzed to produce combined results.
• Patterns can be identified in that repository.
Future extensions
• New Inputs can be implemented (new APIs)
• New Outputs can be implemented (new storage engines)
• The name list can be saved in a database and accessed from all nodes; it is currently a local file
• A queue for the entity recognizer can be implemented to remove the blocking code
• Centralized logging for cluster monitoring; logs currently go to STDOUT
REST APIs
• REST stands for Representational State Transfer
• REST ignores the details of component implementation
• REST is an architectural style that enables applications to communicate without knowing each other's underlying technology
• REST APIs promote easy reusability
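As a concrete illustration (not taken from the slides), consuming a REST API boils down to a plain HTTP request against a URL that identifies a resource; the endpoint below is purely hypothetical.

```python
# Hypothetical REST call: the client only needs HTTP and JSON, not the
# service's internal implementation technology.
import requests

response = requests.get(
    "https://api.example.com/v1/pages",      # the URL identifies the resource
    params={"q": "Stephen Curry"},           # query parameters
    headers={"Accept": "application/json"},  # ask for a JSON representation
    timeout=10,
)
response.raise_for_status()
for page in response.json():
    print(page)
```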
Graph Databases
• A graph database is a database type that uses graph structures with nodes, edges and properties to represent and store data.
• Graph databases are based on graph theory and employ nodes, edges and properties
• Nodes represent entities such as people, businesses, accounts, or any other item you might want to keep track of
• Edges, also known as relationships, are the lines that connect nodes to other nodes and represent the relationships between them
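A tiny in-memory illustration of the node/edge/property model, using the networkx library as a stand-in rather than an actual graph database; the data is made up.

```python
# Nodes, edges and properties, illustrated with networkx.
import networkx as nx

g = nx.DiGraph()

# Nodes represent entities and can carry arbitrary properties.
g.add_node("stephen_curry", kind="Person", team="GSW")
g.add_node("gsw", kind="Organization", league="NBA")

# Edges (relationships) connect nodes and can also carry properties.
g.add_edge("stephen_curry", "gsw", relation="PLAYS_FOR")

print(g.nodes["stephen_curry"])         # {'kind': 'Person', 'team': 'GSW'}
print(g.edges["stephen_curry", "gsw"])  # {'relation': 'PLAYS_FOR'}
```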
Graph Databases
Resource Description Framework (RDF)
• RDF is a standard model for data interchange on the Web, specified by the W3C
• The Web is a graph, formed by nodes, edges and relations
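A minimal RDF sketch using the rdflib library (the namespace and URIs are invented for the example): the statement "Stephen Curry plays for GSW" expressed as subject-predicate-object triples.

```python
# Building and serializing a few RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # made-up namespace for this sketch

g = Graph()
curry = URIRef("http://example.org/person/stephen_curry")
g.add((curry, RDF.type, FOAF.Person))
g.add((curry, FOAF.name, Literal("Stephen Curry")))
g.add((curry, EX.playsFor, URIRef("http://example.org/org/gsw")))

print(g.serialize(format="turtle"))
```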
Graph Databases
Property Graphs
• Property graph databases are graph databases that contain connected entities, which can hold any number of attributes (properties)
• Nodes can be tagged with labels representing their different roles
• Labels may also serve to attach metadata to certain nodes
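To make the label/property idea concrete, here is a hedged sketch using the official Neo4j Python driver and Cypher; the connection details, labels and relationship type are placeholders, not the project's actual schema.

```python
# A labeled property graph in Neo4j: nodes get labels (:Person, :Organization)
# and any number of properties; the relationship carries a property as well.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    session.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (o:Organization {name: $org}) "
        "MERGE (p)-[:MENTIONED_WITH {source: $source}]->(o)",
        person="Stephen Curry", org="GSW", source="twitter",
    )

driver.close()
```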
Natural Language Processing (NLP)
• Natural language processing (NLP) is an area of research and
application that explores how computers can be used to understand
and manipulate natural language text or speech to do useful things
• The foundations of NLP lie in a number of disciplines: information science, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, psychology, etc.
• Applications of NLP span a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval, speech recognition, artificial intelligence and expert systems
Natural Language Processing (NLP)
Named Entity Recognition (NER)
Named entity recognition (NER) is a subtask of information extraction
that seeks to locate and classify named entities in text into pre-defined
categories such as the names of persons, organizations, locations,
expressions of time etc. NER systems use linguistic grammar-based
techniques as well as statistical models, i.e. machine learning.
We used the Stanford Named Entity Recognizer, created by the Stanford Natural Language Processing Group.
http://nlp.stanford.edu/software/CRF-NER.shtml
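The slides name the tool but not how it was invoked; one common way to call the Stanford NER from Python is through NLTK's wrapper, assuming the Stanford NER jar and the 3-class CRF model have been downloaded locally (the paths below are placeholders).

```python
# Tagging a sentence with the Stanford NER 3-class model
# (PERSON / ORGANIZATION / LOCATION) via NLTK's wrapper.
from nltk.tag import StanfordNERTagger

tagger = StanfordNERTagger(
    "english.all.3class.distsim.crf.ser.gz",  # CRF model shipped with Stanford NER
    "stanford-ner.jar",                       # path to the Stanford NER jar
)

tokens = "Stephen Curry plays for the Warriors in Cleveland".split()
print(tagger.tag(tokens))
# e.g. [('Stephen', 'PERSON'), ('Curry', 'PERSON'), ..., ('Cleveland', 'LOCATION')]
```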