Pregel: A System
For Large Scale
Graph Processing
Grzegorz Malewicz, Matthew H. Austern, Aart
J. C. Bik, James C. Dehnert, Ilan Horn,
Naty Leiser, and Grzegorz Czajkowski
Presented By
Riyad Parvez
Real World Graph Processing
● Web graph:
○ PageRank (influential vertices)
● Social graph:
○ Popularity rank, personalized rank, shortest paths, shared
connections, clustering (communities), propagation
● Advertisement:
○ Target ads
● Communication network:
○ Maximum flow, transportation routes
● Biology network:
○ Protein interactions
● Pathology network:
○ Find anomalies
Graph Processing is Different
● Poor locality of memory access.
● Very little work done per vertex.
● The degree of parallelism changes over the
course of execution.
Why not MapReduce (MR)
● Graph algorithms can be expressed as a
series of MR jobs.
● Data must be reloaded and reprocessed at
each iteration, wasting I/O, network
bandwidth, and processor resources.
● Needs an extra MR job for each iteration just
to detect the termination condition.
● MR isn’t a good fit for dynamic dependency
graphs.
Pregel
● Developed at Google
● Modeled after Bulk Synchronous Parallel (BSP)
computing model
● Distributed message passing system
● Computes in vertex-centric fashion
○ “Think like a vertex”
● Scalable and fault tolerant
● Influenced systems like Apache Giraph and
other BSP distributed systems
Bulk Synchronous Parallel
● Introduced by Leslie Valiant in 1990.
● Computations consist of a sequence of iterations,
called supersteps.
● During a superstep, the framework calls a user-defined
compute function on every vertex.
● The compute function specifies behaviour at a single
vertex V in a single superstep S.
● Supersteps end with barrier synchronization.
● All communication is from superstep S to superstep
S+1.
● Performance of the model is predictable.
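A minimal sketch of this superstep structure in Python (the Vertex class and run_superstep driver below are hypothetical, not Pregel's C++ API): compute runs on every active vertex, and messages produced in superstep S are held at the barrier and only become the inbox of superstep S+1.

```python
from collections import defaultdict

class Vertex:
    """Minimal vertex record for this sketch (not Pregel's C++ class)."""
    def __init__(self, vid, value=0.0):
        self.id, self.value, self.active = vid, value, True

def run_superstep(vertices, inbox, compute):
    """One BSP superstep: call compute() on each active vertex, hold all
    outgoing messages back until the barrier, then hand them to S+1."""
    outbox = defaultdict(list)
    for v in vertices.values():
        msgs = inbox.get(v.id, [])
        if v.active or msgs:          # an incoming message reactivates a vertex
            v.active = True
            compute(v, msgs, outbox)  # compute may do outbox[dest].append(msg)
    return outbox                     # becomes the inbox of superstep S+1
```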
Bulk Synchronous Parallel
Source: https://cs.uwaterloo.ca/~kdaudjee/courses/cs848/slides/jenny.pdf
Bulk Synchronous Parallel
Source: https://cs.uwaterloo.ca/~kdaudjee/courses/cs848/slides/jenny.pdf
Pregel Computation Model
● Computation on locally stored data.
● Computations are in-memory.
● Terminates when all vertices are inactive and there are
no messages in transit.
● Vertices are distributed among workers using hash(ID)
mod N, where N is the number of partitions (default
partitioning)
● Barrier synchronization
➢ Wait and synchronize before the end of superstep
➢ Fast processors can be delayed by slow ones
● Persistent data is stored on a distributed storage system
(GFS/BigTable)
● Temporary data is stored on disk.
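The default partitioning mentioned above is simply a hash of the vertex ID modulo the number of partitions; a one-line sketch with made-up IDs (a deterministic hash is used so the mapping is stable across runs):

```python
import zlib

def partition(vertex_id, num_partitions):
    # Default Pregel partitioning: hash(ID) mod N, independent of graph structure.
    return zlib.crc32(str(vertex_id).encode()) % num_partitions

# With N = 4 partitions, vertex "v42" always maps to the same partition:
print(partition("v42", 4))
```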
C++ API
Source: http://kowshik.github.io/JPregel/pregel_paper.pdf
Vertex
● Can modify its own value and the values of its outgoing edges.
● Can send an arbitrary number of messages to any other
vertex.
● Receives messages sent in the previous superstep.
● Can mutate the local graph topology.
● All active vertices participate in the computation in a
superstep.
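A rough Python analogue of the per-vertex API these bullets describe (the real interface is the C++ Vertex template from the paper; the names below are approximations, not the actual API):

```python
class VertexHandle:
    """What user code can touch while computing one vertex in one superstep."""
    def __init__(self, vid, value, out_edges, outbox):
        self.id, self.value, self.out_edges = vid, value, out_edges
        self._outbox, self.halted = outbox, False

    def send_message_to(self, dest_id, msg):
        # May target any vertex whose ID is known, not only out-neighbours.
        self._outbox.setdefault(dest_id, []).append(msg)

    def vote_to_halt(self):
        # Deactivate this vertex until a message reactivates it.
        self.halted = True
```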
Vertex State Machine
● Initially, every vertex is
active.
● A vertex can deactivate
itself by voting to halt.
● Deactivated vertices
don't participate in
computation.
● Vertices are reactivated
upon receiving a
message.
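The same state machine written as a driver loop (a hypothetical runner, not Pregel code): halted vertices are skipped unless a message arrives for them, and the run ends once every vertex is inactive and no messages are in flight.

```python
def run(vertices, compute, max_supersteps=50):
    inbox = {}                                        # vertex id -> list of messages
    for superstep in range(max_supersteps):
        outbox = {}
        for v in vertices.values():
            msgs = inbox.get(v.id, [])
            if msgs:
                v.active = True                       # message delivery reactivates
            if v.active:
                compute(v, msgs, outbox, superstep)   # may set v.active = False
        inbox = outbox
        if not inbox and not any(v.active for v in vertices.values()):
            break                                     # all halted, nothing in flight
    return vertices
```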
Example
Messages
● Consist of a message value and the destination
vertex ID.
● Typically sent along outgoing edges.
● Can be sent to any vertex whose identifier is
known.
● Are only available to the receiver at the beginning of
the next superstep.
● Guaranteed to be delivered.
● Guaranteed not to be duplicated.
● May be delivered out of order.
Combiner
● Sending messages incurs overhead.
● The system can call Combine() to merge several messages
intended for a vertex V into a single message containing
their combined value.
● No guarantees about which messages are combined or in
what order.
● Should only be enabled when the combining operation is
commutative and associative.
● Not enabled by default.
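For instance, when a vertex only needs the sum of its incoming messages (as in PageRank), a sum combiner can collapse many messages bound for the same vertex into one. A sketch, valid only because addition is commutative and associative, as the slide requires:

```python
def sum_combiner(messages):
    # The system may combine any subset of a vertex's pending messages, in any
    # order, so the operation must be commutative and associative.
    return [sum(messages)] if messages else []

# e.g. three contributions shrink to a single message:
print(sum_combiner([1, 2, 3]))   # -> [6]
```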
Combiner
Source: https://wiki.engr.illinois.edu/download/attachments/188588798/pregel.pdf?version=1
Aggregator
● Mechanism for global communication,
monitoring, and global state.
○ Each vertex provides a value to an aggregator in superstep S.
○ Values are combined using a reduction operator.
○ The resulting value is available to all vertices in
superstep S+1.
● A new aggregator is defined by subclassing the
Aggregator class.
● The reduction operator should be associative and
commutative.
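A sketch of the aggregator mechanism (illustrative Python, not Pregel's C++ Aggregator interface): vertices contribute values during superstep S, the values are reduced at the barrier with a commutative, associative operator, and the single result is readable by every vertex in superstep S+1.

```python
class MaxAggregator:
    """Tracks the global maximum of the values contributed in a superstep."""
    def __init__(self):
        self.current = None       # value visible to all vertices this superstep
        self._pending = []        # contributions gathered during this superstep

    def contribute(self, value):
        self._pending.append(value)

    def reduce_at_barrier(self):
        # Runs once per superstep at the barrier; result becomes visible in S+1.
        self.current = max(self._pending) if self._pending else self.current
        self._pending = []
```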
Reduction (Aggregator)
Source: https://wiki.engr.illinois.edu/download/attachments/188588798/pregel.pdf?version=1
Topology Mutation
● Vertices can dynamically add or remove vertices and edges.
● Mutations and conflict resolution take place at the barrier.
● Exception: local mutations (a vertex changing its own
outgoing edges or removing itself) take effect immediately.
● Mutations are applied in this order:
○ Edge deletion
○ Vertex deletion
○ Vertex addition
○ Edge addition
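A sketch of how requests gathered during a superstep could be resolved at the barrier in exactly that order (the request tuples and the adjacency-set graph representation are invented for illustration):

```python
def apply_mutations(graph, requests):
    """graph: {vertex_id: set of out-neighbour ids};
    requests: list of (kind, payload) collected during the superstep."""
    order = ["remove_edge", "remove_vertex", "add_vertex", "add_edge"]
    for kind in order:                        # deletions first, additions last
        for _, payload in (r for r in requests if r[0] == kind):
            if kind == "remove_edge":
                src, dst = payload
                graph.get(src, set()).discard(dst)
            elif kind == "remove_vertex":
                graph.pop(payload, None)
            elif kind == "add_vertex":
                graph.setdefault(payload, set())
            elif kind == "add_edge":
                src, dst = payload
                graph.setdefault(src, set()).add(dst)
```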
Master
● Partitions the input and assigns one or more partitions
to each worker.
● Keeps a list of all workers known to be alive, including:
○ Each worker's unique identifier
○ Its addressing information
○ Which portion of the graph is assigned to it
● Coordinates barrier synchronization, i.e., supersteps.
● Provides fault tolerance via checkpointing, failure detection,
and reassignment.
● Maintains statistics of the progress of computation and
the state of the graph.
● Doesn’t participate in computation.
● Not responsible for load-balancing.
Worker
● Responsible for computation of assigned
vertices.
● Keeps two copies of the active-vertex flags and
incoming message queues:
○ One for the current superstep
○ One for the next superstep
● Places messages to local vertices directly in the
recipient's incoming message queue.
● Buffers messages to remote vertices.
○ Flushes each buffer asynchronously as a single network
message once a threshold is reached.
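A sketch of this local-versus-remote message path (all names, fields, and the threshold below are illustrative, not the real implementation): local messages go straight into the target's next-superstep queue, remote ones are buffered per destination worker and flushed as one batch.

```python
FLUSH_THRESHOLD = 1000   # illustrative; batching amortises per-message overhead

def route_message(worker, dest_vertex, msg):
    if dest_vertex in worker.local_vertices:
        # Local delivery: no network hop, placed directly in the next-superstep queue.
        worker.next_inbox.setdefault(dest_vertex, []).append(msg)
    else:
        peer = worker.owner_of(dest_vertex)          # worker that owns the vertex
        buffer = worker.remote_buffers.setdefault(peer, [])
        buffer.append((dest_vertex, msg))
        if len(buffer) >= FLUSH_THRESHOLD:
            worker.send_async(peer, buffer)          # one network message per batch
            worker.remote_buffers[peer] = []
```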
Fault Tolerance
● Checkpoint at the beginning of superstep.
○ Master saves aggregators.
○ Workers save vertices, edges and incoming messages.
● Worker failures are detected via regular ping messages from the master.
● Recovery
○ The master reassigns the failed worker's partitions to other available
workers.
○ All workers restart from superstep S by loading state from the
most recent available checkpoint.
● Confined recovery: recovery is limited to the lost
partitions.
○ Workers also log their outgoing messages.
○ Lost partitions are recomputed using logged messages from healthy
partitions and recalculated messages from recovering partitions.
PageRank
Source: http://kowshik.github.io/JPregel/pregel_paper.pdf
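The slide reproduces the paper's C++ PageRankVertex example; below is a rough Python transcription of the same per-vertex logic (runner, vote_to_halt, and out_edges follow the hypothetical sketches earlier in these notes; vertex.value is assumed to start at 1 / num_vertices, as in the paper):

```python
def pagerank_compute(vertex, messages, outbox, superstep, num_vertices):
    # Superstep 0 uses the initial value; afterwards, fold in the received ranks.
    if superstep >= 1:
        vertex.value = 0.15 / num_vertices + 0.85 * sum(messages)
    if superstep < 30:
        # Assumes every vertex has outgoing edges, as in the paper's example.
        share = vertex.value / len(vertex.out_edges)
        for dest in vertex.out_edges:
            outbox.setdefault(dest, []).append(share)
    else:
        vertex.vote_to_halt()        # fixed 30 supersteps, as in the paper
```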
Performance
Experimental Setup:
● Hardware: A cluster of 300 multi-core commodity PCs.
● Algorithm: single-source shortest paths (SSSP) with unit-weight edges
(sketched after this list).
○ All-pairs shortest paths is impractical because it requires O(|V|²) storage.
● Measures scalability w.r.t. both the number of workers
and the number of vertices.
● Data collected for:
○ Binary trees (to test scalability).
○ Log-normal random graphs (to study performance in a realistic
setting).
● No checkpointing.
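The benchmarked algorithm is the paper's SSSP vertex program; a rough Python transcription under the same sketch conventions used above (out_edges holds (destination, weight) pairs; weights are 1 in the benchmark), together with the min combiner the paper pairs with it:

```python
import math

def sssp_compute(vertex, messages, outbox, superstep, is_source):
    if superstep == 0:
        vertex.value = math.inf                  # best distance known so far
    candidate = 0.0 if is_source else math.inf
    candidate = min([candidate] + list(messages))
    if candidate < vertex.value:                 # shorter path found: relax and notify
        vertex.value = candidate
        for dest, weight in vertex.out_edges:    # weight is 1 in the benchmark
            outbox.setdefault(dest, []).append(candidate + weight)
    vertex.vote_to_halt()                        # reactivated if a shorter distance arrives

def min_combiner(messages):
    # Only the smallest candidate distance matters, so messages can be merged.
    return [min(messages)] if messages else []
```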
Performance
Source: http://kowshik.github.io/JPregel/pregel_paper.pdf
Performance
Source: http://kowshik.github.io/JPregel/pregel_paper.pdf
Performance
Source: http://kowshik.github.io/JPregel/pregel_paper.pdf
Summary
● Distributed system for large scale graph
processing.
● Vertex-centric BSP model
○ Message passing API
○ A sequence of supersteps
○ Barrier synchronization
● Coarse-grained parallelism
● Fault tolerance by checkpointing
● Runtime performance scales nearly linearly with the
size of the graph (when CPU-bound)
Discussion
● Fault tolerance for the master is not described in
the paper (presumably replication or a Paxos-style election).
● Static partitioning! What happens if a worker
is too slow?
● Dynamic partitioning would incur network overhead for
reassigning vertices and their state.
● Good for sparse graphs, but the communication
overhead of dense graphs can bring the
system to its knees.
