A Distributed Graph-Processing Library
Ahmet Emre Aladağ - AGMLab
What is Giraph?
● Library for large-scale graph processing.
● Runs on Apache Hadoop with map jobs
● Bulk Synchronous Parallel (BSP) model
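The BSP model can be illustrated with a tiny plain-Java simulation (illustrative only, not the Giraph API; all class and method names here are made up): each loop iteration is one superstep, messages sent in one superstep become visible after the barrier, and the run halts once no messages are in flight.

```java
import java.util.*;

// Minimal plain-Java BSP sketch (illustrative, not the Giraph API):
// propagate the maximum vertex value along directed edges.
// Assumes every vertex has an entry in `values`.
class BspMaxValue {
    static Map<Integer, Integer> run(Map<Integer, int[]> edges,
                                     Map<Integer, Integer> values) {
        // superstep 0: every vertex sends its value to its out-neighbors
        Map<Integer, List<Integer>> inbox = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : values.entrySet()) {
            for (int dst : edges.getOrDefault(e.getKey(), new int[0])) {
                inbox.computeIfAbsent(dst, k -> new ArrayList<>()).add(e.getValue());
            }
        }
        // later supersteps: a vertex forwards its value only when it grew;
        // the run halts once no messages remain (all vertices are idle)
        while (!inbox.isEmpty()) {
            Map<Integer, List<Integer>> outbox = new HashMap<>();
            for (Map.Entry<Integer, List<Integer>> e : inbox.entrySet()) {
                int v = e.getKey();
                int best = Collections.max(e.getValue());
                if (best > values.get(v)) {
                    values.put(v, best);
                    for (int dst : edges.getOrDefault(v, new int[0])) {
                        outbox.computeIfAbsent(dst, k -> new ArrayList<>()).add(best);
                    }
                }
            }
            inbox = outbox; // barrier: messages become visible next superstep
        }
        return values;
    }
}
```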
● PageRank-variant iterative algorithms
● Graph clustering
○ Label propagation
○ Max Clique
○ Triangle Closure
○ Finding related people, groups, interests.
● Shortest paths
○ Single source, s-t, all-to-all
● Finding Connected Components
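One of the path use cases above, single-source shortest paths, can be sketched in the same message-passing style (a plain-Java illustration, not the Giraph API): a vertex relaxes its distance when a smaller one arrives, then notifies its neighbors.

```java
import java.util.*;

// Illustrative plain-Java single-source shortest paths in BSP style
// (not the Giraph API). Edges: src -> array of {dst, weight} pairs.
class BspSssp {
    static Map<Integer, Integer> run(Map<Integer, int[][]> edges, int source) {
        Map<Integer, Integer> dist = new HashMap<>();   // absent = infinity
        Map<Integer, List<Integer>> inbox = new HashMap<>();
        inbox.put(source, List.of(0));                  // seed message to the source
        while (!inbox.isEmpty()) {                      // one iteration = one superstep
            Map<Integer, List<Integer>> outbox = new HashMap<>();
            for (Map.Entry<Integer, List<Integer>> e : inbox.entrySet()) {
                int v = e.getKey();
                int best = Collections.min(e.getValue());
                if (best < dist.getOrDefault(v, Integer.MAX_VALUE)) {
                    dist.put(v, best);                  // relax, then notify neighbors
                    for (int[] edge : edges.getOrDefault(v, new int[0][])) {
                        outbox.computeIfAbsent(edge[0], k -> new ArrayList<>())
                              .add(best + edge[1]);
                    }
                }
            }
            inbox = outbox;                             // synchronization barrier
        }
        return dist;
    }
}
```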
● Map-Reduce jobs on Hadoop
○ Not a good fit for iterative graph algorithms: each iteration is a separate job, with scheduling and disk I/O overhead.
● Google Pregel
○ Requires its own infrastructure
○ Not publicly available
○ Master is single point of failure.
● Message Passing Interface (MPI)
○ Not fault-tolerant
○ Too generic
How Giraph differs
● You can use an existing Hadoop cluster; no need for a separate infrastructure.
● Easy deployment with Amazon EMR
● Dynamic resource management
● Graph oriented API
● Open Source
● Fault tolerant; no SPOF except the Hadoop namenode and jobtracker
● Jython Support
● Shared variables among the workers.
● Each vertex computation can add/multiply a value to aggregators.
○ Holding the min/max value among all vertices
○ Holding sum of the vertex values.
○ Holding average value of vertex values.
○ Holding sum of mean square errors and stdev.
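The aggregator idea above can be sketched in plain Java (illustrative, not Giraph's Aggregator interface): vertices contribute values during a superstep, and the aggregated result becomes visible to everyone only after the superstep barrier.

```java
// Illustrative sum aggregator sketch (plain Java, not Giraph's Aggregator
// API). Vertices call aggregate() during a superstep; all workers read the
// finished value in the next superstep.
class SumAggregator {
    private long current = 0;   // value being built in this superstep
    private long lastValue = 0; // finished value from the previous superstep

    void aggregate(long v) { current += v; }         // per-vertex contribution

    long getAggregatedValue() { return lastValue; }  // visible next superstep

    void endSuperstep() {                            // barrier: publish and reset
        lastValue = current;
        current = 0;
    }
}
```

A min or max aggregator would look the same with `Math.min`/`Math.max` in place of the addition.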
● Master’s compute() always runs before the
slaves’ compute() (like a pre-superstep)
○ In compute(): aggregate vertex values into a sum
○ In MasterCompute: average = sum / N
● Aggregators are registered in MasterCompute.
● You can set aggregator values there.
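The average-via-aggregator step above can be sketched in plain Java (illustrative, not Giraph's MasterCompute API): the master reads the sum that vertices aggregated and derives the average before the workers' next superstep.

```java
// Illustrative MasterCompute-style step (plain Java, not the Giraph API):
// derive average = sum / N from aggregated values.
class MasterAverage {
    static double average(long sumOfVertexValues, long numVertices) {
        // in Giraph this logic would live in the master's compute(),
        // which runs before the workers' compute() in the next superstep
        return (double) sumOfVertexValues / numVertices;
    }
}
```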
WorkerContext
● Allows execution of user code on a per-worker basis.
● There's one WorkerContext per worker.
● Methods for pre/post-superstep and pre/post-application.
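The per-worker hooks can be sketched as a small plain-Java class (illustrative; Giraph's real class is org.apache.giraph.worker.WorkerContext). Because one instance exists per worker, it can hold state shared by every vertex computed on that worker.

```java
// Illustrative WorkerContext-style hooks (plain Java sketch, not the real
// Giraph class). One instance per worker; fields act as per-worker state.
class WorkerHooks {
    int superstepsSeen = 0;         // example of per-worker shared state

    void preApplication() {}        // once per worker, before superstep 0
    void preSuperstep() {}          // before this worker's compute() calls
    void postSuperstep() { superstepsSeen++; } // after this worker's compute() calls
    void postApplication() {}       // once per worker, after the last superstep
}
```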
Flexible Edge/Vertex Input
● Read edges/vertices from different sources.
● Multiple input sources can be combined.
● More map jobs (workers) = more parallel computing
● To overcome the slowest-worker problem,
multithreading is applied within each worker.
● Linear speedup in CPU-bound applications
such as k-means clustering, thanks to multithreading.
● Take a set of entire machines and use
multithreading to maximize resource utilization.
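Per-worker multithreading can be sketched with a standard thread pool (a plain-Java illustration, not Giraph's internal scheduler): each vertex partition becomes one task, so a worker's cores stay busy instead of idling behind one slow partition.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch of per-worker multithreading (plain Java): compute
// vertex partitions in parallel on a fixed-size thread pool. Summing a
// partition stands in for running compute() over its vertices.
class PartitionPool {
    static long sumPartitions(List<long[]> partitions, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (long[] part : partitions) {
                // one task per partition; tasks run concurrently
                results.add(pool.submit(() -> Arrays.stream(part).sum()));
            }
            long total = 0;
            for (Future<Long> f : results) { total += f.get(); }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```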
● Vertices and edges are stored as serialized byte arrays.
● Uses FastUtil-based Java primitive collections.
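The byte-array storage idea can be sketched with the standard java.io streams (an illustration of the technique, not Giraph's actual serialization code): a fixed-layout byte array replaces a live object, cutting per-object memory overhead and GC pressure.

```java
import java.io.*;

// Illustrative sketch of byte-array vertex storage (plain Java): a vertex
// (id, value) pair is kept as 16 serialized bytes instead of an object.
class SerializedValue {
    static byte[] write(long id, double value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeLong(id);       // 8 bytes: vertex id
            out.writeDouble(value);  // 8 bytes: vertex value
            return bytes.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    static double readValue(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            in.readLong();           // skip the id
            return in.readDouble();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }
}
```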
● Each aggregator is randomly assigned to one of the workers.
● The assigned worker is in charge of gathering the values of its aggregators
from all workers, performing the aggregation, and distributing the final values
to other workers.
● Aggregation responsibilities are balanced across all workers rather than
bottlenecked by the master.
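The ownership rule described above can be sketched with a stable hash (a plain-Java illustration; Giraph's actual assignment logic may differ): each named aggregator maps deterministically to one worker, so every worker knows who owns what without coordination.

```java
// Illustrative sketch of sharded aggregator ownership (plain Java): a
// stable hash of the aggregator's name picks its owning worker, spreading
// aggregation load across workers instead of funneling it through the master.
class AggregatorSharding {
    static int ownerOf(String aggregatorName, int numWorkers) {
        // floorMod keeps the result in [0, numWorkers) even for negative hashes
        return Math.floorMod(aggregatorName.hashCode(), numWorkers);
    }
}
```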
● PageRank on 1 trillion edges with 200 commodity
machines: 4 minutes/iteration.
● K-means on 1 billion input vectors × 100 features into
10,000 centroids: 10 minutes.
● Linear Scalability
● Version 1.0, on the way to 1.1
● Changing rapidly: expect backwards-incompatible changes.
● Documentation not mature yet.
● More algorithms to be contributed.
● More data sources to be ported.
● http://giraph.apache.org for more info
References
● Giraph: Large-scale graph processing infrastructure on Hadoop, 2011.
● Avery Ching, Scaling Apache Giraph to a trillion edges, Facebook, 2013.
● Nitay Joffe, Scaling Apache Giraph, Facebook, 2013.