Facebook's TAO & Unicorn data storage and search platforms
Graph Storage & Search at Facebook
A Systems Perspective …
• How do you store petabytes of graph data?
• How do you efficiently serve billions of reads and millions of writes each second?
• How do you search trillions of edges between tens of billions of users with a search latency of at most tens of milliseconds (1 millisecond on average)?
• TAO : A read-optimized graph data store that serves Facebook’s social graph.
• Unicorn : An online, in-memory, social-graph-aware search and indexing system.
1. Aggregating & Filtering hundreds of items.
2. Custom Tailored Page with extreme customization and privacy checks.
A walk down memory lane :
Scaling Memcache in Facebook (NSDI’ 13)
• Originally, Facebook was built by storing the social graph in MySQL and aggressively caching it with Memcache.
• Issues with the original architecture :
• Inefficient edge-list manipulation. (Key-value semantics require the entire edge list to be reloaded on any change.)
• Expensive read-after-write consistency : asynchronous master/slave replication poses a problem for caches in data centers that use a replica.
Goals for TAO
• Providing access to nodes and edges of a
constantly changing graph in data centers
across multiple regions.
• Optimize for reads and favor availability over consistency.
• TAO does not implement complete graph
primitives but provides sufficient
expressiveness to handle most application needs.
• Example: Rendering a check-in would query
this event’s underlying nodes and edges every
time. Different users might see different
versions of this check-in.
Data Models and APIs
• Objects and Associations :
• Objects are nodes and associations are edges.
• Objects are identified by a 64-bit integer (id); associations are identified by (source id, destination id) and an association type.
• At most one association of a given type exists between any two objects.
• Both associations and objects may contain key->value pairs.
• Actions may be encoded either as objects or associations (comments are objects, likes are associations).
• Although associations are directed, it is common for an association to be
tightly coupled with an inverse edge.
• Discovering the check-in object, however, requires the inbound edges or that
an id is stored in another Facebook system.
Object and Association APIs
• Object APIs :
• Allocate a new object and id.
• Retrieve, Update or Delete the object.
• There is no compare-and-set (the eventually consistent update semantics would undermine it).
• Association APIs :
• Edges could be bidirectional, either symmetrically like the example’s FRIEND
relationship or asymmetrically like AUTHORED and AUTHORED BY.
• Bidirectional edges are modeled as two separate associations. TAO provides
support for keeping associations in sync with their inverses, by allowing
association types to be configured with an inverse type.
• For such associations, creations, updates, and deletions are automatically
coupled with an operation on the inverse association.
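The inverse-type coupling described above can be sketched in a few lines. This is an illustrative in-memory model, not TAO's actual API; the names `assoc_add`, `assoc_get`, and `INVERSE_TYPE` are assumptions.

```python
# Toy model of TAO-style associations with configured inverse types.
# FRIEND is symmetric (its own inverse); AUTHORED pairs with AUTHORED_BY.
import time

INVERSE_TYPE = {"AUTHORED": "AUTHORED_BY", "AUTHORED_BY": "AUTHORED",
                "FRIEND": "FRIEND"}

assocs = {}  # (id1, assoc_type) -> {id2: (creation_time, data)}

def assoc_add(id1, atype, id2, data, _mirror=True):
    """Create an association; creation is automatically coupled with
    an operation on the inverse association, if one is configured."""
    assocs.setdefault((id1, atype), {})[id2] = (time.time(), data)
    inv = INVERSE_TYPE.get(atype)
    if inv and _mirror:
        assoc_add(id2, inv, id1, data, _mirror=False)

def assoc_get(id1, atype, id2):
    return assocs.get((id1, atype), {}).get(id2)
```

In a real deployment the forward and inverse edges may live on different shards, so this coupling is not atomic (as discussed later in the slides).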
• A characteristic of the social graph is that most of the data is old, but
many of the queries are for the newest subset. This creation-time
locality arises whenever an application focuses on recent items.
• For a famous celebrity like ‘Justin’, there might be thousands of
comments attached to his check-in, but only the most recent ones
will be rendered by default.
• TAO’s association queries are organized around association lists.
They hold associations arranged in descending order by the time
field : (id1, type) → [a_new … a_old]
• TAO enforces a per-association-type upper bound (typically 6,000) on
the limit used for an association query. To enumerate the
elements of a longer association list, the client must issue multiple queries.
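The paging pattern above can be sketched as follows; `assoc_range`, `enumerate_all`, and `QUERY_LIMIT` are illustrative names, not TAO's actual API.

```python
# Sketch of paginated association-list queries: lists are kept newest-first,
# and a server-side per-type limit caps each range query, so clients page
# through longer lists with repeated calls.
QUERY_LIMIT = 6000  # typical per-association-type upper bound from the text

def assoc_range(assoc_list, pos, limit):
    """Return up to `limit` associations starting at offset `pos` of a
    list already sorted by descending creation time."""
    limit = min(limit, QUERY_LIMIT)  # server-enforced bound
    return assoc_list[pos:pos + limit]

def enumerate_all(assoc_list, page=6000):
    """Client-side loop issuing multiple queries for a long list."""
    pos, out = 0, []
    while True:
        batch = assoc_range(assoc_list, pos, page)
        if not batch:
            return out
        out.extend(batch)
        pos += len(batch)
```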
Key Ideas behind TAO’s architecture
The data is persisted using MySQL.
The API is mapped to a small number of SQL queries.
Data is divided into logical shards. By default, all object types
are stored in one table and all association types in another.
Every “object_id” has a corresponding “shard_id”. Objects are
bound to a single shard throughout their lifetime.
An association is stored on the shard of its id1, so that every
association query can be served from a single server.
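A minimal sketch of this shard embedding, assuming a hypothetical bit layout and shard count (the real values differ):

```python
# Hypothetical TAO-style shard mapping: a shard id is embedded in each
# 64-bit object id, and an association lives on the shard of its id1.
NUM_SHARDS = 4096  # assumed value for illustration

def make_object_id(shard_id: int, sequence: int) -> int:
    """Pack a shard id into the low 12 bits of an object id (illustrative)."""
    return (sequence << 12) | shard_id

def shard_of_object(object_id: int) -> int:
    return object_id & 0xFFF  # low 12 bits hold the shard id in this sketch

def shard_of_association(id1: int, id2: int) -> int:
    # Associations are stored on the shard of their source object (id1),
    # so every association query can be served from a single server.
    return shard_of_object(id1)
```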
TAO Architecture ( Continued … )
• A region / tier is made of multiple closely located Data centers.
• Multiple cache servers make up a tier (a set of databases in a region is also called a
tier) that is collectively capable of answering any TAO request.
• Each cache request maps to a server based on the sharding scheme discussed earlier.
• The cache is filled on demand and evicted with an LRU policy.
• Write operations on an association with an inverse may involve two shards, since the
forward edge is stored on the shard for id1 and the inverse edge on the shard for id2.
• Handling writes across two shards involves issuing an RPC call to the member
hosting id2, which contacts the database to create the inverse association. Once
the inverse write is complete, the caching server issues a write to the database for the forward edge.
• TAO does not provide atomicity between the two updates. If a failure occurs, the
forward edge may exist without an inverse; these hanging associations are scheduled for
repair by an asynchronous job.
Leaders and Followers
• TAO builds a two-level cache hierarchy (L1 → L2). (A single cache layer with
all-to-all connections is susceptible to hot spots.)
• Clients communicate with the closest followers directly.
• Each shard is hosted by one leader, and all writes to the shard go
through that leader, so it is naturally consistent. Followers, on the
other hand, must be explicitly notified of updates made via other
follower tiers.
• An object update in the leader enqueues invalidation messages to
each corresponding follower.
• Leaders serialize concurrent writes that arrive from followers. The leader
protects databases from “thundering herds” by not issuing overlapping
concurrent writes and by limiting the maximum number of pending queries.
• High read workload scales with total number of follower servers.
• The assumption is that latency between followers and leaders is low.
• Followers behave identically in all regions, forwarding read misses and
writes to the local region’s leader tier. Leaders query the local region’s
database regardless of whether it is the master or slave. This means
that read latency is independent of inter-region latency.
• Writes are forwarded by the local leader to the leader in the
region with the master database. Read misses by followers are 25× as
frequent as writes in the workload, so read misses are served locally.
• Facebook chooses data center locations that are clustered into only a
few regions, where the intra-region latency is small (typically less than
1 millisecond). It is then sufficient to store one complete copy of the
social graph per region.
Scaling Geographically …
• Since each cache hosts multiple shards, a server may be both a master and a
slave at the same time. It is preferred to locate all of the master databases in a
single region.
• When an inverse association is mastered in a different region, TAO must traverse
an extra inter-region link to forward the inverse write.
• TAO embeds invalidation and refill messages in the database replication stream.
These messages are delivered in a region immediately after a transaction has
been replicated to a slave database. Delivering such messages earlier would
create cache inconsistencies, as reading from the local database would provide stale data.
• If a forwarded write is successful then the local leader will update its cache with
the fresh value, even though the local slave database probably has not yet been
updated by the asynchronous replication stream. In this case followers will
receive two invalidates or refills from the write, one that is sent when the write
succeeds and one that is sent when the write’s transaction is replicated to the
local slave database.
• In the end, consistency is the key!
• Imagine a scenario : Likes on your Facebook post magically increasing
or decreasing ?
• TAO’s master/slave design ensures that all reads can be satisfied
within a single region, at the expense of potentially returning stale
data to clients. As long as a user consistently queries the same
follower tier, the user will typically have a consistent view of TAO.
• All the data related to an object is serialized into a single ‘data’ column
(supporting a flexible schema).
• Shards are mapped to cache servers using Consistent Hashing. TAO
rebalances load among followers with shard cloning, in which reads
to a shard are served by multiple followers in a tier.
• Versioning is used to omit replies if data has not changed.
• The master database is the consistent source of truth. Certain requests
(e.g., authentication) can be marked as critical and proxied to the master.
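The consistent-hashing scheme mentioned above can be sketched with a simple hash ring; the hash function, virtual-node count, and server names here are assumptions, not TAO's real parameters.

```python
# Minimal consistent-hash ring mapping shards to cache servers. Adding a
# server only remaps the shards that now fall on the new server's points.
import bisect
import hashlib

def _h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, vnodes=64):
        # Each server gets `vnodes` points on the ring for smoother balance.
        self._points = sorted(
            (_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self._keys = [p for p, _ in self._points]

    def server_for(self, shard_id: int) -> str:
        # Walk clockwise to the next point; wrap around with modulo.
        i = bisect.bisect(self._keys, _h(str(shard_id))) % len(self._keys)
        return self._points[i][1]

ring = Ring(["cache-a", "cache-b", "cache-c"])
```

The key property: when a server is added, only the shards whose successor point belongs to the new server move; all other shard-to-server assignments are untouched.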
Failure Detection and Handling
• TAO servers employ aggressive network timeouts so as not to continue
waiting on responses that may never arrive.
• Databases are marked down in a global configuration if they crash, are taken
offline for maintenance, or get too far behind. When a master
database is down, one of its slaves is automatically promoted to be the new master.
• Followers Failure
• Followers in another tier (a backup) share responsibility for the shard.
• Leader Failure
• Followers route read requests around it directly to database.
• Write requests are rerouted to a random member of leader’s tier.
• Invalidation Message Failure
• Leaders queue message to disk if followers are unreachable.
• If a leader failure occurs and is replaced : All shards that map to it must be
invalidated in the followers, to restore consistency.
Some Performance Metrics
Replication: TAO’s slave storage servers lag their master by
less than 1 second during 85% of the tracing window, by
less than 3 seconds 99% of the time, and by less than 10
seconds 99.8% of the time.
What is Unicorn?
• An online, in-memory, “social graph aware” indexing system serving
billions of queries a day.
• The idea is to promote social proximity.
• Serves as the backend infrastructure for graph search.
• Searches all basic structured information in the social graph and
performs complex set operations on the results.
• Why is it a big deal?
Facebook engineers joked that, much like the mythical quadruped, this system would
solve all of their problems and heal their woes if only it existed.
Core Technical Ideas
• Applying common information retrieval architectural concepts in the
domain of social graph search.
• How do you promote socially relevant search results ?
• Building rich operators (apply & extract) that allow semantically rich
graph queries and avoid multiple-round-trip algorithms when serving results.
Data Model for Graph Search
• There are billions of users in the social graph. An average user
has approximately 130 friends.
• Best way to implement social graph (sparse) : Adjacency Lists.
• Hits = results
• Posting lists = adjacency lists
• Hit data = extra metadata
• The sort key helps us find globally important results first.
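A toy index-shard layout matching this mapping; the terms, ids, and `hits` helper are made up for illustration.

```python
# Each term's posting list is an adjacency list of
# (sort_key, hit_id, hit_data) entries; lower sort keys rank first.
index = {
    "friend:8": [(0, 10, None), (1, 11, None), (2, 13, None)],
    "likes:42": [(0, 11, None), (1, 12, None)],
}

def hits(term):
    """Return result ids for a term, globally most important first."""
    return [hit_id for _, hit_id, _ in
            sorted(index.get(term, []), key=lambda e: e[0])]
```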
Unicorn API & Popular Edge Types
• Clients send ‘Thrift’ requests to the server. (Thrift is Facebook’s own RPC protocol.)
• The request is routed to the closest Unicorn server.
• Several operators supported : Or, And, Difference.
• Metadata : ‘graduation year’ and ‘major’ for the attended edge type.
• In distributed systems: never, ever forget to shard.
• All Posting lists are sharded by ‘result_id’.
• Index servers store adjacency lists and perform
set operations on those lists.
• Each index server is responsible for a particular shard.
• Rack Aggregator benefits from the fact that
bandwidth to servers within a rack is higher.
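The set operators listed above (Or, And, Difference) amount to set algebra over posting lists of result ids; a minimal sketch, with names of my own choosing:

```python
# Index-server set operations over posting lists of result ids:
# And = intersection, Or = union, Difference = subtraction.
def op_and(a, b):
    return sorted(set(a) & set(b))

def op_or(a, b):
    return sorted(set(a) | set(b))

def op_difference(a, b):
    return sorted(set(a) - set(b))
```

Real index servers operate over sorted lists with merge-style algorithms rather than materialized sets, but the semantics are the same.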
Building and Updating Index
• Raw data is scraped from MySQL and indexes are built with Hadoop.
• The data is accessible via Hive.
• To avoid the lag common in batch processing, Facebook uses Scribe
to push the latest minute’s data.
• Each index server keeps track of the last updated timestamp.
• It all started with a Type Ahead Search.
• Users are shown a list of possible matches for the query as they type.
• Index servers for Type Ahead contain posting lists for every name
prefix up to a predefined character limit.
• These posting lists contain the ids of users whose first or last name
matches the prefix.
• A simple typeahead implementation would merely map input
prefixes to the posting lists for those prefixes and return the resultant users.
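The simple implementation just described can be sketched as follows; the prefix limit and all function names are assumptions for illustration.

```python
# Typeahead sketch: posting lists keyed by name prefixes up to a
# character limit, returning candidate user ids for what's typed so far.
PREFIX_LIMIT = 4  # assumed cap on indexed prefix length

def build_prefix_index(users):  # users: {user_id: "first last"}
    idx = {}
    for uid, name in users.items():
        for word in name.lower().split():
            for n in range(1, min(len(word), PREFIX_LIMIT) + 1):
                idx.setdefault(word[:n], set()).add(uid)
    return idx

def typeahead(idx, typed):
    """Look up the posting list for the typed prefix (truncated to limit)."""
    return sorted(idx.get(typed.lower()[:PREFIX_LIMIT], set()))
```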
• How do you make this socially relevant ?
Serving Socially Relevant Results
• How do you ensure that search results are
socially relevant ?
• Can we “AND” the solution with the friend list
of user ?
( Ignores results for users who might be relevant but
are not friends with the user performing the search).
• We actually want a way to force some fraction
of the final results to possess a trait, while not
requiring this trait from all results.
• The answer is WeakAnd operator.
• The WeakAnd operator is a modification of
And that allows operands to be missing from
some fraction of the results within an index shard.
• Implementation : allow only a finite number of
hits to be non-friends.
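A hedged sketch of a WeakAnd-style merge under that implementation idea; the parameter names (`max_misses`, `limit`) are assumptions, not Unicorn's API.

```python
# WeakAnd sketch: the `optional` operand (e.g. the friend list) may be
# missing from at most `max_misses` of the returned hits, so strong
# non-friend matches can still surface.
def weak_and(required, optional, max_misses, limit):
    opt = set(optional)
    out, misses = [], 0
    for hit in required:  # required hits assumed already ranked best-first
        if hit in opt:
            out.append(hit)
        elif misses < max_misses:
            out.append(hit)  # allowed to miss the optional operand
            misses += 1
        if len(out) == limit:
            break
    return out
```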
Priscilla Chan (3), looking for : “Melanie Mars” ….
• The StrongOr operator requires certain operands
to be present in some fraction of the matches.
• It enforces diversity in the result set.
• Example : fetching geographic
diversity in the result set.
• At least 20% of results from San Francisco.
• An optional weight parameter is supported as well.
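The diversity constraint above can be sketched as a greedy fill that reserves slots for the constrained trait; this is my own illustrative formulation, not Unicorn's algorithm.

```python
# StrongOr-style constraint: at least `min_frac` of the final `limit`
# results must carry the constrained trait (e.g. 20% from San Francisco).
import math

def strong_or(ranked_hits, constrained, min_frac, limit):
    cset = set(constrained)
    quota = math.ceil(min_frac * limit)  # slots reserved for the trait
    out, have = [], 0
    for hit in ranked_hits:
        if len(out) == limit:
            break
        free = limit - len(out)
        if hit in cset:
            out.append(hit)
            have += 1
        elif free > quota - have:  # spare room beyond the reserved slots
            out.append(hit)
    return out
```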
Scoring Search Results
• We might want to prioritize results for individuals who are close in age to
the user typing the query.
• This requires that we store the age (or birth date) of users with the index.
• For storing per-entity metadata, Unicorn provides a forward index, which is
simply a map of id to a blob that contains metadata for the id. The forward
index for an index shard only contains entries for the ids that reside on that shard.
• Based on Thrift parameters included with the client’s request, the client
can select a scoring function in the index server that scores each result.
• Aggregators give priority to documents with higher score.
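A minimal sketch of forward-index scoring under the age-proximity example; the blob layout and function names are assumptions.

```python
# Per-id metadata blobs in a forward index let a selectable scoring
# function rank results (here, by age proximity to the searcher).
forward_index = {10: {"age": 29}, 11: {"age": 45}, 13: {"age": 31}}

def score_by_age(result_id, searcher_age):
    meta = forward_index.get(result_id, {})
    return -abs(meta.get("age", 0) - searcher_age)  # closer age scores higher

def rank(result_ids, searcher_age):
    """Order results so that higher-scoring documents come first."""
    return sorted(result_ids,
                  key=lambda r: score_by_age(r, searcher_age),
                  reverse=True)
```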
• Our discussion of graph search spans : users, pages, apps, events etc.
• Imagine a scenario : we might want to know the pages liked by friends of
Bill who like trekking :
1. First execute the query (and friend:7 likers:42)
2. Collecting the results, and create a new query that produces the union of the
pages liked by any of these individuals.
• This is inefficient due to the multiple round trips involved between index servers and the client.
• The ability to use the results of previous executions as seeds for future
executions creates new applications for a search system, and was the
inspiration for Facebook’s Graph Search consumer product. The idea was to
build a general-purpose, online system for users to find entities in the
social graph that matched a set of user-defined constraints.
• Apply : a graph traversal operator that allows a
client to query a set of ids and then use the
resultant ids to construct and execute a new query.
• Apply is ‘syntactic sugar’ that allows the
system to perform expensive operations
lower in the hardware stack. By
allowing clients to express semantic intent,
optimizations become possible.
• Say you want to look up people tagged in
photos of “Jon Jones”.
• Solution: ‘Apply’ operator to look up photos
of Jon Jones in the photos vertical and then
to query the users vertical for people
tagged in these photos.
• A naive approach would need hundreds of billions of new
terms in the users vertical.
• Billions of “one to few” mappings.
• A better way : store the ids of people tagged
in a photo in the forward index data for that
photo in the photos vertical. This is a case
of partial denormalization : we store the
result ids in the forward index of
the secondary vertical and do the lookup inline.
• This is exactly what the Extract operator does.
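A toy two-phase lookup in the spirit of the apply/extract pattern just described; the verticals, terms, and `apply` helper here are made up for illustration.

```python
# Inner query runs in the photos vertical; its result ids seed an
# outer query against the users vertical.
photos_vertical = {"photos-of:jon": [101, 102]}
users_vertical = {"tagged-in:101": [7, 8], "tagged-in:102": [8, 9]}

def apply(prefix, inner_results, vertical):
    """Union the posting lists for `prefix + id` over the inner results."""
    out = set()
    for rid in inner_results:
        out |= set(vertical.get(f"{prefix}{rid}", []))
    return sorted(out)

inner = photos_vertical["photos-of:jon"]          # photos of Jon Jones
people = apply("tagged-in:", inner, users_vertical)  # people tagged in them
```

In the extract variant, the tagged-user ids would instead be stored in the forward index of the photos vertical, avoiding the second round trip entirely.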
• Privacy is crucial !
• Certain graph edges cannot be shown to all users but rather only to
users who are friends with, or in the same network as, a particular user.
• Unicorn itself does not have privacy information incorporated into its
index : it lacks the strict consistency and durability guarantees that
are needed for a full privacy solution.
• Facebook’s PHP frontend makes a proper privacy check on the results.
This design decision imposes a modest efficiency penalty on the backend.
• However, it also keeps privacy logic separate from Unicorn, in line with the
DRY (“Don’t Repeat Yourself”) principle of software development.
Lineage : Preserving Privacy
To enable clients to make privacy decisions, a string of
metadata is attached to each search result to describe its lineage.
Lineage is a structured representation of the edges
that were traversed in order to yield a result.
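As a rough illustration of what such a structured lineage record might look like (the exact field names and structure are assumptions, not Unicorn's format):

```python
# Hypothetical lineage attached to a search result: the edges traversed
# to yield the result, so the frontend can re-run privacy checks.
result = {
    "id": 1234,
    "lineage": [
        {"edge": "friend", "source": 7, "target": 42},
        {"edge": "tagged", "source": 42, "target": 1234},
    ],
}
```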