Scaling LinkedIn - A Brief History

Josh Clemm
Engineering and Product at Uber
www.linkedin.com/in/joshclemm

SCALING LINKEDIN
A BRIEF HISTORY
“Scaling = replacing all the components of a car while driving it at 100mph”
Via Mike Krieger, “Scaling Instagram”
LinkedIn started back in 2003 to
“connect to your network for better job
opportunities.”
It had 2,700 members in its first week.
First week growth guesses from founding team
[Chart: LinkedIn member growth, 2003 to 2015, from near zero to 400M members]
Fast forward to today...
LinkedIn is a global site with over 400 million
members
Web pages and mobile traffic are served at
tens of thousands of queries per second
Backend systems serve millions of queries per
second
LINKEDIN SCALE TODAY
How did we get there?
Let’s start from
the beginning
LINKEDIN’S ORIGINAL ARCHITECTURE
Circa 2003

[Diagram: the LEO monolith in front of a single DB]

● Huge monolithic app called Leo
● Java, JSP, Servlets, JDBC
● Served every page from the same SQL database
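For flavor, here is a minimal sketch (not LinkedIn’s actual code) of the kind of servlet-plus-JDBC page a monolith like Leo served. The table, columns, and connection URL are invented for illustration.

```java
// Hypothetical Leo-style page: a servlet rendering a profile straight
// from the shared SQL database over JDBC. Names are invented.
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ProfileServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    long memberId = Long.parseLong(req.getParameter("memberId"));
    try (Connection conn =
             DriverManager.getConnection("jdbc:mysql://db-host/linkedin");
         PreparedStatement ps = conn.prepareStatement(
             "SELECT name, headline FROM member WHERE id = ?")) {
      ps.setLong(1, memberId);
      try (ResultSet rs = ps.executeQuery()) {
        resp.setContentType("text/html");
        if (rs.next()) {
          // Every page, for every feature, was built this way in one app.
          resp.getWriter().printf("<h1>%s</h1><p>%s</p>",
              rs.getString("name"), rs.getString("headline"));
        } else {
          resp.sendError(HttpServletResponse.SC_NOT_FOUND);
        }
      }
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }
}
```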
So far so good, but two areas to improve:
1. The growing member to member
connection graph
2. The ability to search those members
MEMBER CONNECTION GRAPH
● Needed to live in-memory for top performance
● Used graph traversal queries not suited to the shared SQL database
● Different usage profile than other parts of the site

So, a dedicated service was created. LinkedIn’s first service (see the traversal sketch below).
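A minimal sketch of why this wanted a dedicated in-memory service: a second-degree connection query is a breadth-first traversal, cheap over an in-memory adjacency map but awkward as SQL self-joins. This is an illustrative stand-in, not LinkedIn’s graph service.

```java
// In-memory member graph with a bounded breadth-first traversal.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class MemberGraph {
  private final Map<Long, List<Long>> adjacency = new HashMap<>();

  /** Members within maxDegree hops of memberId, excluding the member. */
  public Set<Long> network(long memberId, int maxDegree) {
    Set<Long> visited = new HashSet<>();
    Queue<long[]> frontier = new ArrayDeque<>(); // {member, degree}
    visited.add(memberId);
    frontier.add(new long[] {memberId, 0});
    while (!frontier.isEmpty()) {
      long[] cur = frontier.poll();
      if (cur[1] == maxDegree) continue; // don't expand past the limit
      for (long next : adjacency.getOrDefault(cur[0], List.of())) {
        if (visited.add(next)) {
          frontier.add(new long[] {next, cur[1] + 1});
        }
      }
    }
    visited.remove(memberId);
    return visited;
  }
}
```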
MEMBER SEARCH
● Social networks need powerful search
● Lucene was used on top of our member graph

LinkedIn’s second service (see the Lucene sketch below).
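A minimal sketch of Lucene-backed member search in the spirit described above. It uses the modern Lucene API (the 2004-era API differed) and invented field names.

```java
// Index a member document, then run a keyword query against it.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class MemberSearch {
  public static void main(String[] args) throws Exception {
    Directory dir = new ByteBuffersDirectory();
    StandardAnalyzer analyzer = new StandardAnalyzer();

    try (IndexWriter writer =
             new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
      Document doc = new Document();
      doc.add(new TextField("name", "Jane Example", Store.YES));
      doc.add(new TextField("headline", "Distributed systems engineer", Store.YES));
      writer.addDocument(doc);
    }

    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      ScoreDoc[] hits = searcher
          .search(new QueryParser("headline", analyzer).parse("distributed"), 10)
          .scoreDocs;
      for (ScoreDoc hit : hits) {
        System.out.println(searcher.doc(hit.doc).get("name"));
      }
    }
  }
}
```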
LINKEDIN WITH CONNECTION GRAPH AND SEARCH
Circa 2004

[Diagram: LEO calls the Member Graph service over RPC and still reads the DB; connection / profile updates flow into the Lucene-based search index]
Getting better, but the single database was
under heavy load.
Vertically scaling helped, but we needed to
offload the read traffic...
REPLICA DBs
● Master/slave concept
● Read-only traffic served from replicas
● Writes go to the main DB (see the routing sketch below)
● An early version of Databus kept the DBs in sync

[Diagram: the main DB feeding replica DBs through a Databus relay]
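A minimal sketch of the read/write split described above, assuming two JDBC DataSources (names invented): writes go to the master, reads can go to a replica, accepting the slight staleness that asynchronous, Databus-style replication implies.

```java
// Route writes to the master and reads to a replica.
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RoutingDataSource {
  private final DataSource master;   // handles all writes (R/W)
  private final DataSource replica;  // serves read-only traffic (R/O)

  public RoutingDataSource(DataSource master, DataSource replica) {
    this.master = master;
    this.replica = replica;
  }

  /** Writes must see the source of truth. */
  public Connection forWrite() throws SQLException {
    return master.getConnection();
  }

  /** Reads tolerate slightly stale, asynchronously replicated data. */
  public Connection forRead() throws SQLException {
    return replica.getConnection();
  }
}
```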
REPLICA DBs TAKEAWAYS
● Good medium-term solution
● We could vertically scale servers for a while
● Master DBs have finite scaling limits
● These days, LinkedIn DBs use partitioning
LINKEDIN WITH REPLICA DBs
Circa 2006

[Diagram: LEO sends writes (R/W) to the main DB and reads (R/O) to replicas kept in sync by a Databus relay; connection updates flow to the Member Graph and profile updates to Search over RPC]
As LinkedIn continued to grow, the
monolithic application Leo was becoming
problematic.
Leo was difficult to release and debug, and the site kept going down...
IT WAS TIME TO... KILL LEO
SERVICE ORIENTED ARCHITECTURE
Circa 2008 on

Extracting services (Java Spring MVC) from the legacy Leo monolithic application

[Diagram: the Public Profile Web App, Recruiter Web App, Profile Service, and yet another service peeled out of LEO]
SERVICE ORIENTED ARCHITECTURE
● Goal - create vertical stacks of stateless services
● Frontend servers fetch data from many domains, build the HTML or JSON response
● Mid-tier services host APIs and business logic
● Data-tier or back-tier services encapsulate data domains
(a sketch of one such stack follows below)

[Diagram: a vertical stack - Profile Web App → Profile Service → Profile DB]
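A hypothetical Spring MVC sketch of one vertical stack: a stateless frontend controller calling a mid-tier ProfileService, which in turn fronts the profile data domain. All names here are invented for illustration.

```java
// Frontend tier: stateless controller building a JSON response.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ProfileController {
  private final ProfileService profileService;

  public ProfileController(ProfileService profileService) {
    this.profileService = profileService;
  }

  @RequestMapping("/profile/{memberId}")
  @ResponseBody
  public Profile profile(@PathVariable long memberId) {
    return profileService.getProfile(memberId); // mid-tier API call
  }
}

// Mid-tier API; the data tier behind it encapsulates the Profile DB.
interface ProfileService {
  Profile getProfile(long memberId);
}

record Profile(long memberId, String name, String headline) {}
```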
EXAMPLE MULTI-TIER ARCHITECTURE AT LINKEDIN

[Diagram: Browser / App → Frontend Web App → Mid-tier Services → Content Services (Profile, Connections, Groups) and Data Services (e.g. Edu Data Service), backed by DB, Voldemort, and Hadoop, with Kafka alongside]
PROS
● Stateless services
easily scale
● Decoupled domains
● Build and deploy
independently
CONS
● Ops overhead
● Introduces backwards
compatibility issues
● Leads to complex call
graphs and fanout
SERVICE ORIENTED ARCHITECTURE COMPARISON
bash$ eh -e %%prod | awk -F. '{ print $2 }' | sort | uniq | wc -l
756
● In 2003, LinkedIn had one service (Leo)
● By 2010, LinkedIn had over 150 services
● Today in 2015, LinkedIn has over 750 services
SERVICES AT LINKEDIN
Getting better, but LinkedIn was
experiencing hypergrowth...
CACHING
● Simple way to reduce load on servers and speed up responses
● Mid-tier caches store derived objects from different domains and reduce fanout
● Caches in the data layer
● We use memcache, couchbase, even Voldemort
(a read-through sketch follows below)

[Diagram: Frontend Web App → Mid-tier Service → Cache → DB, with caches at the mid and data tiers]
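A minimal in-process sketch of the read-through pattern these caches implement (the real thing sat in memcache, Couchbase, or Voldemort). It also shows why invalidation becomes every write path's problem.

```java
// Read-through cache: check the cache, fall through to a loader, populate.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughCache<K, V> {
  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Function<K, V> loader; // e.g., a DB or mid-tier lookup

  public ReadThroughCache(Function<K, V> loader) {
    this.loader = loader;
  }

  public V get(K key) {
    // Hit: serve the derived object without touching the backend.
    // Miss: load once, cache, and serve.
    return cache.computeIfAbsent(key, loader);
  }

  /** Invalidation is the hard part: every write path must remember this. */
  public void invalidate(K key) {
    cache.remove(key);
  }
}
```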
“There are only two hard problems in Computer Science: cache invalidation, naming things, and off-by-one errors.”
Via Twitter by Kellan Elliott-McCrea and later Jonathan Feinberg
CACHING TAKEAWAYS
● Caches are easy to add in the beginning, but the complexity adds up over time
● Over time LinkedIn removed many mid-tier caches because of the complexity around invalidation
● We kept caches closer to the data layer
CACHING TAKEAWAYS (cont.)
● Services must handle full load - caches improve speed; they are not permanent load-bearing solutions
● We’ll use a low-latency solution like Voldemort when appropriate and precompute results
LinkedIn’s hypergrowth was extending to
the vast amounts of data it collected.
Individual pipelines to route that data
weren’t scaling. A better solution was
needed...
KAFKA MOTIVATIONS
● LinkedIn generates a ton of data
○ Pageviews
○ Edits on profile, companies, schools
○ Logging, timing
○ Invites, messaging
○ Tracking
● Billions of events every day
● Separate and independently created pipelines
routed this data
A WHOLE LOT OF CUSTOM PIPELINES...
As LinkedIn needed to scale, each pipeline
needed to scale.
KAFKA
Distributed pub-sub messaging platform used as LinkedIn’s universal data pipeline
(a producer sketch follows below)

[Diagram: frontend and backend services publish into Kafka; the DWH, monitoring, analytics, Hadoop, and Oracle consume from it]
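A minimal sketch of publishing one tracking event into the pipeline using the standard Apache Kafka producer API; the broker address, topic name, and payload are invented for illustration.

```java
// Publish a page-view event that any consumer - Hadoop, monitoring,
// analytics - can tap from the same universal pipeline.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PageViewPublisher {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "kafka-host:9092");
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");

    try (Producer<String, String> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>(
          "page-views", "member-42", "{\"page\":\"/in/joshclemm\"}"));
    }
  }
}
```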
BENEFITS
● Enabled near realtime access to any data source
● Empowered Hadoop jobs
● Allowed LinkedIn to build realtime analytics
● Vastly improved site monitoring capability
● Enabled devs to visualize and track call graphs
● Over 1 trillion messages published per day, 10 million
messages per second
KAFKA AT LINKEDIN
OVER 1 TRILLION PUBLISHED DAILY
Let’s end with
the modern years
REST.LI
● Services extracted from Leo or created anew were inconsistent and often tightly coupled
● Rest.li was our move to a data-model-centric architecture
● It ensured a consistent, stateless, RESTful API model across the company
● By using JSON over HTTP, our new APIs
supported non-Java-based clients.
● By using Dynamic Discovery (D2), we got
load balancing, discovery, and scalability of
each service API.
● Today, LinkedIn has 1130+ Rest.li resources
and over 100 billion Rest.li calls per day
REST.LI (cont.)
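A minimal resource sketch modeled on Rest.li's public quickstart: annotations drive a consistent, stateless RESTful contract over JSON/HTTP, and D2 handles discovery and load balancing for clients. `Greeting` stands in for a Pegasus-generated record class.

```java
// A Rest.li collection resource exposing a typed GET.
import com.linkedin.restli.server.annotations.RestLiCollection;
import com.linkedin.restli.server.resources.CollectionResourceTemplate;

@RestLiCollection(name = "greetings", namespace = "com.example.greetings")
public class GreetingsResource
    extends CollectionResourceTemplate<Long, Greeting> {
  @Override
  public Greeting get(Long key) {
    // Rest.li handles routing and JSON serialization; with D2, clients
    // discover and load-balance across instances of this resource.
    // Greeting is assumed to be generated from a Pegasus schema.
    return new Greeting().setId(key).setMessage("Hello, Rest.li!");
  }
}
```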
REST.LI (cont.)
[Screenshot: Rest.li automatic API documentation]

REST.LI (cont.)
[Diagram: the Rest.li R2/D2 tech stack]
LinkedIn’s success with data infrastructure like Kafka and Databus led to the development of more and more scalable data infrastructure solutions...
● It was clear LinkedIn could build data infrastructure that enables long-term growth
● LinkedIn doubled down on infra solutions like:
○ Storage solutions
■ Espresso, Voldemort, Ambry (media)
○ Analytics solutions like Pinot
○ Streaming solutions
■ Kafka, Databus, and Samza
○ Cloud solutions like Helix and Nuage
DATA INFRASTRUCTURE
DATABUS
LinkedIn is a global company and was
continuing to see large growth. How else
to scale?
● Natural progression of horizontal scaling
● Replicate data across many data centers using storage technology like Espresso
● Pin users to a geographically close data center
● Difficult but necessary
MULTIPLE DATA CENTERS
● Multiple data centers are imperative to
maintain high availability.
● You need to avoid any single point of failure
not just for each service, but the entire site.
● LinkedIn runs out of three main data centers, with additional PoPs around the globe and more coming online every day...
MULTIPLE DATA CENTERS
MULTIPLE DATA CENTERS
LinkedIn's operational setup as of 2015
(circles represent data centers, diamonds represent PoPs)
Of course LinkedIn’s scaling story is never
this simple, so what else have we done?
● Each of LinkedIn’s critical systems has undergone its own rich history of scale (graph, search, analytics, profile backend, comms, feed)
● LinkedIn uses Hadoop / Voldemort for insights like People You May Know, Similar Profiles, Notable Alumni, and profile browse maps
WHAT ELSE HAVE WE DONE?
● Re-architected frontend approach using
○ Client templates
○ BigPipe
○ Play Framework
● LinkedIn added multiple tiers of proxies using
Apache Traffic Server and HAProxy
● We improved the performance of servers with
new hardware, advanced system tuning, and
newer Java runtimes.
WHAT ELSE HAVE WE DONE? (cont.)
Scaling sounds easy and quick to do, right?
“Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.”
Via Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
Josh Clemm
www.linkedin.com/in/joshclemm
THANKS!
LEARN MORE
● Blog version of this slide deck
https://engineering.linkedin.com/architecture/brief-history-scaling-linkedin
● Visual story of LinkedIn’s history
https://ourstory.linkedin.com/
● LinkedIn Engineering blog
https://engineering.linkedin.com
● LinkedIn Open-Source
https://engineering.linkedin.com/open-source
● LinkedIn’s communication system slides, which include the earliest LinkedIn architecture
http://www.slideshare.net/linkedin/linkedins-communication-architecture
● Slides which include the earliest LinkedIn data infra work
http://www.slideshare.net/r39132/linkedin-data-infrastructure-qcon-london-2012
LEARN MORE (cont.)
● Project Inversion - internal project to enable developer productivity (trunk-based model), faster deploys, unified services
http://www.bloomberg.com/bw/articles/2013-04-10/inside-operation-inversion-the-code-freeze-that-saved-linkedin
● LinkedIn’s use of Apache Traffic Server
http://www.slideshare.net/thenickberry/reflecting-a-year-after-migrating-to-apache-traffic-server
● Multi data center - testing failovers
https://www.linkedin.com/pulse/armen-hamstra-how-he-broke-linkedin-got-promoted-angel-au-yeung
LEARN MORE - KAFKA
● History and motivation around Kafka
http://www.confluent.io/blog/stream-data-platform-1/
● Thinking about streaming solutions as a commit log
https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
● Kafka enabling monitoring and alerting
http://engineering.linkedin.com/52/autometrics-self-service-metrics-collection
● Kafka enabling real-time analytics (Pinot)
http://engineering.linkedin.com/analytics/real-time-analytics-massive-scale-pinot
● Kafka’s current use and future at LinkedIn
http://engineering.linkedin.com/kafka/kafka-linkedin-current-and-future
● Kafka processing 1 trillion events per day
https://engineering.linkedin.com/apache-kafka/how-we_re-improving-and-advancing-kafka-linkedin
LEARN MORE - DATA INFRASTRUCTURE
● Open sourcing Databus
https://engineering.linkedin.com/data-replication/open-sourcing-databus-linkedins-low-latency-change-data-capture-system
● Samza streams to help LinkedIn view call graphs
https://engineering.linkedin.com/samza/real-time-insights-linkedins-performance-using-apache-samza
● Real-time analytics (Pinot)
http://engineering.linkedin.com/analytics/real-time-analytics-massive-scale-pinot
● Introducing the Espresso data store
http://engineering.linkedin.com/espresso/introducing-espresso-linkedins-hot-new-distributed-document-store
LEARN MORE - FRONTEND TECH
● LinkedIn’s use of client templates
○ Dust.js
http://www.slideshare.net/brikis98/dustjs
○ Profile
http://engineering.linkedin.com/profile/engineering-new-linkedin-profile
● BigPipe on LinkedIn’s homepage
http://engineering.linkedin.com/frontend/new-technologies-new-linkedin-home-page
● Play Framework
○ Introduction at LinkedIn
https://engineering.linkedin.com/play/composable-and-streamable-play-apps
○ Switching to a non-blocking asynchronous model
https://engineering.linkedin.com/play/play-framework-async-io-without-thread-pool-and-callback-hell
LEARN MORE - REST.LI
● Introduction to Rest.li and how it helps LinkedIn scale
http://engineering.linkedin.com/architecture/restli-restful-service-architecture-scale
● How Rest.li expanded across the company
http://engineering.linkedin.com/restli/linkedins-restli-moment
LEARN MORE - SYSTEM TUNING
● JVM memory tuning
http://engineering.linkedin.com/garbage-collection/garbage-collection-optimization-high-throughput-and-low-latency-java-applications
● System tuning
http://engineering.linkedin.com/performance/optimizing-linux-memory-management-low-latency-high-throughput-databases
● Optimizing JVM tuning automatically
https://engineering.linkedin.com/java/optimizing-java-cms-garbage-collections-its-difficulties-and-using-jtune-solution
LinkedIn continues to grow quickly and there’s
still a ton of work we can do to improve.
We’re working on problems that very few ever
get to solve - come join us!
WE’RE HIRING