Data Infrastructure @ LinkedIn
Sid Anand
QCon London 2012

@r39132




                                 1
About Me
Current Life…
  LinkedIn

      Web / Software Engineering
           Search, Network, and Analytics (SNA)
               Distributed Data Systems (DDS)
                   Me

In a Previous Life…
  Netflix, Cloud Database Architect
  eBay, Web Development, Research Lab, & Search Engine




And Many Years Prior…
  Studying Distributed Systems at Cornell University


                                       @r39132                2
Our mission
Connect the world’s professionals to make
  them more productive and successful




                  @r39132                   3
The world’s largest professional network
Over 60% of members are now international

   150M+ members *
   75% of Fortune 100 Companies use LinkedIn to hire **
   >2M Company Pages ***
   16 Languages
   ~4.2B Professional searches in 2011 ***

   [Chart: LinkedIn Members (Millions), 2004–2010: 2, 4, 8, 17, 32, 55, 90]

                                                                    *as of February 9, 2012
                                                                   **as of September 30, 2011
                                                                  ***as of December 31, 2011




                                                        @r39132                                          4
Other Company Facts

•  Headquartered in Mountain View, Calif., with offices around the world *




•  As of December 31, 2011, LinkedIn has 2,116 full-time employees located
   around the world.

    •  Currently around 650 people work in Engineering
        •  400 in Web/Software Engineering
            •  Plan to add another 200 in 2012                               ***

        •  250 in Operations



                                                                     *as of February 9, 2012
                                                                 **as of September 30, 2011
                                                                 ***as of December 31, 2011




                                    @r39132                                              5
Agenda

  Company Overview
  Architecture
   –  Data Infrastructure Overview
   –  Technology Spotlight
           Oracle
           Voldemort
           DataBus
           Kafka
  Q & A




                                @r39132   6
LinkedIn : Architecture	


Overview

•  Our site runs primarily on Java, with some use of Scala for specific
   infrastructure

•  What runs on Scala?
   •  Network Graph Service
   •  Kafka

•  Most of our services run on Apache + Jetty




                                 @r39132                             7
LinkedIn : Architecture

A web page requests information A and B

   Presentation Tier       A thin layer focused on building the UI. It assembles
                           the page by making parallel requests to BST services.

   Business Service Tier   Encapsulates business logic. Can call other BST
                           clusters and its own DST cluster.

   Data Service Tier       Encapsulates DAL logic and is concerned with one
                           Oracle schema.

   Data Infrastructure     Concerned with the persistent storage of and easy
                           access to data (Oracle Master/Slave, Voldemort,
                           Memcached).



                                                   @r39132                                                  8
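
As a rough illustration of the presentation-tier fan-out described above, here is a minimal sketch using plain java.util.concurrent; the service interfaces, method names, and return types are hypothetical placeholders, not LinkedIn's actual service APIs.

```java
// Minimal sketch of a presentation-tier fan-out: request fragments A and B from
// two BST services in parallel and assemble the page. Service interfaces are
// hypothetical placeholders.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PageAssembler {

    interface ProfileService { String fetchProfileFragment(long memberId); }   // fragment "A"
    interface NetworkService { String fetchNetworkFragment(long memberId); }   // fragment "B"

    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final ProfileService profiles;
    private final NetworkService network;

    public PageAssembler(ProfileService profiles, NetworkService network) {
        this.profiles = profiles;
        this.network = network;
    }

    public String renderPage(final long memberId) throws Exception {
        // Issue both BST calls in parallel rather than serially.
        Future<String> a = pool.submit(new Callable<String>() {
            public String call() { return profiles.fetchProfileFragment(memberId); }
        });
        Future<String> b = pool.submit(new Callable<String>() {
            public String call() { return network.fetchNetworkFragment(memberId); }
        });
        // Assemble the UI once both fragments arrive.
        return "<page>" + a.get() + b.get() + "</page>";
    }
}
```
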
LinkedIn : Data Infrastructure Technologies	

•    Database Technologies
      •  Oracle
      •  Voldemort

      •  Espresso

•    Data Replication Technologies
      •  Kafka
      •  DataBus

•    Search Technologies
      •  Zoie – real-time search and indexing with Lucene
      •  Bobo – faceted search library for Lucene

      •  SenseiDB – fast, real-time, faceted, KV and full-text Search Engine and more




                                          @r39132                                10
LinkedIn : Data Infrastructure Technologies	

This talk will focus on a few of the key technologies below!

•    Database Technologies
      •  Oracle
      •  Voldemort
      •  Espresso – A new K-V store under development

•    Data Replication Technologies
      •  Kafka
      •  DataBus

•    Search Technologies

      •  Zoie – real-time search and indexing with Lucene
      •  Bobo – faceted search library for Lucene
      •  SenseiDB – fast, real-time, faceted, KV and full-text Search Engine and more




                                         @r39132                                  11
LinkedIn Data Infrastructure Technologies

Oracle: Source of Truth for User-Provided Data




                              @r39132            12
Oracle : Overview	

Oracle
•  All user-provided data is stored in Oracle – our current source of truth
•  About 50 schemas running on tens of physical instances

•  With our user base and traffic growing at an accelerating pace, how do we
   scale Oracle for user-provided data?

Scaling Reads
•  Oracle Slaves (c.f. DSC)
•  Memcached
•  Voldemort – for key-value lookups

Scaling Writes
•  Move to more expensive hardware or replace Oracle with something else




                                       @r39132                                    13
Oracle : Overview – Data Service Context	

 Scaling Oracle Reads using DSC



   DSC uses a token (e.g. cookie) to ensure that a reader always
    sees his or her own writes immediately

     –  If I update my own status, it is okay if you don’t see the
        change for a few minutes, but I have to see it immediately




                                @r39132                              14
Oracle : Overview – How Does DSC Work?

    When a user writes data to the master, the DSC token (for that
     data domain) is updated with a timestamp

    When the user reads data, we first attempt to read from a
     replica (a.k.a. slave) database

    If the data in the slave is older than the data in the DSC token,
     we read from the Master instead




                                 @r39132                             15
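
To make the token-based routing above concrete, here is a minimal sketch of a DSC-style read router. All class and method names are hypothetical; the real implementation (a token per data domain carried in a cookie, slave lag tracking, etc.) is more involved.

```java
// Minimal sketch of DSC-style read routing (hypothetical names; not LinkedIn's
// actual implementation).
import java.sql.Connection;

public class DscRouter {

    /** Token carried with the user (e.g. in a cookie): timestamp of their last write. */
    public static final class DscToken {
        private final long lastWriteMillis;
        public DscToken(long lastWriteMillis) { this.lastWriteMillis = lastWriteMillis; }
        public long lastWriteMillis() { return lastWriteMillis; }
    }

    /** Hypothetical holder of the master and slave connection pools. */
    public interface DataSourcePair {
        Connection masterConnection() throws Exception;
        Connection slaveConnection() throws Exception;
        long slaveReplicationTimestamp() throws Exception; // how far the slave has applied
    }

    private final DataSourcePair dataSources;

    public DscRouter(DataSourcePair dataSources) { this.dataSources = dataSources; }

    /** On write: go to the master and refresh the token with the write timestamp. */
    public DscToken write(Runnable writeOnMaster) {
        writeOnMaster.run();
        return new DscToken(System.currentTimeMillis());
    }

    /** On read: use a slave unless it has not yet caught up to this user's last write. */
    public Connection chooseReadConnection(DscToken token) throws Exception {
        long slaveHighWaterMark = dataSources.slaveReplicationTimestamp();
        if (token != null && token.lastWriteMillis() > slaveHighWaterMark) {
            // Slave is stale for this user: serve their own writes from the master.
            return dataSources.masterConnection();
        }
        return dataSources.slaveConnection();
    }
}
```
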
LinkedIn Data Infrastructure Technologies
Voldemort: Highly-Available Distributed Data Store




                              @r39132                16
Voldemort : Overview	

•    A distributed, persistent key-value store influenced by the AWS Dynamo paper

•    Key Features of Dynamo
        Highly Scalable, Available, and Performant
        Achieves this via Tunable Consistency
          •  For higher consistency, the user accepts lower availability, scalability, and
              performance, and vice-versa

        Provides several self-healing mechanisms when data does become inconsistent
          •  Read Repair
                 Repairs value for a key when the key is looked up/read
          •  Hinted Handoff
                 Buffers value for a key that wasn’t successfully written, then writes it later
          •  Anti-Entropy Repair
                 Scans the entire data set on a node and fixes it

        Provides means to detect node failure and a means to recover from node failure
          •  Failure Detection
          •  Bootstrapping New Nodes

                                                @r39132                                            17
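
The self-healing mechanisms above rely on versioning each value (Dynamo uses vector clocks) so that replicas can tell whether one copy supersedes another or the two genuinely conflict. Below is a minimal, self-contained sketch of that comparison; it is illustrative only and is not Voldemort's own versioning code.

```java
// Illustrative vector-clock comparison; Voldemort's versioning classes are richer.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class VectorClockSketch {

    public enum Order { BEFORE, AFTER, CONCURRENT, EQUAL }

    /** node id -> logical counter, bumped by the node that coordinates a write */
    private final Map<String, Long> counters = new HashMap<String, Long>();

    public void increment(String nodeId) {
        Long current = counters.get(nodeId);
        counters.put(nodeId, current == null ? 1L : current + 1L);
    }

    /** Decide whether clock a supersedes b, is superseded by it, or conflicts with it. */
    public static Order compare(VectorClockSketch a, VectorClockSketch b) {
        Set<String> nodes = new HashSet<String>(a.counters.keySet());
        nodes.addAll(b.counters.keySet());
        boolean aAhead = false, bAhead = false;
        for (String node : nodes) {
            long ca = a.counters.containsKey(node) ? a.counters.get(node) : 0L;
            long cb = b.counters.containsKey(node) ? b.counters.get(node) : 0L;
            if (ca > cb) aAhead = true;
            if (cb > ca) bAhead = true;
        }
        if (aAhead && bAhead) return Order.CONCURRENT; // true conflict: must be resolved
        if (aAhead) return Order.AFTER;
        if (bAhead) return Order.BEFORE;
        return Order.EQUAL;
    }
}
```

Read repair, for example, can discard any replica whose clock compares as BEFORE the freshest one; only truly CONCURRENT versions need conflict resolution.
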
Voldemort : Overview

API
   VectorClock<V> get (K key)
   put (K key, VectorClock<V> value)
   applyUpdate(UpdateAction action, int retries)

Layered, Pluggable Architecture
   Client:  Client API → Conflict Resolution → Serialization → Repair Mechanism
            → Failure Detector → Routing
   Server:  Repair Mechanism → Failure Detector → Routing → Storage Engine
            (plus an Admin interface)

Voldemort-specific Features
  Implements a layered, pluggable architecture

  Each layer implements a common interface (c.f. API). This allows us to replace
   or remove implementations at any layer

     •  Pluggable data storage layer
            BDB JE, Custom RO storage, etc…

     •  Pluggable routing supports
            Single or Multi-datacenter routing



                                                     @r39132                                  18
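
For reference, a get/put round-trip through the open-source Voldemort Java client looks roughly like the sketch below. The bootstrap URL and store name are placeholders, and exact class names may differ slightly across Voldemort versions.

```java
// Rough sketch of a Voldemort client get/put (bootstrap URL and store name are placeholders).
import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;
import voldemort.versioning.Versioned;

public class VoldemortClientExample {
    public static void main(String[] args) {
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://voldemort-host:6666"));
        StoreClient<String, String> client = factory.getStoreClient("member-data"); // placeholder store

        // Reads come back wrapped in a version so writes can be conditioned on it.
        Versioned<String> value = client.get("member:12345");
        if (value == null) {
            client.put("member:12345", "initial value");
        } else {
            value.setObject("updated value");
            client.put("member:12345", value);
        }
    }
}
```

The same get() path is what serves the read-only stores described later; for those, the data is bulk-built offline and the client only ever reads.
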
Voldemort : Overview

Layered, Pluggable Architecture (same layer stack as on the previous slide,
split between client and server)

Voldemort-specific Features

•  Supports Fat Client or Fat Server
    •  Repair Mechanism + Failure Detector + Routing can run on the server or
       the client

•  LinkedIn currently runs the Fat Client, but we would like to move to a Fat
   Server model




                                       @r39132                                  19
Where Does LinkedIn use
      Voldemort?




          @r39132         20
Voldemort : Usage Patterns @ LinkedIn	



 2 Usage-Patterns

   Read-Write Store
     –  Uses BDB JE for the storage engine
     –  50% of Voldemort Stores (aka Tables) are RW


   Read-only Store
     –  Uses a custom Read-only format
     –  50% of Voldemort Stores (aka Tables) are RO


   Let’s look at the RO Store


                                 @r39132              21
Voldemort : RO Store Usage at LinkedIn

   People You May Know
   Viewers of this profile also viewed
   Related Searches
   Events you may be interested in
   LinkedIn Skills
   Jobs you may be interested in





                                               @r39132                                       22
Voldemort : Usage Patterns @ LinkedIn	

RO Store Usage Pattern
1.    Use Hadoop to build a model

2.    Voldemort loads the output of Hadoop

3.    Voldemort serves fast key-value look-ups on the site
      –    e.g. For key=“Sid Anand”, get all the people that “Sid Anand” may know!
      –    e.g. For key=“Sid Anand”, get all the jobs that “Sid Anand” may be interested in!




                                               @r39132                                         23
How Do The Voldemort RO
     Stores Perform?




          @r39132         24
Voldemort : RO Store Performance : TP vs. Latency

   [Chart: median and 99th-percentile latency (ms) vs. throughput (qps) for
    MySQL and Voldemort; 100 GB data, 24 GB RAM]


                                                                    @r39132                                                   25
LinkedIn Data Infrastructure Solutions
Databus : Timeline-Consistent Change Data Capture




                               @r39132              26
Where Does LinkedIn use
       DataBus?




          @r39132         27
DataBus : Use-Cases @ LinkedIn

   [Diagram: updates flow into an Oracle master; DataBus carries the data change
    events to the Standardization, Search Index, Graph Index, and Read Replica
    services]

A user updates his profile with skills and position history. He also accepts a connection.

•    The write is made to an Oracle Master and DataBus replicates:
      •  the profile change to the Standardization service
             E.g. the many forms of IBM are canonicalized for search-friendliness and
              recommendation-friendliness
      •  the profile change to the Search Index service
             Recruiters can find you immediately by new keywords
      •  the connection change to the Graph Index service
             The user can now start receiving feed updates from his new connections immediately


                                              @r39132                                     28
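
As a concrete (but hypothetical) picture of the consumer side, the standardization use-case above boils down to a callback that receives each change event in commit order. The interface and event shape below are illustrative placeholders, not the actual Databus client API.

```java
// Hypothetical Databus-style consumer sketch (names are illustrative, not the real API).
public class StandardizationConsumer {

    /** A change event decoded from the relay's Avro payload (illustrative shape). */
    public static final class ProfileChangeEvent {
        public final long memberId;
        public final String company;   // e.g. "I.B.M.", "IBM Corp", ...
        public ProfileChangeEvent(long memberId, String company) {
            this.memberId = memberId;
            this.company = company;
        }
    }

    /** Callback invoked by the client library for each event, in commit order. */
    public void onEvent(ProfileChangeEvent event) {
        String canonicalCompany = canonicalize(event.company);
        // ... write the canonical form wherever search / recommendations read it from
        System.out.println(event.memberId + " -> " + canonicalCompany);
    }

    /** Toy canonicalization: collapse the many spellings of IBM into one form. */
    private String canonicalize(String company) {
        String normalized = company.toLowerCase().replaceAll("[^a-z]", "");
        if (normalized.startsWith("ibm") || normalized.contains("internationalbusinessmachines")) {
            return "IBM";
        }
        return company.trim();
    }
}
```
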
DataBus : Architecture

   [Diagram: the DB's changes are captured into a Relay (in-memory event window);
    the Bootstrap service tails the Relay's on-line changes into its own DB]

DataBus consists of 2 services
•  Relay Services
    •  Sharded
    •  Maintain an in-memory buffer per shard
    •  Each shard polls Oracle and then deserializes transactions into Avro

•  Bootstrap Service
    •  Picks up online changes as they appear in the Relay
    •  Supports 2 types of operations from clients
           If a client falls behind and needs records older than what the relay
            has, Bootstrap can send consolidated deltas!
           If a new client comes on line, Bootstrap can send a consistent
            snapshot


                                  @r39132                                        29
DataBus : Architecture

   [Diagram: the DB's changes are captured into the Relay (event window); the
    Databus Client Lib delivers on-line changes to consumers 1..n, and clients
    that fall behind or start fresh get a consolidated delta since time T or a
    consistent snapshot at U from the Bootstrap service]

Guarantees
  Transactional semantics
  In-commit-order Delivery
  At-least-once delivery
  Durability (by data source)
  High-availability and reliability
  Low latency
                                  @r39132                                       30
DataBus : Architecture - Bootstrap

    Generate consistent snapshots and consolidated deltas during continuous
     updates with long-running queries

   [Diagram: the Bootstrap server reads recent events from the Relay's event
    window; a Log Writer appends them to Log Storage, and a Log Applier replays
    them into Snapshot Storage; clients consume either on-line changes or the
    bootstrap output through the Databus Client Lib]


                                    @r39132                                        31
LinkedIn Data Infrastructure Solutions
Kafka: High-Volume Low-Latency Messaging System




                               @r39132            32
Kafka : Usage at LinkedIn	

Whereas DataBus is used for database change capture and
replication, Kafka is used for application-level data streams

Examples:
•  End-user Action Tracking (a.k.a. Web Tracking) of
    •  Emails opened
    •  Pages seen
    •  Links followed
    •  Searches executed

•    Operational Metrics
     •  Network & System metrics such as
         •  TCP metrics (connection resets, message resends, etc…)
         •  System metrics (iops, CPU, load average, etc…)

                                 @r39132                         33
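
To ground the tracking example, publishing an event from the web tier looked roughly like the sketch below with the Kafka 0.7-era Java producer API in use at the time. The topic name, ZooKeeper address, and event payload are placeholders, and class names may differ in other Kafka versions.

```java
// Rough sketch of publishing a tracking event with the Kafka 0.7-era producer API.
// Topic, ZooKeeper address, and payload are placeholders.
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class TrackingEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "zookeeper-host:2181");                  // broker discovery via ZooKeeper
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

        // One "page seen" event; in practice events are batched and schema-encoded.
        String event = "{\"member\":12345,\"page\":\"/in/sidanand\",\"ts\":" + System.currentTimeMillis() + "}";
        producer.send(new ProducerData<String, String>("page-view-events", event));

        producer.close();
    }
}
```
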
Kafka : Overview

   [Diagram: the WebTier pushes events (~100 MB/sec, sequential writes) into the
    Broker Tier, which holds Topics 1..N; Consumers pull events (~200 MB/sec) via
    sendfile, iterating over streams through the Kafka Client Lib; topic/partition
    ownership and consumer offset management are coordinated via Zookeeper]

Features                   Guarantees                   Scale
  Pub/Sub                    At least once delivery       Billions of Events
  Batch Send/Receive         Very high throughput         TBs per day
  System Decoupling          Low latency                  Inter-colo: few seconds
                             Durability                   Typical retention: weeks
                             Horizontally Scalable

                                                      @r39132                                                 34
Kafka : Overview	

Key Design Choices

•  When reading from a file and sending to a network socket, we typically incur 4 buffer copies and
   2 OS system calls
        Kafka leverages the sendfile API to eliminate 2 of the buffer copies and 1 of the
         system calls

•  No double-buffering of messages - we rely on the OS page cache and do not store a copy of
   the message in the JVM
       Less pressure on memory and GC

       If the Kafka process is restarted on a machine, recently accessed messages are still in
        the page cache, so we get the benefit of a warm start

•  Kafka doesn't keep track of which messages have been consumed -- i.e. no bookkeeping
   overhead
       Instead, messages have time-based SLA expiration -- after 7 days, messages are deleted




                                              @r39132                                        35
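
For reference, the zero-copy path described above is exposed to the JVM through FileChannel.transferTo, which maps to sendfile(2) on Linux. A minimal sketch follows; the file path, host, and port are placeholders.

```java
// Minimal zero-copy sketch: stream a log segment to a consumer socket via
// FileChannel.transferTo, which uses sendfile(2) on Linux where available.
// File path, host, and port are placeholders.
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws Exception {
        try (FileChannel log = FileChannel.open(Paths.get("/tmp/page-view-events-0.log"),
                                                StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("consumer-host", 9999))) {
            long position = 0;
            long remaining = log.size();
            while (remaining > 0) {
                // Bytes move from the page cache to the socket without being copied into the JVM heap.
                long sent = log.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```
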
How Does Kafka Perform?




          @r39132         36
Kafka : Performance : Throughput vs. Latency

   [Chart: consumer latency in ms (0–250) vs. producer throughput in MB/sec
    (0–100); 100 topics, 1 producer, 1 broker]

                                                   @r39132                       37
Kafka : Performance : Linear Incremental Scalability

   [Chart: throughput in MB/s vs. number of brokers (10 topics, broker flush
    interval 100K): 1 broker – 101, 2 brokers – 190, 3 brokers – 293,
    4 brokers – 381]

                                                 @r39132                                  38
Kafka : Performance : Resilience as Messages Pile Up

   [Chart: throughput in msg/s vs. unconsumed data in GB, from 10 GB up to
    ~1,039 GB (1 topic, broker flush interval 10K)]

                                                                         @r39132                                         39
Acknowledgments	

 Presentation & Content
   Chavdar Botev (DataBus)       @cbotev
   Roshan Sumbaly (Voldemort)    @rsumbaly
   Neha Narkhede (Kafka)         @nehanarkhede



 Development Team
     Aditya Auradkar, Chavdar Botev, Shirshanka Das, Dave DeMaagd, Alex
     Feinberg, Phanindra Ganti, Lei Gao, Bhaskar Ghosh, Kishore
     Gopalakrishna, Brendan Harris, Joel Koshy, Kevin Krawez, Jay Kreps, Shi
     Lu, Sunil Nagaraj, Neha Narkhede, Sasha Pachev, Igor Perisic, Lin Qiao,
     Tom Quiggle, Jun Rao, Bob Schulman, Abraham Sebastian, Oliver Seeliger,
     Adam Silberstein, Boris Skolnick, Chinmay Soman, Roshan Sumbaly, Kapil
     Surlaker, Sajid Topiwala, Balaji Varadarajan, Jemiah Westerman, Zach
     White, David Zhang, and Jason Zhang




                                   @r39132                                40
Any Questions?




   @r39132     41

Airflow @ Agari
 
Resilient Predictive Data Pipelines (GOTO Chicago 2016)
Resilient Predictive Data Pipelines (GOTO Chicago 2016)Resilient Predictive Data Pipelines (GOTO Chicago 2016)
Resilient Predictive Data Pipelines (GOTO Chicago 2016)
 
Resilient Predictive Data Pipelines (QCon London 2016)
Resilient Predictive Data Pipelines (QCon London 2016)Resilient Predictive Data Pipelines (QCon London 2016)
Resilient Predictive Data Pipelines (QCon London 2016)
 
Software Developer and Architecture @ LinkedIn (QCon SF 2014)
Software Developer and Architecture @ LinkedIn (QCon SF 2014)Software Developer and Architecture @ LinkedIn (QCon SF 2014)
Software Developer and Architecture @ LinkedIn (QCon SF 2014)
 
LinkedIn's Segmentation & Targeting Platform (Hadoop Summit 2013)
LinkedIn's Segmentation & Targeting Platform (Hadoop Summit 2013)LinkedIn's Segmentation & Targeting Platform (Hadoop Summit 2013)
LinkedIn's Segmentation & Targeting Platform (Hadoop Summit 2013)
 
Building a Modern Website for Scale (QCon NY 2013)
Building a Modern Website for Scale (QCon NY 2013)Building a Modern Website for Scale (QCon NY 2013)
Building a Modern Website for Scale (QCon NY 2013)
 
Hands On with Maven
Hands On with MavenHands On with Maven
Hands On with Maven
 
Learning git
Learning gitLearning git
Learning git
 

Recently uploaded

How to become a GDSC Lead GDSC MI AOE.pptx
How to become a GDSC Lead GDSC MI AOE.pptxHow to become a GDSC Lead GDSC MI AOE.pptx
How to become a GDSC Lead GDSC MI AOE.pptxKaustubhBhavsar6
 
From the origin to the future of Open Source model and business
From the origin to the future of  Open Source model and businessFrom the origin to the future of  Open Source model and business
From the origin to the future of Open Source model and businessFrancesco Corti
 
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedIn
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedInOutage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedIn
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedInThousandEyes
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightSafe Software
 
TrustArc Webinar - How to Live in a Post Third-Party Cookie World
TrustArc Webinar - How to Live in a Post Third-Party Cookie WorldTrustArc Webinar - How to Live in a Post Third-Party Cookie World
TrustArc Webinar - How to Live in a Post Third-Party Cookie WorldTrustArc
 
How to release an Open Source Dataweave Library
How to release an Open Source Dataweave LibraryHow to release an Open Source Dataweave Library
How to release an Open Source Dataweave Libraryshyamraj55
 
Top 10 Squarespace Development Companies
Top 10 Squarespace Development CompaniesTop 10 Squarespace Development Companies
Top 10 Squarespace Development CompaniesTopCSSGallery
 
March Patch Tuesday
March Patch TuesdayMarch Patch Tuesday
March Patch TuesdayIvanti
 
Keep Your Finger on the Pulse of Your Building's Performance with IES Live
Keep Your Finger on the Pulse of Your Building's Performance with IES LiveKeep Your Finger on the Pulse of Your Building's Performance with IES Live
Keep Your Finger on the Pulse of Your Building's Performance with IES LiveIES VE
 
The New Cloud World Order Is FinOps (Slideshow)
The New Cloud World Order Is FinOps (Slideshow)The New Cloud World Order Is FinOps (Slideshow)
The New Cloud World Order Is FinOps (Slideshow)codyslingerland1
 
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - TechWebinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - TechProduct School
 
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024Alkin Tezuysal
 
The Importance of Indoor Air Quality (English)
The Importance of Indoor Air Quality (English)The Importance of Indoor Air Quality (English)
The Importance of Indoor Air Quality (English)IES VE
 
.NET 8 ChatBot with Azure OpenAI Services.pptx
.NET 8 ChatBot with Azure OpenAI Services.pptx.NET 8 ChatBot with Azure OpenAI Services.pptx
.NET 8 ChatBot with Azure OpenAI Services.pptxHansamali Gamage
 
Novo Nordisk's journey in developing an open-source application on Neo4j
Novo Nordisk's journey in developing an open-source application on Neo4jNovo Nordisk's journey in developing an open-source application on Neo4j
Novo Nordisk's journey in developing an open-source application on Neo4jNeo4j
 
Introduction to RAG (Retrieval Augmented Generation) and its application
Introduction to RAG (Retrieval Augmented Generation) and its applicationIntroduction to RAG (Retrieval Augmented Generation) and its application
Introduction to RAG (Retrieval Augmented Generation) and its applicationKnoldus Inc.
 
Extra-120324-Visite-Entreprise-icare.pdf
Extra-120324-Visite-Entreprise-icare.pdfExtra-120324-Visite-Entreprise-icare.pdf
Extra-120324-Visite-Entreprise-icare.pdfInfopole1
 
3 Pitfalls Everyone Should Avoid with Cloud Data
3 Pitfalls Everyone Should Avoid with Cloud Data3 Pitfalls Everyone Should Avoid with Cloud Data
3 Pitfalls Everyone Should Avoid with Cloud DataEric D. Schabell
 
UiPath Studio Web workshop series - Day 4
UiPath Studio Web workshop series - Day 4UiPath Studio Web workshop series - Day 4
UiPath Studio Web workshop series - Day 4DianaGray10
 

Recently uploaded (20)

How to become a GDSC Lead GDSC MI AOE.pptx
How to become a GDSC Lead GDSC MI AOE.pptxHow to become a GDSC Lead GDSC MI AOE.pptx
How to become a GDSC Lead GDSC MI AOE.pptx
 
From the origin to the future of Open Source model and business
From the origin to the future of  Open Source model and businessFrom the origin to the future of  Open Source model and business
From the origin to the future of Open Source model and business
 
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedIn
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedInOutage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedIn
Outage Analysis: March 5th/6th 2024 Meta, Comcast, and LinkedIn
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and Insight
 
TrustArc Webinar - How to Live in a Post Third-Party Cookie World
TrustArc Webinar - How to Live in a Post Third-Party Cookie WorldTrustArc Webinar - How to Live in a Post Third-Party Cookie World
TrustArc Webinar - How to Live in a Post Third-Party Cookie World
 
How to release an Open Source Dataweave Library
How to release an Open Source Dataweave LibraryHow to release an Open Source Dataweave Library
How to release an Open Source Dataweave Library
 
Top 10 Squarespace Development Companies
Top 10 Squarespace Development CompaniesTop 10 Squarespace Development Companies
Top 10 Squarespace Development Companies
 
SheDev 2024
SheDev 2024SheDev 2024
SheDev 2024
 
March Patch Tuesday
March Patch TuesdayMarch Patch Tuesday
March Patch Tuesday
 
Keep Your Finger on the Pulse of Your Building's Performance with IES Live
Keep Your Finger on the Pulse of Your Building's Performance with IES LiveKeep Your Finger on the Pulse of Your Building's Performance with IES Live
Keep Your Finger on the Pulse of Your Building's Performance with IES Live
 
The New Cloud World Order Is FinOps (Slideshow)
The New Cloud World Order Is FinOps (Slideshow)The New Cloud World Order Is FinOps (Slideshow)
The New Cloud World Order Is FinOps (Slideshow)
 
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - TechWebinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech
Webinar: The Art of Prioritizing Your Product Roadmap by AWS Sr PM - Tech
 
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024
Design and Modeling for MySQL SCALE 21X Pasadena, CA Mar 2024
 
The Importance of Indoor Air Quality (English)
The Importance of Indoor Air Quality (English)The Importance of Indoor Air Quality (English)
The Importance of Indoor Air Quality (English)
 
.NET 8 ChatBot with Azure OpenAI Services.pptx
.NET 8 ChatBot with Azure OpenAI Services.pptx.NET 8 ChatBot with Azure OpenAI Services.pptx
.NET 8 ChatBot with Azure OpenAI Services.pptx
 
Novo Nordisk's journey in developing an open-source application on Neo4j
Novo Nordisk's journey in developing an open-source application on Neo4jNovo Nordisk's journey in developing an open-source application on Neo4j
Novo Nordisk's journey in developing an open-source application on Neo4j
 
Introduction to RAG (Retrieval Augmented Generation) and its application
Introduction to RAG (Retrieval Augmented Generation) and its applicationIntroduction to RAG (Retrieval Augmented Generation) and its application
Introduction to RAG (Retrieval Augmented Generation) and its application
 
Extra-120324-Visite-Entreprise-icare.pdf
Extra-120324-Visite-Entreprise-icare.pdfExtra-120324-Visite-Entreprise-icare.pdf
Extra-120324-Visite-Entreprise-icare.pdf
 
3 Pitfalls Everyone Should Avoid with Cloud Data
3 Pitfalls Everyone Should Avoid with Cloud Data3 Pitfalls Everyone Should Avoid with Cloud Data
3 Pitfalls Everyone Should Avoid with Cloud Data
 
UiPath Studio Web workshop series - Day 4
UiPath Studio Web workshop series - Day 4UiPath Studio Web workshop series - Day 4
UiPath Studio Web workshop series - Day 4
 

LinkedIn Data Infrastructure (QCon London 2012)

• 7. LinkedIn : Architecture Overview
  •  Our site runs primarily on Java, with some use of Scala for specific infrastructure
      •  What runs on Scala?
          •  Network Graph Service
          •  Kafka
  •  Most of our services run on Apache + Jetty
• 8. LinkedIn : Architecture
  A web page requests information A and B
  •  Presentation Tier – a thin layer focused on building the UI; it assembles the page by making parallel requests to BST services
  •  Business Service Tier (BST) – encapsulates business logic; can call other BST clusters and its own DST cluster
  •  Data Service Tier (DST) – encapsulates DAL logic and is concerned with one Oracle schema
  •  Data Infrastructure – concerned with the persistent storage of, and easy access to, data (Oracle master/slave, Memcached, Voldemort)
• 10–11. LinkedIn : Data Infrastructure Technologies
  This talk focuses on a few of the key technologies below.
  •  Database Technologies
      •  Oracle
      •  Voldemort
      •  Espresso – a new K-V store under development
  •  Data Replication Technologies
      •  Kafka
      •  DataBus
  •  Search Technologies
      •  Zoie – real-time search and indexing with Lucene
      •  Bobo – faceted search library for Lucene
      •  SenseiDB – fast, real-time, faceted, KV and full-text search engine
  … and more
• 12. LinkedIn Data Infrastructure Technologies
  Oracle: Source of Truth for User-Provided Data
• 13. Oracle : Overview
  •  All user-provided data is stored in Oracle – our current source of truth
  •  About 50 schemas running on tens of physical instances
  •  With our user base and traffic growing at an accelerating pace, how do we scale Oracle for user-provided data?
  Scaling Reads
  •  Oracle slaves (c.f. DSC)
  •  Memcached
  •  Voldemort – for key-value lookups
  Scaling Writes
  •  Move to more expensive hardware, or replace Oracle with something else
• 14. Oracle : Overview – Data Service Context
  Scaling Oracle reads using DSC
  •  DSC uses a token (e.g. a cookie) to ensure that a reader always sees his or her own writes immediately
      –  If I update my own status, it is okay if you don’t see the change for a few minutes, but I have to see it immediately
• 15. Oracle : Overview – How Does DSC Work?
  •  When a user writes data to the master, the DSC token (for that data domain) is updated with a timestamp
  •  When the user reads data, we first attempt to read from a replica (a.k.a. slave) database
  •  If the data in the slave is older than the data in the DSC token, we read from the master instead
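To make that routing rule concrete, here is a minimal, hypothetical sketch of the check the slide describes; DscAwareReader, DataSourceSet, and the timestamp fields are illustrative names, not LinkedIn's actual DSC implementation.

```java
import java.sql.Connection;

// Hypothetical sketch of DSC-style read routing; these names are illustrative only.
public class DscAwareReader {

    private final DataSourceSet dataSources; // hypothetical wrapper around master + replica pools

    public DscAwareReader(DataSourceSet dataSources) {
        this.dataSources = dataSources;
    }

    /**
     * dscTokenTimestamp: the last-write timestamp stored in the user's DSC token (e.g. a cookie)
     * for this data domain. replicaHighWaterMark: how far the replica has replayed the master's log.
     */
    public Connection chooseConnection(long dscTokenTimestamp, long replicaHighWaterMark) {
        if (replicaHighWaterMark >= dscTokenTimestamp) {
            // The replica has caught up past the user's own last write: safe to read from it.
            return dataSources.replicaConnection();
        }
        // Otherwise fall back to the master, so the user always sees their own writes.
        return dataSources.masterConnection();
    }

    /** Called on the write path: record the write time to store in the token for this data domain. */
    public long refreshToken() {
        return System.currentTimeMillis();
    }

    // Hypothetical interface standing in for the master/replica connection pools.
    public interface DataSourceSet {
        Connection masterConnection();
        Connection replicaConnection();
    }
}
```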
• 16. LinkedIn Data Infrastructure Technologies
  Voldemort: Highly-Available Distributed Data Store
• 17. Voldemort : Overview
  •  A distributed, persistent key-value store influenced by the AWS Dynamo paper
  •  Key features of Dynamo
      •  Highly scalable, available, and performant
      •  Achieves this via tunable consistency: for higher consistency, the user accepts lower availability, scalability, and performance, and vice versa
      •  Provides several self-healing mechanisms for when data does become inconsistent
          –  Read Repair – repairs the value for a key when the key is looked up/read
          –  Hinted Handoff – buffers the value for a key that wasn’t successfully written, then writes it later
          –  Anti-Entropy Repair – scans the entire data set on a node and fixes it
      •  Provides a means to detect node failure and a means to recover from node failure
          –  Failure Detection
          –  Bootstrapping New Nodes
• 18. Voldemort : Overview
  API
      VectorClock<V> get (K key)
      put (K key, VectorClock<V> value)
      applyUpdate(UpdateAction action, int retries)
  Layered, pluggable architecture
      Client layers: Client API, Conflict Resolution, Serialization, Repair Mechanism, Failure Detector, Routing
      Server layers: Repair Mechanism, Failure Detector, Routing, Admin, Storage Engine
  Voldemort-specific features
  •  Implements a layered, pluggable architecture
  •  Each layer implements a common interface (c.f. API), which allows us to replace or remove implementations at any layer
  •  Pluggable data storage layer
      –  BDB JE, custom RO storage, etc.
  •  Pluggable routing
      –  Single- or multi-datacenter routing
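For readers who want to see this API in practice, the open-source Voldemort Java client exposes it through StoreClient; the following is a small sketch adapted from the project's quickstart, with a placeholder bootstrap URL, store name, and key, and with versions carried via Versioned rather than the raw VectorClock shown on the slide.

```java
import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;
import voldemort.versioning.Versioned;

public class VoldemortQuickstart {
    public static void main(String[] args) {
        // Bootstrap against any node in the cluster; it serves the cluster and store metadata.
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));

        // "test" is a placeholder store name defined in stores.xml.
        StoreClient<String, String> client = factory.getStoreClient("test");

        // Reads return a Versioned wrapper carrying the vector clock alongside the value.
        Versioned<String> found = client.get("member:42");
        if (found == null) {
            // First write for this key.
            client.put("member:42", "Sid Anand");
        } else {
            // Update in place; the client attaches the existing version on put.
            found.setObject("Sid Anand (updated)");
            client.put("member:42", found);
        }
    }
}
```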
• 19. Voldemort : Overview
  Layered, pluggable architecture
  Voldemort-specific features
  •  Supports a fat client or a fat server
      –  The repair mechanism, failure detector, and routing can run on either the server or the client
  •  LinkedIn currently runs the fat client, but we would like to move to a fat-server model
• 20. Where Does LinkedIn Use Voldemort?
• 21. Voldemort : Usage Patterns @ LinkedIn
  Two usage patterns
  •  Read-write store
      –  Uses BDB JE for the storage engine
      –  50% of Voldemort stores (a.k.a. tables) are RW
  •  Read-only store
      –  Uses a custom read-only format
      –  50% of Voldemort stores (a.k.a. tables) are RO
  Let’s look at the RO store
• 22. Voldemort : RO Store Usage at LinkedIn
  •  People You May Know
  •  Viewers of this profile also viewed
  •  Related Searches
  •  Events you may be interested in
  •  LinkedIn Skills
  •  Jobs you may be interested in
• 23. Voldemort : Usage Patterns @ LinkedIn
  RO store usage pattern
  1.  Use Hadoop to build a model
  2.  Voldemort loads the output of Hadoop
  3.  Voldemort serves fast key-value look-ups on the site
      –  e.g. for key = “Sid Anand”, get all the people that “Sid Anand” may know
      –  e.g. for key = “Sid Anand”, get all the jobs that “Sid Anand” may be interested in
• 24. How Do The Voldemort RO Stores Perform?
• 25. Voldemort : RO Store Performance : Throughput vs. Latency
  [Chart: median and 99th-percentile read latency (ms) vs. throughput (qps) for MySQL vs. Voldemort; 100 GB data, 24 GB RAM]
• 26. LinkedIn Data Infrastructure Solutions
  Databus: Timeline-Consistent Change Data Capture
• 27. Where Does LinkedIn Use DataBus?
• 28. DataBus : Use-Cases @ LinkedIn
  [Diagram: Oracle data-change events fan out to the Standardization, Search Index, Graph Index, and Read Replica services]
  A user updates his profile with skills and position history. He also accepts a connection.
  •  The write is made to an Oracle master, and DataBus replicates:
      •  the profile change to the Standardization service
          –  e.g. the many forms of “IBM” are canonicalized for search-friendliness and recommendation-friendliness
      •  the profile change to the Search Index service
          –  Recruiters can find you immediately by new keywords
      •  the connection change to the Graph Index service
          –  The user can now start receiving feed updates from his new connections immediately
• 29. DataBus : Architecture
  DataBus consists of two services:
  •  Relay service
      •  Sharded; maintains an in-memory buffer per shard
      •  Each shard polls Oracle and then deserializes transactions into Avro
  •  Bootstrap service
      •  Picks up online changes as they appear in the Relay
      •  Supports two types of operations from clients
          –  If a client falls behind and needs records older than what the Relay has, Bootstrap can send consolidated deltas
          –  If a new client comes online, Bootstrap can send a consistent snapshot
• 30. DataBus : Architecture
  [Diagram: the DB’s on-line changes flow through the Relay’s event window to consumers via a client library; consumers that fall behind fetch a consolidated delta since time T, or a consistent snapshot at time U, from the Bootstrap service]
  Guarantees
  •  Transactional semantics
  •  In-commit-order delivery
  •  At-least-once delivery
  •  Durability (by data source)
  •  High availability and reliability
  •  Low latency
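These guarantees shape how a consumer is written: delivery is at-least-once and in commit order, so the consumer should be idempotent and checkpoint its progress. The sketch below is hypothetical; the ChangeEvent record, SearchIndexUpdater class, and method names are illustrative and are not the actual Databus client API.

```java
// Hypothetical sketch of a Databus-style change consumer (illustrative names only).
public class SearchIndexUpdater {

    /** A captured change event: source table, commit sequence number, and the Avro payload. */
    public record ChangeEvent(String source, long commitSequence, byte[] avroPayload) {}

    private long lastAppliedSequence = -1;

    /**
     * Events arrive in commit order and at-least-once, so processing must be idempotent:
     * a redelivered event whose sequence number was already applied is simply skipped.
     */
    public void onEvent(ChangeEvent event) {
        if (event.commitSequence() <= lastAppliedSequence) {
            return; // duplicate delivery after a retry; safe to ignore
        }
        applyToIndex(event);                 // e.g. update the member's search document
        lastAppliedSequence = event.commitSequence();
        checkpoint(lastAppliedSequence);     // lets the relay/bootstrap resume from here after a restart
    }

    private void applyToIndex(ChangeEvent event) { /* decode Avro and update the index */ }

    private void checkpoint(long sequence) { /* persist progress durably */ }
}
```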
• 31. DataBus : Architecture – Bootstrap
  •  Generate consistent snapshots and consolidated deltas during continuous updates, using long-running queries
  [Diagram: the Bootstrap server’s Log Writer reads recent events from the Relay’s event window; a Log Applier replays them from Log Storage into Snapshot Storage, so clients can read either consolidated deltas or a consistent snapshot]
• 32. LinkedIn Data Infrastructure Solutions
  Kafka: High-Volume Low-Latency Messaging System
• 33. Kafka : Usage at LinkedIn
  Whereas DataBus is used for database change capture and replication, Kafka is used for application-level data streams.
  Examples:
  •  End-user action tracking (a.k.a. web tracking)
      •  Emails opened
      •  Pages seen
      •  Links followed
      •  Searches executed
  •  Operational metrics
      •  Network & system metrics, such as
          –  TCP metrics (connection resets, message resends, etc.)
          –  System metrics (iops, CPU, load average, etc.)
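To illustrate how such a tracking event might be published, here is a minimal sketch using today's Kafka Java client, which postdates this 2012 deck; the broker address, topic name, key, and JSON payload are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PageViewTracker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // One tracking event per user action, keyed by member id so a member's
            // events land on the same partition and stay ordered.
            producer.send(new ProducerRecord<>("page-views", "member-42",
                    "{\"page\":\"/in/sidanand\",\"ts\":1331078400000}"));
        }
    }
}
```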
• 34. Kafka : Overview
  [Diagram: the web tier pushes events into a broker tier of Kafka topics (sequential writes, ~100 MB/sec in); consumers pull via iterators in a client library (sendfile, ~200 MB/sec out); topic/partition offsets and consumer ownership are managed in Zookeeper]
  Features
  •  Pub/Sub
  •  Batch send/receive
  •  System decoupling
  Guarantees
  •  At-least-once delivery
  •  Very high throughput
  •  Low latency
  •  Durability
  •  Horizontally scalable
  Scale
  •  Billions of events, TBs per day
  •  Inter-colo latency: a few seconds
  •  Typical retention: weeks
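On the consumer side, the pull model the slide depicts looks roughly like the poll loop below. This sketch uses the modern Kafka Java client rather than the 2012-era consumer iterator API, and the broker address, group id, and topic name are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PageViewConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "metrics-aggregator");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views"));
            while (true) {
                // Consumers pull batches at their own pace; the broker never pushes.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```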
• 35. Kafka : Overview
  Key design choices
  •  When reading from a file and sending to a network socket, we typically incur 4 buffer copies and 2 OS system calls
      –  Kafka leverages a sendfile API to eliminate 2 of the buffer copies and 1 of the system calls
  •  No double-buffering of messages – we rely on the OS page cache and do not store a copy of the message in the JVM
      –  Less pressure on memory and GC
      –  If the Kafka process is restarted on a machine, recently accessed messages are still in the page cache, so we get the benefit of a warm start
  •  Kafka doesn’t keep track of which messages have yet to be consumed – i.e. no bookkeeping overhead
      –  Instead, messages have a time-based SLA expiration – after 7 days, messages are deleted
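As a concrete illustration of the sendfile point, the JVM exposes zero-copy transfer through FileChannel.transferTo, which maps to sendfile(2) on Linux, so bytes can move from the page cache to a socket without being copied into user space. The sketch below is illustrative only; the log-segment path and destination address are placeholders, and this is not Kafka's actual code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        try (FileChannel log = FileChannel.open(Path.of("/tmp/topic-0.log"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("consumer-host", 9999))) {
            long position = 0;
            long remaining = log.size();
            while (remaining > 0) {
                // transferTo hands the copy to the kernel (sendfile), avoiding user-space buffers.
                long sent = log.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```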
• 36. How Does Kafka Perform?
• 37. Kafka : Performance : Throughput vs. Latency
  [Chart: consumer latency in ms vs. producer throughput in MB/sec; 100 topics, 1 producer, 1 broker]
• 38. Kafka : Performance : Linear Incremental Scalability
  (10 topics, broker flush interval 100K)
  •  1 broker: 101 MB/s
  •  2 brokers: 190 MB/s
  •  3 brokers: 293 MB/s
  •  4 brokers: 381 MB/s
• 39. Kafka : Performance : Resilience as Messages Pile Up
  [Chart: throughput in msg/s vs. unconsumed data in GB; 1 topic, broker flush interval 10K]
• 40. Acknowledgments
  Presentation & content
  •  Chavdar Botev (DataBus) @cbotev
  •  Roshan Sumbaly (Voldemort) @rsumbaly
  •  Neha Narkhede (Kafka) @nehanarkhede
  Development team
  Aditya Auradkar, Chavdar Botev, Shirshanka Das, Dave DeMaagd, Alex Feinberg, Phanindra Ganti, Lei Gao, Bhaskar Ghosh, Kishore Gopalakrishna, Brendan Harris, Joel Koshy, Kevin Krawez, Jay Kreps, Shi Lu, Sunil Nagaraj, Neha Narkhede, Sasha Pachev, Igor Perisic, Lin Qiao, Tom Quiggle, Jun Rao, Bob Schulman, Abraham Sebastian, Oliver Seeliger, Adam Silberstein, Boris Skolnick, Chinmay Soman, Roshan Sumbaly, Kapil Surlaker, Sajid Topiwala, Balaji Varadarajan, Jemiah Westerman, Zach White, David Zhang, and Jason Zhang
• 41. Questions?