Real-time systems at Twitter (Velocity 2012)
 


Our talk covers the migration of the Twitter architecture from primarily Ruby on Rails (RoR) to a JVM-based SOA system with emphasis on high performance, scalability, and resilience to failure. General lessons include the advantages of asynchronous, real-time architectures over synchronous, process/thread-oriented systems, as well as caching and data store patterns.

Presentation Transcript

    • real-time systems @twitter @raffi & @a_a velocity 2012
• [Diagram: the legacy stack by layer. ROUTING / PRESENTATION / LOGIC: Monorail; STORAGE & RETRIEVAL: T-Bird, T-Flock + Haplo, Darkwing, Flock(s)]
• what are the big problems?
  ⇢ monolithic application
  ⇢ lack of self-service infrastructure
  ⇢ painful to add new services & features
• what did we want to achieve?
  ⇢ big infrastructure wins in speed, efficiency, reliability
  ⇢ separation of concerns
  ⇢ team independence
• stats.timeFuture("request_latency_ms") {
    // dispatch to do work
  }
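The timeFuture call above is Finagle's StatsReceiver API for timing asynchronous work: the stat is recorded when the returned Future completes, not when the block returns. A minimal sketch of wrapping a service dispatch this way, assuming finagle-core on the classpath (TimedDispatch and backend are hypothetical names, not from the talk):

  import com.twitter.finagle.Service
  import com.twitter.finagle.stats.StatsReceiver
  import com.twitter.util.Future

  // Hypothetical wrapper: reports the elapsed milliseconds of every
  // dispatch under the "request_latency_ms" stat.
  class TimedDispatch[Req, Rep](
      backend: Service[Req, Rep],
      stats: StatsReceiver)
    extends Service[Req, Rep] {

    def apply(req: Req): Future[Rep] =
      stats.timeFuture("request_latency_ms") {
        backend(req) // dispatch to do work
      }
  }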
• ts(AVG, timelineservice, audubon.role.timelineservice,
    service/client/woodstar.prod/getStatusTimeline/request_latency_ms.p50)
• [Diagram: TFE (HTTP Proxy) → Woodstar]
• [Diagram: TFE (HTTP Proxy) → Monorail, Woodstar]
• [Diagram: TFE routing 100% of traffic to Monorail, 0% to Woodstar]
• [Diagram: TFE routing 0% of traffic to Monorail, 100% to Woodstar]
• [Diagram: TFE (HTTP Proxy) → Woodstar]
• [Diagram: TFE (HTTP Proxy) → Woodstar → Tweetypie (Tweet Service), Gizmoduck (User Service), Timeline Service]
• network substrate
  ⇢ connection management
  ⇢ protocol codecs
  ⇢ transient error handling
  ⇢ service discovery
  ⇢ observability
• ServerBuilder()
    .name("ServiceName")
    .reportTo(statsReceiver)
    .tracer(ZipkinTracer())
    .codec(Http())
    .maxConcurrentRequests(1000)
    .requestTimeout(500.milliseconds)
    .build(Service[Request, Response])
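The builder pattern has a client-side twin covering the same substrate: connection management, codecs, timeouts, and stats. A hedged sketch against the era's ClientBuilder API; the host address, limits, and retry count are illustrative values, and statsReceiver is assumed in scope:

  import com.twitter.conversions.time._
  import com.twitter.finagle.builder.ClientBuilder
  import com.twitter.finagle.http.Http

  // Illustrative client configuration mirroring the server knobs above.
  val client = ClientBuilder()
    .codec(Http())
    .hosts("woodstar.prod.local:8080") // hypothetical address
    .hostConnectionLimit(10)           // connection management
    .tcpConnectTimeout(1.second)
    .requestTimeout(500.milliseconds)
    .retries(2)                        // transient error handling
    .reportTo(statsReceiver)           // observability
    .build()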
• [Diagram: TFE (HTTP Proxy) → Woodstar → Tweetypie (Tweet Service), Gizmoduck (User Service), Timeline Service]
• [Diagram: TFE (HTTP Proxy) → Monorail, Woodstar]
• [Diagram: TFE (HTTP Proxy) → Macaw+Activity, Macaw+Search, Macaw+Logging, Woodstar]
• class EchoLoadTest(service: ParrotThriftService)
      extends RecordProcessor {
    val client = new EchoService.ServiceToClient(
      service, new TBinaryProtocol.Factory())

    def processLines(job: ParrotJob, lines: Seq[String]) {
      lines.map(client.echo(_))
    }
  }
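ParrotJob, RecordProcessor, and ParrotThriftService come from Parrot, the engine inside Twitter's Iago load-testing library. processLines is invoked for each batch of replayed log lines, and lines.map(client.echo(_)) issues one asynchronous Thrift call per line without awaiting the returned Futures, so the generator holds its configured send rate instead of running in lockstep with responses.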
• [Diagram: TFE (HTTP Proxy) → Monorail]
• [Diagram: TFE (HTTP Proxy) → Monorail, Woodstar]
• [Diagram: the legacy stack by layer. ROUTING / PRESENTATION / LOGIC: Monorail; STORAGE & RETRIEVAL: T-Bird, T-Flock + Haplo, Darkwing, Flock(s)]
• [Diagram: the new stack by layer. ROUTING: TFE; PRESENTATION: Monorail, Woodstar, Macaw+Swift, Macaw+Disco; LOGIC: Tweetypie, Gizmoduck, TLS, Social Graph Service, Story Service; STORAGE & RETRIEVAL: T-Bird, T-Flock + Haplo, Darkwing, Flock(s)]
• where are we?
  ⇢ team organization that mimics the software stack
  ⇢ able to launch massive features in parallel
• [Chart: request latency (ms) at p50, p95, p999 for the mentions, statuses/show, and users/show endpoints; y-axes scaled to 3000, 500, and 400 ms respectively]
• some more statistics
  ⇢ 45% of traffic on the JVM stack
  ⇢ we’re a lot faster
  ⇢ we’re a lot more reliable
  ⇢ we fix bugs faster
  ⇢ 12 deploys, yesterday
    • #JoinTheFlock