3. Overview
• Problem Statement
• Solution – how LinkedIn shifts traffic
• Datacenter shifting
• PoP steering
• Challenges of APAC region
• IPv4 vs IPv6
• Questions
4. $ whoami
Michael Kehoe
• Staff Site Reliability Engineer (SRE) @ LinkedIn
• Production-SRE team
• Funny accent = Australian + 3 years American
5. $ whatis SRE
• Site Reliability Engineering
• Operations for the production application environment
• Responsibilities include
• Architecture design
• Capacity planning
• Operations
• Tooling
• My responsibilities also include DNS/CDN management & traffic infrastructure
6. Terminology
• PoP – Point of Presence; where LinkedIn terminates incoming requests
• Fabric – Datacenter with full LinkedIn production stack deployed
• Loadtest – Stress test of a Fabric to simulate a disaster scenario
7. Disaster Recovery
Problem Statement
• Fail between Fabrics
• Performance of applications is degraded
• Validate disaster recovery (DR) scenario
• Expose bugs and suboptimal configurations via loadtest
• Planned maintenance
• Fail between PoPs
• Mitigate impact of 3rd-party provider maintenance/failure (e.g. transport links)
• Software/configuration bugs
8. Performance
Problem Statement
• Fabric Assignment
• Assign a preferred and a secondary fabric to every member (sketch below) based on:
• Member location
• Capacity
• PoP/CDN steering
• Use GeoDNS to steer users to the ‘best’ PoP
• Use RUM DNS to steer users to the ‘best’ CDN
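A minimal sketch of the fabric-assignment idea above, assuming a hypothetical RTT table and a simple capacity rule; all names, numbers, and thresholds are illustrative, not LinkedIn's actual algorithm:

# Pick the closest fabric with spare capacity as preferred, the next
# closest as secondary. All names and numbers are illustrative.

# Approximate round-trip time (ms) from a member region to each fabric.
RTT_MS = {
    "us-west": {"fabric-west": 20, "fabric-east": 80},
    "us-east": {"fabric-west": 80, "fabric-east": 20},
}

# Fraction of each fabric's capacity already assigned.
UTILIZATION = {"fabric-west": 0.65, "fabric-east": 0.40}
CAPACITY_LIMIT = 0.85  # stop assigning new members past this point

def assign_fabrics(member_region):
    """Return (preferred, secondary) fabrics for a member."""
    # Order fabrics by proximity to the member.
    by_rtt = sorted(RTT_MS[member_region], key=RTT_MS[member_region].get)
    # Preferred: closest fabric that still has capacity headroom.
    preferred = next(f for f in by_rtt if UTILIZATION[f] < CAPACITY_LIMIT)
    # Secondary: closest remaining fabric, used when the preferred one is failed out.
    secondary = next(f for f in by_rtt if f != preferred)
    return preferred, secondary

print(assign_fabrics("us-west"))  # ('fabric-west', 'fabric-east')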
12. Site Speed
Problem Statement
• Site Speed affects User Engagement
• User Engagement affects page-views & transactions
• Bottom Line: Site Speed has an impact on revenue
13. Problem Statement
[Chart: Site Speed affects User Engagement]
16. Fabric shifting
Solution
• Stickyrouting
• Using a Hadoop job, we calculate a primary and a secondary datacenter for each user based on location
• This data is stored in a key-value store (Espresso)
• Stickyrouting serves this information over a RESTful interface to our edge PoPs (sketch below)
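A hedged sketch of what the edge-side lookup might look like; the endpoint URL, payload fields, and the use of the requests library are assumptions for illustration (per the slide, the real data is computed offline in Hadoop and stored in Espresso):

import requests

# Hypothetical stickyrouting endpoint queried by edge PoPs.
STICKYROUTING_URL = "https://stickyrouting.example.com/assignments"

def lookup_fabrics(member_id):
    """Fetch the precomputed primary/secondary fabric for a member."""
    resp = requests.get(f"{STICKYROUTING_URL}/{member_id}", timeout=0.05)
    resp.raise_for_status()
    # Illustrative payload: {"primary": "fabric-east", "secondary": "fabric-west"}
    return resp.json()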
17. Fabric shifting
Solution
• Different traffic types are partitioned and controlled separately
• Logged-in vs logged-out
• CDNs
• Monitoring
• Microsites
• Logged-in users are placed into ‘buckets’
• Buckets are marked online/offline to move site traffic (sketch below)
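An illustrative sketch of bucket-based shifting, assuming simple modulo bucketing and a per-fabric offline set; the bucket count and fabric names are made up:

NUM_BUCKETS = 100
OFFLINE_BUCKETS = {"fabric-east": set()}  # buckets failed out, per fabric

def bucket_for(member_id):
    return member_id % NUM_BUCKETS

def route(member_id, primary, secondary):
    """Send a member to their primary fabric unless their bucket is offline."""
    if bucket_for(member_id) in OFFLINE_BUCKETS.get(primary, set()):
        return secondary
    return primary

# Shift 25% of fabric-east's logged-in traffic: mark buckets 0-24 offline.
OFFLINE_BUCKETS["fabric-east"].update(range(25))
print(route(12, "fabric-east", "fabric-west"))  # 'fabric-west' (bucket 12 offline)
print(route(42, "fabric-east", "fabric-west"))  # 'fabric-east' (bucket 42 online)

Marking buckets offline in fractions like this is also how a percentage of users can be assigned to, or drained from, a datacenter, per the capacity-management benefit on the next slide.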
18. Fabric shifting
Solution
• Stickyrouting – Benefits
• Ensure we serve the request as close to the user as possible
• Capacity management for datacenters
• We can assign a percentage of users to a datacenter
• Enables personal data routing (PDR)
• Only store data where we need it
26. LinkedIn’s PoP Architecture
Solution
• Using IPVS, each PoP announces a unicast address and a regional anycast address
• APAC, EU and NAMER anycast regions
• Use GeoDNS to steer users to the ‘best’ PoP
• DNS provides users with either an anycast or a unicast address for www.linkedin.com (sketch below)
• US and EU members are served almost entirely via anycast
• APAC is all unicast
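A minimal sketch of the anycast-vs-unicast answer described above, using made-up documentation addresses and PoP names; real GeoDNS answers come from the authoritative DNS provider, not application code:

# Regional anycast addresses and per-PoP unicast addresses (illustrative).
ANYCAST = {"NAMER": "203.0.113.1", "EU": "203.0.113.2"}
UNICAST = {"pop-singapore": "198.51.100.1", "pop-mumbai": "198.51.100.2"}

def answer_for(member_region, best_pop):
    """Return the A record GeoDNS would serve for www.linkedin.com."""
    if member_region in ANYCAST:
        return ANYCAST[member_region]  # BGP routes to the nearest regional PoP
    return UNICAST[best_pop]           # APAC: steer to an explicit PoP

print(answer_for("EU", "pop-singapore"))  # anycast: 203.0.113.2
print(answer_for("APAC", "pop-mumbai"))   # unicast: 198.51.100.2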
27. LinkedIn’s PoP DR
Solution
• We sometimes need to fail out of PoPs
• 3rd-party provider issues (e.g. transit links going down)
• Infrastructure maintenance
• Withdraw anycast route announcements
• Fail healthchecks on the proxy to drain unicast traffic (sketch below)
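A hedged sketch of the unicast drain: the proxy serves a healthcheck that DNS/load-balancer probes poll, and flipping it to failing moves new traffic away while existing connections finish. The file-based toggle and its path are assumptions:

import os

DRAIN_FLAG = "/etc/proxy/drain"  # hypothetical flag file checked by the proxy

def healthcheck():
    """Status the proxy's healthcheck endpoint would return."""
    if os.path.exists(DRAIN_FLAG):
        return 503, "DRAINING"  # probes fail; traffic is steered elsewhere
    return 200, "OK"

def start_drain():
    # Creating the flag file fails subsequent healthchecks, draining
    # unicast traffic from this PoP without dropping live connections.
    open(DRAIN_FLAG, "w").close()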
28. LinkedIn’s PoP Performance
Solution
• PoP DNS steering
• LinkedIn currently uses GeoDNS for routing
• Piloting RUM DNS
• Pick the best PoP based on network performance, not country
• CDN steering
• Mix CDNs to get the best performance
• Constantly evaluate performance/availability
• Automatically adjust CDN weighting (sketch below)
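An illustrative sketch of RUM-driven CDN weighting, assuming an inverse-latency score and an availability floor; the formula, thresholds, and numbers are invented for illustration:

# Recent RUM samples per CDN: median object download time and availability.
RUM = {
    "cdn-a": {"p50_ms": 120, "availability": 0.999},
    "cdn-b": {"p50_ms": 95, "availability": 0.998},
    "cdn-c": {"p50_ms": 140, "availability": 0.970},
}
MIN_AVAILABILITY = 0.995

def cdn_weights(rum):
    """Weight CDNs by inverse latency; drop any below the availability floor."""
    scores = {
        cdn: 1.0 / m["p50_ms"]
        for cdn, m in rum.items()
        if m["availability"] >= MIN_AVAILABILITY
    }
    total = sum(scores.values())
    return {cdn: round(s / total, 2) for cdn, s in scores.items()}

print(cdn_weights(RUM))  # {'cdn-a': 0.44, 'cdn-b': 0.56} -- cdn-c dropped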
30. Working around fiber cuts
APAC Challenges
• Case Study: Fail out of India PoP due to fiber cuts
[Chart: Connection time for Indian members, 90th percentile]
31. GeoDNS Suboptimal PoPs
APAC Challenges
[Map: Bay of Bengal Gateway; ASN 15802 and ASN 5384 paths between the Mumbai and Singapore PoPs; link RTTs annotated: 45 ms, 220 ms, 70 ms]
Source: http://www.submarinecablemap.com/#/submarine-cable/bay-of-bengal-gateway-bbg
• ASN 15802 RTT to Singapore is 220 + 70 = 290 ms (all at the 50th percentile)
32. GeoDNS Suboptimal PoPs
APAC Challenges
[Map: ASN 15802 and ASN 5384 paths spanning Mumbai, Singapore, Hong Kong, London and Dublin; link RTTs annotated: 35 ms, 45 ms, 70 ms, 160 ms, 160 ms, 350 ms]
34. Performance & Adoption
IPv4 vs IPv6
• IPv6 performs better for our members
• Fewer request time-outs on IPv6 for mobile users
• Mobile carriers are adopting IPv6 faster
• Win for LinkedIn and our members!
• In July 2014 (IPv6 launch): 3% of traffic was IPv6
• Today: ~12% of traffic is IPv6
35. Key Takeaways
Conclusion
• Application-level traffic engineering is extremely important for content providers
• RUM data is extremely useful for finding anomalies
• Route traffic based on performance, not just location
• IPv6 performs better for LinkedIn users