Netflix API
  • Netflix’s goal is to build the best product in the world for streaming TV shows and movies.
  • We now have more than 36 million global subscribers in more than 50 countries and territories.
  • Those subscribers consume more than a billion hours of streaming video a month. Moreover, according to Sandvine, Netflix is responsible for delivering more than 33% of the peak Internet traffic in the US.
  • And we are now producing a fleet of original series, being released throughout 2013, starting with House of Cards (released on February 1st).
  • And we recently followed House of Cards with our first original horror TV series, called Hemlock Grove.
  • And in May, we will release all 15 episodes of Arrested Development, season 4!
  • All 36 million of Netflix’s subscribers are watching shows (like House of Cards) and movies on virtually any device that has a streaming video screen. We are now on more than 1000 different device types, most of which are supported by the Netflix API, to be discussed throughout this presentation.
  • To better understand the strategy, I should explain the basics of what the Netflix API supports. There are two types of interactions between Netflix customers and our streaming application… Discovery and Streaming.
  • Discovery is basically any interaction with a title other than streaming it. That includes browsing titles, looking for something to watch, etc.
  • It also includes actions such as rating the title, adding it to your instant queue, etc.
  • Once the customer has identified a title to watch through the Discovery experience, the user can then play that title. Once the Play button is selected, the customer is sent to a different internal service that focuses on handling the streaming. That streaming service also interacts with our CDNs (including Open Connect) to actually deliver the streaming bits to the device for playback.
  • Today, the API powers the Discovery experience. The rest of these slides will only focus on Discovery, not the Streaming of the actual video bits.
  • All of this started with the launch of streaming in 2007. At the time, we were only streaming on computer-based players (i.e., no set-top devices, mobile phones, etc.).
  • Shortly after streaming launched, in 2008, we launched our REST API. I describe it as a One-Size-Fits-All (OSFA) type of implementation because the API itself sets the rules and requires anyone who interfaces with it to adhere to those rules. Everyone is treated the same.
  • The OSFA REST API launched to support the 1,000 flowers model. That is, we would plant the seeds in the ground (by providing access to our content) and see what flowers sprout up in the myriad fields throughout the US. The 1,000 flowers are public API developers.
  • And at launch, the API was exclusively targeted towards and consumed by the 1,000 flowers (i.e., external developers).
  • Some examples of the flowers…
  • But as streaming gained more traction…
  • The API evolved to support more of the devices that were getting built. The 1,000 flowers were still supported as well, but as the devices ramped up, they became a bigger focus.
  • Meanwhile, the balance of requests by audience had completely flipped. Overwhelmingly, the majority of traffic was coming from Netflix-ready devices, and a shrinking percentage was from the 1,000 flowers. Today, the 1,000 flowers account for less than 0.1% of the API traffic.
  • I like to think of the Netflix engineering teams that support development and innovation for Discovery as being shaped like an hourglass…
  • In the top end of the hourglass, we have our device and UI teams who build out great user experiences on Netflix-branded devices. There are currently more than 1000 different device types that we support. To put that into perspective, we support a few hundred more device types than there are engineers at Netflix.
  • At the bottom end of the hourglass, there are several dozen dependency teams who focus on things like metadata, algorithms, authentication services, A/B test engines, etc.
  • The API is at the center of the hourglass, acting as a broker of data.
  • As the audience of the API changed, so did its use cases. We started to realize that the original design for the API was not as effective as it could be in satisfying the newer, more complicated, and more business-critical users (the device UI teams). We began inspecting the various ways in which the system was creating problems for us so that we could create a more effective design.
  • We did a significant review of the API and focused our discussion on these three areas.
  • With the adoption of devices, API traffic took off! We went from about 600 million requests per month to about 42 billion requests per month in just two years.
  • Today, we are doing more than 2B incoming requests per day. That kind of growth and those kinds of numbers seem great. Who wouldn’t want those numbers, right?
  • Especially if you are an organization like NPR serving web pages that have ads on them. If NPR.org were serving 2B requests a day, each of those requests would create ad impressions, which translates into revenue (and potentially increased CPM at those levels).
  • But the API traffic is not serving pages with ads. Rather, we are delivering documents like this, in the form of XML…
  • Or like this, in the form of JSON.
  • Growth in traffic, especially if it were to continue at this rate, does not directly translate into revenue. Instead, it is more likely to translate into costs. Supporting massive traffic requires major infrastructure to support the load, expenses in delivering the bits, engineering costs to build and support more complex systems, etc.
  • So our first realization was that we could significantly reduce the chattiness between the devices and the API while maintaining the same or better user experience. Rather than handling 2 billion requests per day, could we deliver the same UI with 300 million instead? Or fewer? Could more optimized delivery of the metadata also improve performance and experience for our customers?
  • With more than 1000 different device types supported, we learned that the variability across them can also play a role in some of that chattiness. Different devices have different characteristics and capabilities that could influence the interaction model with the API.
  • For example, screen size could significantly affect what the API should deliver to the UI. TVs with bigger screens can potentially fit more titles and more metadata per title than a mobile phone. Do we need to send all of the extra bits for fields or items that are not needed, requiring the device itself to drop items on the floor? Or can we optimize the delivery of those bits on a per-device basis?
  • Different devices have different controlling functions as well. For devices with swipe technologies, such as the iPad, do we need to pre-load a lot of extra titles in case a user swipes the row quickly to see the last of 500 titles in their queue? Or for up-down-left-right controllers, would devices be more optimized by fetching a few items at a time when they are needed? Other devices support voice or hand gestures or pointer technologies. How might those impact the user experience and therefore the metadata needed to support them?
  • The technical specs on these devices differ greatly. Some have significant memory space while others do not, impacting how much data can be handled at a given time. Processing power and hard-drive space could also play a role in how the UI performs, in turn potentially influencing the optimal way for fetching content from the API. All of these differences could result in different potential optimizations across these devices.
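    To make those per-device differences concrete, here is a minimal, hypothetical Java sketch of response shaping. DeviceProfile, TitleMetadata, and ResponseShaper are illustrative names, not actual Netflix classes, and the capacity trade-offs are assumptions:

        import java.util.ArrayList;
        import java.util.List;

        class TitleMetadata {
            String id, title, synopsis;
            TitleMetadata(String id, String title, String synopsis) {
                this.id = id; this.title = title; this.synopsis = synopsis;
            }
            // keep only the fields a constrained device can actually use
            TitleMetadata withoutSynopsis() { return new TitleMetadata(id, title, null); }
        }

        class DeviceProfile {
            final int maxTitlesPerRow;   // e.g., a TV row fits more titles than a phone row
            final boolean wantsSynopsis; // small screens may never render long text fields
            DeviceProfile(int maxTitlesPerRow, boolean wantsSynopsis) {
                this.maxTitlesPerRow = maxTitlesPerRow;
                this.wantsSynopsis = wantsSynopsis;
            }
        }

        class ResponseShaper {
            // Trim a row of titles to what the device can display,
            // rather than shipping a one-size-fits-all payload.
            static List<TitleMetadata> shape(List<TitleMetadata> row, DeviceProfile device) {
                List<TitleMetadata> shaped = new ArrayList<>();
                for (TitleMetadata t : row) {
                    if (shaped.size() >= device.maxTitlesPerRow) break;
                    shaped.add(device.wantsSynopsis ? t : t.withoutSynopsis());
                }
                return shaped;
            }
        }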
  • Finally, the OSFA model also seemed to slow the innovation rate of our various UI teams (as well as the API team itself). This became one of the most important considerations in our research.
  • Many UI teams needing metadata means many feature requests of the API team. In the OSFA world, we essentially needed to funnel these requests and then prioritize them. That meant that some teams would need to wait for API work to be done. It also meant that, because they all shared the same endpoints, we were often adding variations to the endpoints, resulting in a more complex system as well as a lot of spaghetti code. Making teams wait due to prioritization was exacerbated by the fact that tasks took longer as technical debt increased, causing build and test times to grow. Moreover, many of the incoming requests were asking us to do more of the same kinds of customizations. This created a spiral that would be very difficult to break out of…
  • All of these aforementioned issues are essentially anomalies in the current OSFA paradigm. For us, these anomalies carve a path for a revolution (meaning, an opportunity for us to overthrow our current OSFA paradigm with a solution that makes up for the OSFA deficiencies).
  • We evolved our discussion towards what ultimately became a discussion between resource-based APIs and experience-based APIs.
  • The original OSFA API was very resource oriented with granular requests for specific data, delivering specific documents in specific formats.
  • The interaction model looked basically like this, with (in this example) the PS3 making many calls across the network to the OSFA API. The API ultimately called back to dependent services to get the corresponding data needed to satisfy the requests.
  • In this mode, there is a very clear divide between the Client Code and the Server Code. That divide is the network border.
  • And the responsibilities have the same distribution as well. The Client Code handles the rendering of the interface (as well as asking the server for data). The Server Code is responsible for gathering, formatting, and delivering the data to the UIs.
  • And ultimately, it works. The PS3 interface looks like this and was populated by this interaction model.
  • But we believe this is not the optimal way to handle it. In fact, assembling a UI through many resource-based API calls is akin to a pointillist painting. The picture looks great when fully assembled, but it is created by putting many small points together in the right way. A sketch of this chatty interaction model follows below.
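    A minimal Java sketch of the chatty, resource-based model, using the kinds of granular endpoints shown on slide 40; the host and the client class are hypothetical:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        class ChattyHomeScreenClient {
            private final HttpClient http = HttpClient.newHttpClient();
            private final String base = "http://api.example.com"; // hypothetical host

            private String get(String path) throws Exception {
                HttpRequest request = HttpRequest.newBuilder(URI.create(base + path)).build();
                return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
            }

            // Each element of the screen is a separate round trip across the network border.
            void loadHomeScreen(String userId) throws Exception {
                String queue = get("/users/" + userId + "/queues/instant");
                String recommendations = get("/users/" + userId + "/recommendations");
                String ratings = get("/users/" + userId + "/ratings/title");
                // ...plus more calls for box art, similar titles, and so on;
                // the client stitches the "points" together into the final picture.
            }
        }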
  • We have decided to pursue an experience-based approach instead. Rather than making many API requests to assemble the PS3 home screen, the PS3 will potentially make a single request to a custom, optimized endpoint.
  • In an experience-based interaction, the PS3 can potentially make a single request across the network border to a scripting layer (currently Groovy), in this example to provide the data for the PS3 home screen. The call goes to a very specific, custom endpoint for the PS3 or for a shared UI. The Groovy script then interprets what is needed for the PS3 home screen and triggers a series of calls to the Java API running in the same JVM as the Groovy scripts. The Java API is essentially a series of methods that individually know how to gather the corresponding data from the dependent services. The Java API then returns the data to the Groovy script, which then formats and delivers the very specific data back to the PS3.
  • In this model, the border between Client Code and Server Code is no longer the network border. It is now back on the server. The Groovy script is essentially a client adapter written by the client teams.
  • And the distribution of work changes as well. The client teams continue to handle UI rendering, but now are also responsible for the formatting and delivery of content. The API team, in terms of the data side of things, is responsible for the data gathering and hand-off to the client adapters. Of course, the API team does many other things, including resiliency, scaling, dependency interactions, etc. This model is essentially a platform for API development.
  • If resource-based APIs assemble data like pointillism, experience-based APIs assemble data like a photograph. The experience-based approach captures and delivers the full picture all at once, as sketched below.
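    A minimal Java sketch of the experience-based shape. Netflix's real client adapters are Groovy scripts uploaded by the UI teams; JavaApi and Ps3HomeScreenAdapter here are illustrative stand-ins:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // The methods that know how to gather data from the dependent services.
        interface JavaApi {
            List<String> recommendedRows(String userId);
        }

        // "Client code" written by the PS3 UI team, but running on the server
        // in the same JVM as the Java API.
        class Ps3HomeScreenAdapter {
            private final JavaApi api;

            Ps3HomeScreenAdapter(JavaApi api) { this.api = api; }

            // One request to the custom /ps3/homescreen endpoint fans out to many
            // in-process calls, then returns exactly the document the PS3 UI needs.
            Map<String, Object> handle(String userId) {
                List<String> rows = api.recommendedRows(userId);
                Map<String, Object> response = new HashMap<>();
                response.put("rows", rows.subList(0, Math.min(rows.size(), 10)));
                return response;
            }
        }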
  • The traditional model is to have systems administrators go into server rooms like this one to build out new servers, etc.
  • Rather than relying on data centers, we have moved everything to the cloud! This enables rapid scaling with relative ease. Adding new servers, in new locations, takes minutes. And this is critical when the service needs to grow from 1B requests a month to 2B requests a day in a relatively short period of time.
  • Instead of going into server rooms, we go into a web page like this one. Within minutes, we can spin up new servers to support growing demands.
  • Through autoscaling in the cloud, we can also dynamically grow our server farm in concert with the traffic that we receive.
  • Typically, server farms need to be built to handle peaks in traffic. That is, they need to persist at peak-load capacity even when traffic is well below peak.
  • Instead of buying new servers based on projected spikes in traffic and having systems administrators add them to the farm, the cloud can dynamically and automatically add and remove servers based on need.
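    A toy Java sketch of the decision that autoscaling automates. In practice this is handled by AWS Auto Scaling policies rather than hand-rolled code, and the capacity figures here are invented:

        class AutoscalerSketch {
            static final double TARGET_RPS_PER_INSTANCE = 1000.0; // illustrative capacity figure
            static final int MIN_INSTANCES = 2;

            int instances = 10;

            // Called periodically with the current observed load.
            void evaluate(double requestsPerSecond) {
                int desired = Math.max(MIN_INSTANCES,
                        (int) Math.ceil(requestsPerSecond / TARGET_RPS_PER_INSTANCE));
                if (desired != instances) {
                    System.out.printf("adjusting fleet: %d -> %d instances%n", instances, desired);
                    instances = desired; // in AWS, the scaling policy adds/removes the servers
                }
            }
        }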
  • Moreover, our more than 36 million global subscribers are spread across more than 50 countries and territories. Through the cloud, we are also able to put servers in locations where our customers are, all leveraging the AWS data centers.
  • And as new AWS regions surface, or our need to leverage them increases, we can relatively easily spin up instances in those regions.
  • As a general practice, Netflix focuses on getting code into production as quickly as possible to expose features to new audiences.
  • That said, we do spend a lot of time testing. We have just adopted some new techniques to help us learn more about what the new code will look like in production.
  • Two such examples are canary deployments and what we call red/black deployments.
  • The canary deployments are comparable to canaries in coal mines. We have many servers in production running the current codebase. We will then introduce a single (or perhaps a few) new server(s) into production running new code. Monitoring the canary servers will show what the new code will look like in production.
  • If the canary encounters problems, it will register in any number of ways.
  • If the canary shows errors, we pull the canary servers down, re-evaluate the new code, debug it, etc.
  • We will then repeat the process until the analysis of the canary servers looks good.
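    A simplified Java sketch of that canary evaluation, comparing the canary's error rate against the baseline fleet. The margin is illustrative, not Netflix's actual promotion criterion:

        class CanaryAnalysis {
            // Promote only if the canary's error rate is no worse than the
            // baseline fleet's, within a small tolerance.
            static boolean looksHealthy(long canaryErrors, long canaryRequests,
                                        long baselineErrors, long baselineRequests) {
                double canaryRate = canaryRequests == 0
                        ? 0.0 : (double) canaryErrors / canaryRequests;
                double baselineRate = baselineRequests == 0
                        ? 0.0 : (double) baselineErrors / baselineRequests;
                return canaryRate <= baselineRate * 1.1 + 0.001; // illustrative margin
            }
        }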
  • If the new code looks good in the canary, we can then use a technique that we call red/black deployments to launch it. We start with red, where the production code is running. We then fire up a new set of servers (black), equal in count to red, running the new code.
  • Then switch the pointer to have external requests point to the black servers.
  • If a problem is encountered on the black servers, it is easy to roll back quickly by switching the pointer back to red. We will then re-evaluate the new code, debug it, etc.
  • Once we have debugged the code, we will put another canary up to evaluate the new changes in production.
  • If the new code looks good in the canary, we can then bring up another set of servers with the new code.
  • Then we will switch production traffic to the new code by pointing external requests at the black servers. If everything still looks good, we disable the red servers and the new code becomes the new red. A minimal sketch of this pointer switch follows.
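    A minimal Java sketch of the red/black pointer: traffic follows a single reference, so cutover and rollback are each one operation. Cluster names are illustrative:

        import java.util.concurrent.atomic.AtomicReference;

        class RedBlackRouter {
            // The single pointer that external requests follow.
            private final AtomicReference<String> liveCluster = new AtomicReference<>("red");

            void cutOverTo(String cluster) { liveCluster.set(cluster); } // e.g., "black"
            void rollBack() { liveCluster.set("red"); }                  // instant rollback
            String routeRequest() { return liveCluster.get(); }          // cluster serving traffic
        }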
  • At Netflix, we have a range of engineering teams who focus on specific problem sets. Some teams focus on creating rich presentation layers on various devices. Others focus on metadata and algorithms. For the streaming application to work, the metadata from the services needs to make it to the devices. That is where the API comes in. The API essentially acts as a broker, moving the metadata from inside the Netflix system to the devices in customers’ homes.
  • Given the position of the API within the overall system, the API depends on a large number of underlying systems (only some of which are represented here). Moreover, a large number of devices depend on the API (only some of which are represented here). Sometimes, one of these underlying systems experiences an outage.
  • In the past, such an outage could result in an outage in the API.
  • And if that outage cascades to the API, it is likely to have some kind of substantive impact on the devices. The challenge for the API team is to be resilient against dependency outages, to ultimately insulate Netflix customers from low level system problems.
  • To protect our customers from this problem, we created Hystrix (which is now available on our open source site). Hystrix is a fault tolerance and resiliency wrapper that isolates dependencies and allows us to treat them differently as problems arise.
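    A minimal command in Java using Hystrix's public HystrixCommand API: the dependency call lives in run(), and getFallback() supplies the degraded response when the call fails, times out, or the circuit is open. The movie-metadata dependency and JSON payloads here are illustrative:

        import com.netflix.hystrix.HystrixCommand;
        import com.netflix.hystrix.HystrixCommandGroupKey;

        public class MovieMetadataCommand extends HystrixCommand<String> {
            private final String titleId;

            public MovieMetadataCommand(String titleId) {
                super(HystrixCommandGroupKey.Factory.asKey("MovieMetadata"));
                this.titleId = titleId;
            }

            @Override
            protected String run() {
                // in production this would call the movie metadata dependency;
                // simulated here to keep the sketch self-contained
                return "{\"id\":\"" + titleId + "\",\"title\":\"Star\"}";
            }

            @Override
            protected String getFallback() {
                // degraded-but-usable data keeps the customer experience alive
                return "{\"id\":\"" + titleId + "\",\"title\":null}";
            }
        }

        // usage: String metadata = new MovieMetadataCommand("60021896").execute();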
  • This is a view of the dashboard that shows some of our dependencies. This dashboard, as well as Turbine, is also available on our open source site. The dashboard is used as a visualization of the health and traffic of each dependency.
  • This is a view of a single circuit.
  • This circle represents the call volume and health of the dependency over the last 10 seconds. This circle is meant to be a visual indicator for health. The circle is green for healthy, yellow for borderline, and red for unhealthy. Moreover, the size of the circle represents the call volumes, where bigger circles mean more traffic.
  • The blue line represents the traffic trends over the last two minutes for this dependency.
  • The green number shows the number of successful calls to this dependency over the last two minutes.
  • The yellow number shows the number of latent calls into the dependency. These calls ultimately return successful responses, but slower than expected.
  • The blue number shows the number of calls that were handled by the short-circuited fallback mechanisms. That is, if the circuit gets tripped, the blue number will start to go up.
  • The orange number shows the number of calls that have timed out, resulting in fallback responses.
  • The purple number shows the number of calls that fail due to queuing issues, resulting in fallback responses.
  • The red number shows the number of exceptions, resulting in fallback responses.
  • The error rate is calculated as the total number of error and fallback responses divided by the total number of calls handled.
  • If the error rate exceeds a certain threshold, the circuit is automatically opened and calls are short-circuited to the fallback. When the error rate returns below that threshold, the circuit is closed again.
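    A simplified Java sketch of that bookkeeping, mirroring the formula on slide 82; Hystrix's real implementation uses rolling time windows and richer health checks:

        class CircuitStats {
            // latent calls ultimately succeed, so they count toward successes here
            long success, shortCircuited, timeout, rejected, exception;

            // (errors and fallbacks) / (all calls handled)
            double errorRate() {
                long errors = shortCircuited + timeout + rejected + exception;
                long total = success + errors;
                return total == 0 ? 0.0 : (double) errors / total;
            }

            // open the circuit above the threshold; close it once the rate recovers
            boolean circuitOpen(double threshold) {
                return errorRate() > threshold;
            }
        }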
  • The dashboard also shows host and cluster information for the dependency…
  • As well as information about our SLAs.
  • So, going back to the engineering diagram…
  • If that same service fails today…
  • We simply disconnect from that service.
  • And replace it with an appropriate fallback. In some cases, the fallback is a degraded set of data; in other cases, it could be a fast-fail 5xx response code. In all cases, our goal is to ensure that an issue in one service does not result in queued-up requests that create further latencies and ultimately bring down the entire system.
  • Ultimately, this allows us to keep our customers happy, even if the experience may be slightly degraded. It is important to note that different dependency libraries have different fallback scenarios. And some are more resilient than others. But the overall sentiment here is accurate at a high level.
  • Underneath all of these technical solutions are exceptional engineers operating within an exceptional culture of Freedom and Responsibility. To learn more about how Netflix engineering works, check out our culture slides at http://www.netflix.com/jobs

Netflix API Presentation Transcript

  • 1. Daniel Jacobson, Director of Engineering, Netflix API. djacobson@netflix.com | @daniel_jacobson | http://www.linkedin.com/in/danieljacobson | http://www.slideshare.net/danieljacobson
  • 2. There are comments on each slide in the Notes field below providing the full context of this presentation
  • 3. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Deployment and Testing; Resiliency
  • 4. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Development and Testing; Resiliency
  • 5. Build the Best Global Streaming Product
  • 6. More than 36 Million Subscribers. More than 50 Countries & Territories
  • 7. Netflix Accounts for 33% of Peak Internet Traffic in North America. Netflix subscribers are watching more than 1 billion hours a month
  • 8. Coming to Netflix on May 26th
  • 9. 1,000+ Different Device Types
  • 10. Build the Best Global Streaming Product. Two aspects of the Streaming Product: Discovery; Streaming
  • 11. Discovery
  • 12. Discovery
  • 13. Streaming
  • 14. Netflix API Powers Discovery
  • 15. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Development and Testing; Resiliency
  • 16. 2007
  • 17. Netflix REST API: One-Size-Fits-All Solution
  • 18. Netflix API Requests by Audience, at launch in 2008 (chart: External Developers)
  • 19. Netflix API Requests by Audience, from 2011 (chart: External Developers)
  • 20. 1000+ Device Types
  • 21. Dozens of Dependencies: Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine
  • 22. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 23. Change in Audience = Change in Design
  • 24. Redesign, Areas of Focus: Chattiness; Variability Across Devices; Innovation Rates
  • 25. Chattiness
  • 26. Growth of Netflix API Requests (chart, requests in billions: 0.6 in Jan-10, 20.7 in Jan-11, 41.7 in Jan-12). 70x growth in two years!
  • 27. Growth of the Netflix API: 2 billion requests per day, exploding out to 14 billion dependency calls per day
  • 28.
    <catalog_titles>
      <number_of_results>1140</number_of_results>
      <start_index>0</start_index>
      <results_per_page>10</results_per_page>
      <catalog_title>
        <id>http://api.netflix.com/catalog/titles/movies/60021896</id>
        <title short="Star" regular="Star"></title>
        <box_art small="http://alien2.netflix.com/us/boxshots/tiny/60021896.jpg" medium="http://alien2.netflix.com/us/boxshots/small/60021896.jpg" large="http://alien2.netflix.com/us/boxshots/large/60021896.jpg"></box_art>
        <link href="http://api.netflix.com/catalog/titles/movies/60021896/synopsis" rel="http://schemas.netflix.com/catalog/titles/synopsis" title="synopsis"></link>
        <release_year>2001</release_year>
        <category scheme="http://api.netflix.com/catalog/titles/mpaa_ratings" label="NR"></category>
        <category scheme="http://api.netflix.com/categories/genres" label="Foreign"></category>
        <link href="http://api.netflix.com/catalog/titles/movies/60021896/cast" rel="http://schemas.netflix.com/catalog/people.cast" title="cast"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/60021896/screen_formats" rel="http://schemas.netflix.com/catalog/titles/screen_formats" title="screen formats"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/60021896/languages_and_audio" rel="http://schemas.netflix.com/catalog/titles/languages_and_audio" title="languages and audio"></link>
        <average_rating>1.9</average_rating>
        <link href="http://api.netflix.com/catalog/titles/movies/60021896/similars" rel="http://schemas.netflix.com/catalog/titles.similars" title="similars"></link>
        <link href="http://www.netflix.com/Movie/Star/60021896" rel="alternate" title="webpage"></link>
      </catalog_title>
      <catalog_title>
        <id>http://api.netflix.com/catalog/titles/movies/17985448</id>
        <title short="Lone Star" regular="Lone Star"></title>
        <box_art small="http://alien2.netflix.com/us/boxshots/tiny/17985448.jpg" medium="http://alien2.netflix.com/us/boxshots/small/17985448.jpg" large=""></box_art>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/synopsis" rel="http://schemas.netflix.com/catalog/titles/synopsis" title="synopsis"></link>
        <release_year>1996</release_year>
        <category scheme="http://api.netflix.com/catalog/titles/mpaa_ratings" label="R"></category>
        <category scheme="http://api.netflix.com/categories/genres" label="Drama"></category>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/awards" rel="http://schemas.netflix.com/catalog/titles/awards" title="awards"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/format_availability" rel="http://schemas.netflix.com/catalog/titles/format_availability" title="formats"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/screen_formats" rel="http://schemas.netflix.com/catalog/titles/screen_formats" title="screen formats"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/languages_and_audio" rel="http://schemas.netflix.com/catalog/titles/languages_and_audio" title="languages and audio"></link>
        <average_rating>3.7</average_rating>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/previews" rel="http://schemas.netflix.com/catalog/titles/previews" title="previews"></link>
        <link href="http://api.netflix.com/catalog/titles/movies/17985448/similars" rel="http://schemas.netflix.com/catalog/titles.similars" title="similars"></link>
        <link href="http://www.netflix.com/Movie/Lone_Star/17985448" rel="alternate" title="webpage"></link>
      </catalog_title>
    </catalog_titles>
  • 29. {"catalog_title":{"id":"http://api.netflix.com/catalog/titles/movies/60034967","title":{"title_short":"Rosencrantz and Guildenstern Are Dead","regular":"Rosencrantz and Guildenstern Are Dead"},"maturity_level":60,"release_year":"1990","average_rating":3.7,"box_art":{"284pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/ghd/60034967.jpg","110pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/large/60034967.jpg","38pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/tiny/60034967.jpg","64pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/small/60034967.jpg","150pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/150/60034967.jpg","88pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/88/60034967.jpg","124pix_w":"http://cdn-7.nflximg.com/en_US/boxshots/124/60034967.jpg"},"language":"en","web_page":"http://www.netflix.com/Movie/Rosencrantz_and_Guildenstern_Are_Dead/60034967","tiny_url":"http://movi.es/ApUP9"},"meta":{"expand":["@directors","@bonus_materials","@cast","@awards","@short_synopsis","@synopsis","@box_art","@screen_formats","@"links":{"id":"http://api.netflix.com/catalog/titles/movies/60034967","languages_and_audio":"http://api.netflix.com/catalog/titles/movies/60034967/languages_and_audio","title":"http://api.netflix.com/catalog/titles/movies/60034967/title","screen_formats":"http://api.netflix.com/catalog/titles/movies/60034967/screen_formats","cast":"http://api.netflix.com/catalog/titles/movies/60034967/cast","awards":"http://api.netflix.com/catalog/titles/movies/60034967/awards","short_synopsis":"http://api.netflix.com/catalog/titles/movies/60034967/short_synopsis","box_art":"http://api.netflix.com/catalog/titles/movies/60034967/box_art","synopsis":"http://api.netflix.com/catalog/titles/movies/60034967/synopsis","directors":"http://api.netflix.com/catalog/titles/movies/60034967/directors","similars":"http://api.netflix.com/catalog/titles/movies/60034967/similars","format_availability":"http://api.netflix.com/catalog/titles/movies/60034967/format_availability"}}}
  • 30. Improve Efficiency of API Requests. Could it have been 300 million requests per day? Or less? (Assuming everything else remained the same)
  • 31. Variability Across Devices
  • 32. Screen Real Estate
  • 33. Controller
  • 34. Technical Capabilities
  • 35. Innovation Rates
  • 36. One-Size-Fits-All API: Request, Request, Request
  • 37. Our Solution…
  • 38. Move away from the One-Size-Fits-All API model
  • 39. Resource-Based API vs. Experience-Based API
  • 40. Resource-Based Requests: /users/<id>/ratings/title; /users/<id>/queues; /users/<id>/queues/instant; /users/<id>/recommendations; /catalog/titles/movie; /catalog/titles/series; /catalog/people
  • 41. OSFA API diagram: RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS. Network Border / Network Border
  • 42. OSFA API diagram, split at the Network Border into SERVER CODE and CLIENT CODE (RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS)
  • 43. OSFA API diagram: DATA GATHERING, FORMATTING, AND DELIVERY on the server; USER INTERFACE RENDERING on the client (RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS)
  • 44. Experience-Based Requests: /ps3/homescreen
  • 45. JAVA API with Groovy Layer: RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS. Network Border / Network Border
  • 46. JAVA API diagram: SERVER CODE, CLIENT CODE, and CLIENT ADAPTER CODE (WRITTEN BY CLIENT TEAMS, DYNAMICALLY UPLOADED TO SERVER); RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS. Network Border / Network Border
  • 47. JAVA API diagram: DATA GATHERING; DATA FORMATTING AND DELIVERY; USER INTERFACE RENDERING; RECOMMENDATIONS, MOVIE DATA, SIMILAR MOVIES, AUTH, MEMBER DATA, A/B TESTS, START-UP, RATINGS. Network Border / Network Border
  • 48. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Development and Testing; Resiliency
  • 49. AWS Cloud
  • 50. Autoscaling
  • 51. Autoscaling
  • 52. Autoscaling
  • 53. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Development and Testing; Resiliency
  • 54. Development / Testing Philosophy: Act fast, react fast
  • 55. That Doesn’t Mean We Don’t Test: Unit tests; Functional tests; Regression scripts; Continuous integration; Capacity planning; Load / Performance tests
  • 56. Cloud-Based Deployment Techniques
  • 57. Current Code In Production. API Requests from the Internet
  • 58. Current Code In Production. API Requests from the Internet. Single Canary Instance To Test New Code with Production Traffic (around 1% or less of traffic). Error!
  • 59. Current Code In Production. API Requests from the Internet
  • 60. Current Code In Production. API Requests from the Internet. Single Canary Instance To Test New Code with Production Traffic (around 1% or less of traffic). Perfect!
  • 61. Current Code In Production. New Code Getting Prepared for Production. API Requests from the Internet
  • 62. Old Code Prepared For Rollback. Current Code In Production. API Requests from the Internet. Error!
  • 63. Old Code Rolled Back into Production. New Code Out of Production. API Requests from the Internet
  • 64. Current Code In Production. API Requests from the Internet. Perfect!
  • 65. Current Code In Production. New Code Getting Prepared for Production. API Requests from the Internet
  • 66. Old Code Prepared For Rollback. Current Code In Production. API Requests from the Internet
  • 67. Current Code In Production. API Requests from the Internet
  • 68. Techniques for Scaling the Netflix API. Agenda: Intro to Netflix; History and Future of the Netflix API; The Cloud; Development and Testing; Resiliency
  • 69. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 70. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 71. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 72. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 73. Circuit Breaker Dashboard & Turbine (also open sourced on GitHub)
  • 74. Call Volume and Health / Last 10 Seconds
  • 75. Call Volume / Last 2 Minutes
  • 76. Successful Requests
  • 77. Successful, But Slower Than Expected
  • 78. Short-Circuited Requests, Delivering Fallbacks
  • 79. Timeouts, Delivering Fallbacks
  • 80. Thread Pool & Task Queue Full, Delivering Fallbacks
  • 81. Exceptions, Delivering Fallbacks
  • 82. Error Rate: (# + # + # + #) / (# + # + # + # + #) = Error Rate
  • 83. Status of Fallback Circuit
  • 84. Requests per Second, Over Last 10 Seconds
  • 85. SLA Information
  • 86. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 87. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 88. Diagram: the API and its dependencies (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine)
  • 89. Diagram: the API and its dependencies, now with a Fallback (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine, Fallback)
  • 90. Diagram: the API and its dependencies, now with a Fallback (Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, A/B Test Engine, Fallback)
  • 91. One Last Point…
  • 92. http://www.netflix.com/jobs
  • 93. Feel free to contact me at: djacobson@netflix.com | @daniel_jacobson | http://www.linkedin.com/in/danieljacobson | http://www.slideshare.net/danieljacobson