APIs are key to making every business a digital business. Businesses need APIs to connect with partners and customers, at any time, on any device, and to participate in digital ecosystems. To be digital, a scalable, flexible API infrastructure is required.
Watch this Demo of Apigee Edge to learn how to:
- Easily configure and manage new APIs and enforce security with minimal impact to backend services
- Create, manage and monetize API products
- Extend API Services to increase flexibility and tailor to business requirements with JavaScript, Java, Python, and Node.js
- Provide developers easy, yet secure access to explore, test, and deploy APIs
- Use end-to-end visibility across the digital value chain to monitor, measure, and manage success, with unified operational, developer, app performance, and business metrics
Apigee Edge enables digital business acceleration with a unified and complete platform, purpose-built for the digital economy. Edge simplifies managing the entire digital value chain with API Services, Developer Services, and Analytics Services.
Watch Video: https://youtu.be/O_qiZoPswWU
Download Podcast: http://bit.ly/18YbGeS
A quick overview of API Design Workflow, describing my views on the waterfall API design approach, why we've built Apiary the way we have, and assorted notes from the API industry.
AWS Summit - Trends in Advanced Monitoring for AWS Environments (Andreas Grabner)
Why you have to rethink your monitoring strategy when moving to or building apps for new-stack, cloud-based environments:
#1: Why "the old way" of monitoring doesn't work any longer!
#2: How the Cloud and New Stack has transformed Dynatrace!
#3: How Dynatrace Redefined Monitoring for Cloud Applications
Put down your buzzword bingo cards. Martin Buhr, Creator and CEO of Tyk API Management Platform, is here to tell you why boring really is best when it comes to your API Strategy.
In a tech world that’s brimming with modern technologies (each pushed as the next best thing to watching a couple argue in public), Martin makes his case for simple over sensational when it comes to managing your APIs.
In his 20-minute polemic – ahem, we mean talk – he'll make you embrace the mundane, savour the humdrum, and see beauty in the blah.
With a tech talk that promises to throw a little history, pop culture, and, most likely, philosophy into the day’s API discussions, it will be nothing if not entertaining. So here’s to boring, but not being bored.
As government digital strategy becomes more and more pertinent, innovation is the name of the game for the success of today's agency. But how do you continue to innovate despite the constraints of reduced IT budgets? How do you identify and overcome inefficiencies in architectures that increase costs and constrain growth?
In this webcast, Apigee's Brian Pagano and John Rethans discuss how to cut your agency's IT costs.
Pain Points In API Development? They’re Everywhere (Nordic APIs)
There’s an inherent tension for organizations doing API development: how to keep both your API developers and your infrastructure happy at the same time. Decoupling front-end and back-end development allows parallel development and helps keep your front-end, middle-end, and back-end efforts working asynchronously. This speeds progress, but requires far more – and far better – collaboration to be successful. Even an independent developer working with APIs requires good collaboration tools.
In this talk, Abhinav Asthana will provide tips on how to improve in API development using collaboration tools like executable API descriptions, API mock servers, and documentation. He will include specific examples of how companies (such as VMware, Coursera, and AMC Theatres) have used collaboration to attain more agile development, to onboard developers, and to ensure input from all participants/stakeholders.
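One of the collaboration tools mentioned above is an API mock server: a stand-in that returns canned responses so front-end and back-end teams can work in parallel. The sketch below is a minimal illustration using only Python's standard library; the paths and payloads are hypothetical, not from any real API description.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by path. In practice these could be generated
# from an executable API description (e.g. an OpenAPI document).
MOCK_RESPONSES = {
    "/users/1": {"id": 1, "name": "Ada"},
    "/health": {"status": "ok"},
}

class MockAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = MOCK_RESPONSES.get(self.path)
        self.send_response(200 if body is not None else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, *args):
        pass  # silence per-request logging

def start_mock_server(port=0):
    """Start the mock server on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), MockAPIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A client under development can point at this server instead of the real back end until the real implementation is ready.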
apidays LIVE Paris 2021 - Automating API Documentation by Ajinkya Marudwar, G... (apidays)
apidays LIVE Paris 2021 - APIs and the Future of Software
December 7, 8 & 9, 2021
Automating API Documentation
Ajinkya Marudwar, Sr. Technical Writer at GS Lab
Watch the live demo of Apigee Edge to learn how to:
- Easily configure and manage new APIs and enforce security with minimal impact to backend services
- Create, manage and monetize API products
- Extend API Services to increase flexibility and tailor to business requirements with JavaScript, Java, Python, and Node.js
- Provide developers easy, yet secure access to explore, test, and deploy APIs
- Use end-to-end visibility across the digital value chain to monitor, measure, and manage success, with unified operational, developer, app performance, and business metrics
Hear the podcast version here: http://bit.ly/1zzXy2B
In this webcast, John Calagaz, CTO of CentraLite, and Abhi Rele discuss how an API-centric approach helps developers realize the promise of the connected IoT world, covering:
- challenges that prevent devices from interoperating
- using REST APIs to program multiple devices and access sensor and actuator data
- how developers can use the data from devices to demonstrate real value
Listen to the podcast version here: http://bit.ly/1GCWDAs
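The bullet about using REST APIs to program devices can be made concrete with a small client sketch. The hub URL, endpoints, and payload fields below are hypothetical, purely to illustrate reading sensor data and driving an actuator over HTTP:

```python
import json
import urllib.request

BASE_URL = "http://hub.example.local/api/v1"  # hypothetical device-hub endpoint

def read_sensor(device_id):
    """Fetch the latest reading from a device's sensor over a REST API."""
    with urllib.request.urlopen(f"{BASE_URL}/devices/{device_id}/sensor") as resp:
        return json.loads(resp.read())

def set_actuator(device_id, state):
    """Send a command to a device actuator via an HTTP PUT."""
    req = urllib.request.Request(
        f"{BASE_URL}/devices/{device_id}/actuator",
        data=json.dumps({"state": state}).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The same two verbs generalize across device types, which is the interoperability point the webcast makes: one uniform interface instead of one driver per vendor.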
Humana digitally transforming health and well-being with Pivotal cloud foundr... (Dynatrace)
Humana has been on the leading edge of technology in the healthcare industry for some time now, particularly with the creation of their Digital Experience Center, which features Pivotal Cloud Foundry (PCF), and with the maturation of their IT Command Center, which features the breadth of Dynatrace’s product line. In this session we will learn a number of things, starting with why Humana chose PCF as a key platform to power their digital transformation from legacy software development practices towards modern software engineering. The acceleration of this culture change and new methodology exposed gaps in existing monitoring practices and tools, and we'll learn how Dynatrace, via the Full-stack Add-on for Pivotal Cloud Foundry, was able to fill those gaps. With unique visibility into the entire environment, from the user experience to the underlying VMs and everything in between, Dynatrace provided an intuitive view that simplified problem determination and transformed how Humana manages service availability for its customers. Finally, we'll learn about Humana's strategy for the future, with public cloud deployment, continuous monitoring with DevOps, and monitoring tool consolidation on the horizon.
DOES16 San Francisco - Marc Ng - SAP’s DevOps Journey: From Building an App t... (Gene Kim)
SAP’s DevOps Journey: From Building an App to Building a Cloud
Marc Ng, Cloud Infrastructure Engineering & Automation, SAP
SAP has been using a DevOps & Continuous Delivery approach for building its web and mobile apps for several years, and is now building and running a global cloud at the scale needed to support the digital transformation needs of its customers. This talk recaps the story of how SAP originally adopted DevOps practices before moving on to describe how the Cloud Infrastructure Services team is building and operating its 3rd generation cloud automation system using microservices, containers and open-source software.
DevOps Enterprise Summit San Francisco 2016
Analytics are key to unlocking the potential of the data in your digital ecosystem. Learn how Apigee Analytics provides end-to-end visibility into your business with the ability to analyze 360 degrees of information from API programs, external online sources, and your internal systems. Discover the elements that make this 360 degree visibility happen, including data from the APIs, data that adds context to the API data, and analytics that model and predict both business and operational metrics.
APIdays Paris 2018 - Make a building smart with API and serverless microservi... (apidays)
Make a building smart with API and serverless microservices
Sebastien Bergougnoux, CEO, Devoteam NexDigital
Apply to be a speaker here - https://apidays.typeform.com/to/J1snsg
In a fragmented mobile landscape, developing mobile applications can be challenging, especially when creating enterprise mobile applications. Targeting the wrong audience, a lack of security, and poor integration can introduce surprises and pitfalls along your enterprise mobile journey. During this session, we will cover those enterprise mobility challenges by explaining and exploring MADP and its benefits, such as delivering fully native apps with 60-90% reuse of code across device platforms, decreasing test time by 90% and app project costs by 40%, and the possibility of building fully reusable components in JavaScript.
Microservices in action: How to actually build them (3scale)
Andrzej from the 3scale team gave this talk during the API Meetup Barcelona about how to practically build microservices using AWS Lambda, Amazon API Gateway, the JAWS framework and 3scale API Management.
Here is more info about the meetup:
http://www.meetup.com/API-Meetup-Barcelona/events/226165254/
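A microservice behind Amazon API Gateway often boils down to a single Lambda handler. The sketch below shows the general shape of such a handler under API Gateway's proxy integration, where the HTTP request arrives as an event dict and a statusCode/headers/body dict is returned; the greeting logic is purely illustrative:

```python
import json

def handler(event, context=None):
    """A minimal AWS Lambda-style handler for API Gateway proxy integration.

    API Gateway passes the HTTP request as `event` (query parameters under
    "queryStringParameters") and expects a dict with statusCode, headers,
    and a string body in return.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Frameworks like JAWS (later Serverless Framework) mostly automate the packaging and wiring around handlers of exactly this shape, while a gateway layer such as 3scale adds access control and rate limiting in front.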
A talk about the Netflix API and how it serves as the front door for Netflix device UIs. Topics include API design, resiliency patterns, scalability, and enabling fast dev/deploy cycles.
Maintaining the Netflix Front Door - Presentation at Intuit Meetup (Daniel Jacobson)
This presentation goes into detail on the key principles behind the Netflix API, including design, resiliency, scaling, and deployment. Among other things, I discuss our migration from our REST API to what we call our Experience-Based API design. The presentation also covers several of our open source efforts, such as Zuul, Scryer, Hystrix, RxJava, and the Simian Army.
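The resiliency pattern at the heart of Hystrix is the circuit breaker: after repeated failures, stop calling the ailing dependency and serve a fallback until a cool-down passes. The toy sketch below illustrates that idea in Python; it is not Netflix's implementation, and the thresholds are arbitrary:

```python
import time

class CircuitBreaker:
    """A toy circuit breaker in the spirit of Hystrix (illustrative only).

    After `max_failures` consecutive failures the circuit opens and calls
    fail fast to the fallback until `reset_timeout` seconds have passed,
    at which point one trial call is allowed through (half-open state).
    """
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()          # circuit open: fail fast
            self.opened_at = None          # half-open: allow a trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0                  # success closes the circuit
        return result
```

Failing fast with a degraded-but-usable fallback is what keeps one slow backend dependency from cascading into a full outage at the API tier.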
Are your APIs becoming too complicated and ad hoc? Feeling the need to set up policies for your API? This presentation will give you strategy options for designing and developing your APIs.
Lessons learned on the Azure API Stewardship Journey (apidays)
apidays LIVE Singapore 2022: Digitising at scale with APIs
April 20 & 21, 2022
Lessons learned on the Azure API Stewardship Journey
Adrian Hall, Principal Product Manager at Microsoft
------------
Check out our conferences at https://www.apidays.global/
Do you want to sponsor or talk at one of our conferences?
https://apidays.typeform.com/to/ILJeAaV8
Learn more on APIscene, the global media made by the community for the community:
https://www.apiscene.io
Explore the API ecosystem with the API Landscape:
https://apilandscape.apiscene.io/
Deep dive into the API industry with our reports:
https://www.apidays.global/industry-reports/
Subscribe to our global newsletter:
https://apidays.typeform.com/to/i1MPEW
How can companies like yours handle Salesforce data through high-performance access or blended reporting? Join us as we describe three ways in which Salesforce data integration can help you achieve lightning-fast business intelligence compatible with your favorite tools.
apidays LIVE New York 2021 - Service API design validation by Uchit Vyas, KPMG (apidays)
apidays LIVE New York 2021 - API-driven Regulations for Finance, Insurance, and Healthcare
July 28 & 29, 2021
Service API design validation
Uchit Vyas, Associate Director at KPMG
apidays LIVE Paris 2021 - Lessons from the API Stewardship Journey in Azure b... (apidays)
apidays LIVE Paris 2021 - APIs and the Future of Software
December 7, 8 & 9, 2021
Lessons from the API Stewardship Journey in Azure
Ryan Sweet, Principal Architect at Microsoft
This deck brings together ideas from numerous visits to clients around the world. Here we show the three most common design patterns and explain the pros and cons of each.
One of the greatest challenges to developing an API is ensuring that your API lasts. After all, you don’t want to have to release and manage multiple versions of your API just because you weren’t expecting users to use it a certain way, or because you didn’t anticipate far enough down the roadmap. In this session, we’ll talk about the challenge of API Longevity, as well as ways to increase your API lifecycle including having a proper mindset, careful design, agile user experience and prototyping, best design practices including hypermedia, and the challenge of maintaining persistence.
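One of the design practices the session names is hypermedia: the server advertises valid next actions as links, so clients follow links rather than hard-coding URL patterns, and the API can evolve without breaking them. The sketch below builds such a representation; the resource shape, states, and URLs are hypothetical:

```python
def order_representation(order_id, status):
    """Build a hypermedia (HATEOAS-style) representation of an order.

    Only the transitions that are currently valid are advertised, so a
    client discovers what it may do next instead of guessing URLs.
    Illustrative only: not modeled on any particular real API.
    """
    links = {"self": {"href": f"/orders/{order_id}"}}
    if status == "pending":
        links["cancel"] = {"href": f"/orders/{order_id}/cancel", "method": "POST"}
        links["pay"] = {"href": f"/orders/{order_id}/payment", "method": "POST"}
    elif status == "paid":
        links["refund"] = {"href": f"/orders/{order_id}/refund", "method": "POST"}
    return {"id": order_id, "status": status, "_links": links}
```

Because clients key off link names ("pay", "refund") rather than URL templates, the server can later move `/orders/{id}/payment` elsewhere without a breaking version bump, which is exactly the longevity argument.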
APIs used to be a technical implementation detail reserved for developers and architects. In the Web age, APIs make more business sense than ever before. This presentation gives a ringside view of How to Craft Business Strategy around APIs.
apidays Paris 2022 - Sustainable API Green Score, Yannick Tremblais (Groupe R...) (apidays)
apidays Paris 2022 - APIs the next 10 years: Software, Society, Sovereignty, Sustainability
December 14, 15 & 16, 2022
Sustainable API Green Score
Yannick Tremblais, IT Innovation Manager at Groupe Rocher & Julien Brun, Head of APIs Center of Excellence at L'Oréal
As enterprises embrace APIs, some very specific Enterprise API Adoption patterns and best practices have started emerging. In this session, Laura Heritage, Principal Solutions Architect at SOA Software, will talk about the most common enterprise API patterns and will discuss how enterprises can successfully launch an API program.
The main focus of this talk is to communicate key concepts of designing and implementing APIs based on enterprise-grade API standards and guidelines. We will handcraft a few API recipes (i.e., implementation designs) with real-life examples, mixed with a live coding session. While working on each recipe, we will delve into the rationale behind design decisions and best practices. We believe these concepts will help a developer build a comprehensive API solution from scratch.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides from Rik Marselis and me at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud or on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
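Link prediction over knowledge graphs can be made concrete with a tiny scoring sketch. This uses the well-known TransE idea (relation as translation: a triple (h, r, t) is plausible when h + r ≈ t); the embeddings below are hand-picked toy vectors, not trained ones, and the sketch is not from the talk itself:

```python
import math

def transe_score(head, relation, tail, embeddings):
    """TransE-style plausibility score for a knowledge-graph triple.

    Score is the negative Euclidean distance ||h + r - t||, so higher
    means more plausible.
    """
    h, r, t = embeddings[head], embeddings[relation], embeddings[tail]
    dist = math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))
    return -dist

def predict_tail(head, relation, candidates, embeddings):
    """Rank candidate tail entities for the query (head, relation, ?)."""
    return max(candidates, key=lambda c: transe_score(head, relation, c, embeddings))
```

The semantics point of the talk maps onto this cleanly: the inference `h + r ≈ t` is only *predictable* if the relation embedding actually carries a consistent meaning across entities, rather than being an arbitrary symbolic label.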
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
9. Target Audience Influence
• Team Identity
• Staffing Decisions
• System Architecture
• SLAs
• Development Velocity
• Security Needs
11. Netflix API : Key Responsibilities (2008)
• Broker data between internal services and public developers
• Grow community of public developers
• Optimize design for reusability
14. Private API vs. Public API
Public API traffic is < 0.3% of total API traffic *
* 11 years' worth of public API requests = one day of private API requests
15. Netflix API : Key Responsibilities (Today)
• Broker data between services and devices
• System resiliency
• Scaling the system
• High velocity development
• Insights
16. The consumers of the API are now Netflix subscribers. We are now responsible for ensuring subscribers can stream.
20. Primary Responsibilities of APIs
• Data Gathering – retrieving the requested data from one or many local or remote data sources
• Data Formatting – preparing a structured payload for the requesting agent
• Data Delivery – delivering the structured payload to the requesting agent
25. Why do most API providers provide everything?
• Many APIs have a large set of unknown and external developers
• Generic API design tends to be easier for teams closer to the source
• Centralized API functions make them easier to support
27. Separation of Concerns
To be a better provider, the API should address the separation of concerns of the three core functions:

API Consumer:
• Data Gathering – doesn't care how data is gathered, as long as it is gathered
• Data Formatting – each consumer cares a lot about the format for that specific use
• Data Delivery – each consumer cares a lot about how the payload is delivered

API Provider:
• Data Gathering – cares a lot about how the data is gathered
• Data Formatting – only cares about the format to the extent it is easy to support
• Data Delivery – only cares that the delivery method is easy to support
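The separation of concerns among the three core functions can be sketched as three swappable stages in code. This is an illustration of the idea, not Netflix's implementation; the data source, formats, and delivery channels are hypothetical:

```python
import json

def gather(source):
    """Data Gathering: fetch raw data; consumers don't care how."""
    return source()  # could be a DB query, a remote call, a cache lookup...

def format_payload(data, fmt):
    """Data Formatting: shape the payload per consumer's needs."""
    if fmt == "json":
        return json.dumps(data)
    if fmt == "csv":
        return "\n".join(",".join(str(v) for v in row.values()) for row in data)
    raise ValueError(f"unsupported format: {fmt}")

def deliver(payload, channel):
    """Data Delivery: hand the structured payload to the requesting agent."""
    return {"channel": channel, "body": payload}  # stand-in for HTTP, push, etc.

def handle_request(source, fmt="json", channel="http"):
    """Compose the three stages; each can vary independently of the others."""
    return deliver(format_payload(gather(source), fmt), channel)
```

Because each stage is independent, the provider can own gathering (its real concern) while letting each consumer pick the format and delivery that fit its device, which is the asymmetry the table above describes.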
64. [Diagram] Personalization via a compiled library: the personalization team builds, tests, and deploys its service, then releases a library (Pers. Lib); the API team integrates the library and must build, test, and redeploy the API service before the UI script can access the data. Iterations in hours or days.
65. [Diagram] Personalization via dynamic publishing: the personalization team builds, tests, and deploys its service, releases the library, and publishes to the API directly, so the UI script can access the data without a full API release cycle. Iterations in minutes?
84. Strategy:
1. Know Your Audience
2. Separation of Concerns
3. One Size Doesn't Fit All
4. Be Pragmatic, Not Dogmatic
5. Embrace Change
Implementation:
1. Act Fast, React Fast
2. Enable Others to Act Fast, React Fast
3. Internal Developers Need Engagement Too
4. Failure is Inevitable
5. Scale at All Costs
The lessons that we discuss in these slides fall into two buckets: strategy and implementation.
In some cases, the audience will be a small set of known developers (SSKDs). These developers are generally engineers within your company or ones with whom you are partnering.
In other cases, the audience may be a large set of unknown developers. This audience is typically associated with public APIs.
And in some cases, the API will target both audience types.
This is a short list of the things that the target audience will influence.
For Netflix, we started out with a public API, with the audience being a large set of unknown developers. There were no internal use cases at launch.
Based on the target audience of unknown developers, we staffed accordingly. The team was relatively small, with skills around development, evangelism, partnering, testing and documentation.
As streaming became more critical to the company, we started having devices use the API. Our first mistake was that we were probably too late to pivot our architecture based on our change in target audience. At the time, we had many devices call into our REST API, the same one that we used for the unknown developers.
But eventually, the data demonstrated that the architectural change was needed. This chart shows that the private API completely dwarfs the public API in terms of requests. The private API does about five billion requests per day while the public API does between one and two million. This disparity clearly demonstrates the need for us to target the API to the small set of known developers – Netflix's UI engineers – who build the vast majority of the experiences on Netflix devices.
Given the shift in responsibilities, we positioned the team accordingly, hiring for skills mostly around engineering.
And the team size grew by about 6x in the last few years. If the target audience was still the public API, it is likely that the team size would have grown, but less significantly (perhaps 2x) in that time frame.
API consumers care a lot about data formatting and delivery, but each consumer, in such a diverse ecosystem, cares about them differently. For some devices, they may want an XML payload delivered as a complete document, while others may need JSON, protobuffer or some other format, potentially delivered as streamed bits. Because of these diverse needs, we need to separate out the concerns to better enable the consumers to get what they need.
Most companies focus on a small handful of device implementations, most notably Android and iOS devices.
At Netflix, we have more than 1,000 different device types that we support. Across those devices, there is a high degree of variability. As a result, we have seen inefficiencies and problems emerge across our implementations. Those issues also translate into issues with the API interaction.
For example, screen size could significantly affect what the API should deliver to the UI. TVs with bigger screens can potentially fit more titles and more metadata per title than a mobile phone. Do we need to send all of the extra bits for fields or items that are not needed, requiring the device itself to drop items on the floor? Or can we optimize the delivery of those bits on a per-device basis? Different devices have different controllers as well. Some, like the iPad, allow for fast swipe interactions, so the content needs to be there for the entire row. Other devices, like smart TVs or some game consoles, have LRUD (left/right/up/down) controllers, which at least gives the opportunity to fetch the data as the row gets navigated. And the technical capabilities of the devices will influence the interactions as well. Some have more computing power or memory, which will influence how much data you can process on the device vs. how much needs to be gathered in real time.
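One way to make the screen-size point concrete: the same catalog row can be trimmed per device profile before it crosses the network, instead of shipping everything and letting the device drop bits on the floor. A hypothetical sketch (the device profiles, field lists, and counts are invented, not Netflix's actual values):

```python
# Hypothetical device profiles: how many titles fit on screen and which
# metadata fields that UI actually renders.
DEVICE_PROFILES = {
    "tv":    {"titles_per_row": 20, "fields": ["id", "name", "synopsis", "rating"]},
    "phone": {"titles_per_row": 6,  "fields": ["id", "name"]},
}

def trim_row(row, device):
    """Drop the titles and fields the device cannot display, so the
    payload is optimized per device rather than one-size-fits-all."""
    profile = DEVICE_PROFILES[device]
    return [{f: t[f] for f in profile["fields"] if f in t}
            for t in row[:profile["titles_per_row"]]]

catalog_row = [{"id": i, "name": f"Title {i}", "synopsis": "...", "rating": "PG"}
               for i in range(30)]
phone_row = trim_row(catalog_row, "phone")  # 6 titles, 2 fields each
```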
We evolved our discussion towards what ultimately became a discussion between resource-based APIs and experience-based APIs.
The original one-size-fits-all API was very resource oriented with granular requests for specific data, delivering specific documents in specific formats.
The interaction model looked basically like this, with (in this example) the PS3 making many calls across the network to the OSFA API. The API ultimately called back to dependent services to get the corresponding data needed to satisfy the requests.
We have decided to pursue an experience-based approach instead. Rather than making many API requests to assemble the PS3 home screen, the PS3 will potentially make a single request to a custom, optimized endpoint.
In an experience-based interaction, the PS3 can potentially make a single request across the network border to a scripting layer (currently Groovy), in this example to provide the data for the PS3 home screen. The call goes to a very specific, custom endpoint for the PS3 or for a shared UI. The Groovy script then interprets what is needed for the PS3 home screen and triggers a series of calls to the Java API running in the same JVM as the Groovy scripts. The Java API is essentially a series of methods that individually know how to gather the corresponding data from the dependent services. The Java API then returns the data to the Groovy script, which then formats and delivers the very specific data back to the PS3.
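The flow just described (one device call to a custom endpoint, which fans out in-process to granular data methods and then formats a device-specific payload) can be caricatured as follows. This is an illustrative Python stand-in for the Groovy/Java layers, with invented method names, not Netflix's actual code:

```python
# Stand-ins for the granular Java API: each method knows how to gather
# one kind of data from its dependent service.
def get_user(user_id):
    return {"id": user_id, "name": "Example User"}

def get_recommendations(user_id):
    return [{"id": 1, "name": "Show A"}, {"id": 2, "name": "Show B"}]

def ps3_home_screen(user_id):
    """Stand-in for the device-specific script endpoint: one network call
    from the device fans out to several in-process API calls, then the
    script formats exactly what the PS3 home screen needs."""
    user = get_user(user_id)
    recs = get_recommendations(user_id)
    return {
        "greeting": f"Welcome back, {user['name']}",
        "rows": [{"title": "Top Picks", "items": [r["name"] for r in recs]}],
    }

screen = ps3_home_screen(42)
```

The design choice worth noting is that the many fine-grained calls still happen, but inside one JVM rather than across the network border, which is what makes the single optimized request possible.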
Our original REST API had granular endpoints and generic interaction models. This leads to different versions when significant changes are made. The REST API had three primary versions before our move to the experience-based API.
Had we persisted with the REST API, we very likely would have continued to add versions while needing to support the old ones. The need to support prior versions stems from older device implementations that may not be able to be updated or retired, thus forcing us to maintain these endpoints for a long time (perhaps as long as 10 years).
Our target with the experience-based API was to build an architecture that allowed us to be versionless. Through SSKDs, separation of concerns, abstraction layers, and interaction optimizations, we are able to move to a deprecation model.
The primary goal is to limit versioning in the device-to-server interaction. Ideally, we can deprecate effectively in the server interactions as well, but that is sometimes more difficult. Back to our architecture view, the data can now flow from the services into the Java APIs. We expose granular methods (think data elements rather than resources) to the scripting tier. If a method needs to change, we can add a new method and then work closely with the SSKDs to migrate the calling scripts, enabling us to deprecate the old method. If we are not able to move the scripts, we can insulate the devices from the change either in the Java layer or in the scripting tier.
Several years ago, we were deploying changes roughly every two weeks. We would accumulate changes over that time and then drop them into production all at once. Think of it as gathering water in a bucket.
What we found was that our releases were unpredictable, sometimes resulting in outages, broken functionality, or incomplete work. Accordingly, we decided to slow down, changing our release cycles to three weeks. We figured that would give us more time to test our work. In other words, we got a larger bucket.
Over time, however, we learned that the longer release cycle didn’t improve predictability or quality. Instead, it just slowed us down. In response, we moved aggressively towards continuous delivery. Instead of delivering water in buckets, we had a steady stream of water from a hose. This enabled us to have smaller changes, more isolated and testable, pushed to production instead of having bigger releases with more complexity.
This is how code flows through the system. We have multiple canary releases per day. Internal envs are deployed ~8 times/day in 3 AWS regions. Prod deployments happen 2-3 times/week and can be triggered on demand.
This dashboard lets us track the status of our master branch at any time. Builds that fail at any step in the pipeline are stopped from going further.
A quick word on Testing. We follow the ‘Operate what you Build’ model where developers are responsible for shepherding their changes all the way through to production. We provide them with the tools necessary to help them gain confidence in the quality of their code. One such tool is the automated Canary Analyzer.
Canary Analysis is the process wherein a small percent of traffic is routed to the new code and its performance is compared against the old code based on 1000s of metrics.
A detailed report gives further insight into potential problem areas. In this case, our canary gives a score of 87%, which means it is likely not ready for release.
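A toy version of that comparison: score each metric by how far the canary deviates from the baseline, then aggregate into a percentage. The metrics, tolerance, and scoring rule here are invented for illustration; the real analyzer weighs thousands of metrics:

```python
def metric_score(baseline, canary, tolerance=0.05):
    """Pass a metric if the canary is within `tolerance` (relative) of baseline."""
    if baseline == 0:
        return canary == 0
    return abs(canary - baseline) / baseline <= tolerance

def canary_score(baseline_metrics, canary_metrics, tolerance=0.05):
    """Percentage of metrics where the canary behaves like the baseline."""
    passed = sum(metric_score(baseline_metrics[m], canary_metrics[m], tolerance)
                 for m in baseline_metrics)
    return 100.0 * passed / len(baseline_metrics)

baseline = {"rps": 1000, "p99_latency_ms": 120, "error_rate": 0.01, "cpu": 0.55}
canary   = {"rps": 1010, "p99_latency_ms": 180, "error_rate": 0.01, "cpu": 0.56}
score = canary_score(baseline, canary)  # latency regressed, so score < 100
```

A score below some release threshold would block the push, mirroring the 87% example above.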
In tandem with canaries, we use Red/Black deployments as well.
The Red/Black process allows us to run production code in one cluster while we spin up the new code in a second one. As the new code proves itself, we can route all traffic to it and eventually shut down the old cluster. It also allows us to have a fast, automated rollback in the event that the new code is seeing problems.
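The mechanics reduce to: keep both clusters running, flip the router, and keep the old cluster warm for instant rollback. A minimal sketch (the router object and cluster names are hypothetical):

```python
class Router:
    """Routes all traffic to exactly one cluster; the other stays warm."""
    def __init__(self, active):
        self.active = active

    def switch_to(self, cluster):
        previous, self.active = self.active, cluster
        return previous  # kept running for fast rollback

router = Router(active="red-v41")       # current production code
old = router.switch_to("black-v42")     # new code takes all traffic

# If v42 misbehaves, rollback is one switch, not a redeploy:
router.switch_to(old)
```

The cost is temporarily running double capacity; the payoff is that rollback takes as long as a routing change.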
Our architecture enables us to move faster because of the scripting tier. But this also put us in position to help our consuming teams and dependency teams to move faster as well.
Let’s peek under the hood of the API Server. Client teams deploy endpoints dynamically based on their own schedule. Their cycles are completely decoupled from server deployments. Newly deployed endpoints are live and ready to take traffic within minutes.
The Endpoint Activity Dashboard shows recent deployment activity. Rollbacks can be performed in a matter of minutes as well.
Our dependent services provide us with client libraries that get compiled into our JVM upon deployment. These libraries typically expose static interfaces, which means changes to the interfaces require coding and deployments in our tier. Similar to the dynamic endpoints, we also have an opportunity to improve the nimbleness and velocity around these libraries.
One such improvement is dependency canaries, where we evaluate our new code against the dependencies. This is a dashboard that provides insights into these canaries.
Making the interaction with the consumers of the API dynamic has led to increased agility on the UI side. We are also exploring ways to increase the speed of iteration on the dependencies side. The current interaction model uses static domain models and client libraries to handle the data flow through the API. This results in long iteration cycles for even the simplest of use cases. We are actively pursuing an approach where our dependencies will be able to expose new data by using dynamic pass-through model using a Dictionary of key values.
The idea is that this model will avoid the static update cycle on the API end, thereby resulting in shorter iteration cycles. This will require investment in things like safety checks and discoverability of the API. We are instrumenting the API layer to inspect traffic at runtime and provide insights into API usage.
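The pass-through idea reduces to: the dependency publishes new keys in a dictionary-shaped payload, and the API forwards them without a schema change or redeploy, with a safety check on what gets forwarded. A hypothetical sketch (the field names and whitelist rule are invented):

```python
def dependency_response():
    # The dependency can add a key like "mood_tags" tomorrow without
    # waiting for a static model update and API deployment.
    return {"title_id": 7, "name": "Show A", "mood_tags": ["dark", "witty"]}

def api_passthrough(raw, allowed_prefixes=("title_", "name", "mood_")):
    """Safety check: only forward keys the API layer has whitelisted,
    so the pass-through stays inspectable rather than a free-for-all."""
    return {k: v for k, v in raw.items()
            if any(k.startswith(p) for p in allowed_prefixes)}

payload = api_passthrough(dependency_response())
```

The trade-off is that a dictionary is less discoverable than a typed model, which is why the runtime traffic inspection mentioned above matters.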
One of the early mistakes that we made in this new architecture was not treating internal developers like we did public developers. We don’t need the same degree of evangelism, but we do need to maintain strong communications with the client teams while providing robust tools and systems to help them be better developers in our system. An example of us being late to this is represented by our endpoint dashboard. One of our teams went from having about 30 scripts to about 500 in a matter of weeks. Each of these scripts is dynamically compiled into the JVM, occupying PermGen space. As the script count shot up, we hit limits in our PermGen, which resulted in an outage. And an outage in our layer means people cannot stream Netflix. Of course, there is nothing like an outage to kickstart new behaviors. As a result, we immediately set up alerts and then focused more heavily on building tools to support the developers.
Included in that effort is comprehensive documentation.
We built an array of tools as well, including this REPL.
And prepared frequent trainings and videos.
Nobody has a 100% SLA, so things will fail.
In fact, a few years ago, we had many failures on a routine basis.
Many of those failures were a result of failures in a dependent service that we did a poor job of protecting against. Because we are the last step before delivering content to the customers, we have a unique opportunity to help protect customers from such failures.
Hystrix allows us to be resilient to failure by implementing the bulkheading and circuit breaker patterns. Hystrix is open source and available in our GitHub repository.
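The core ideas can be sketched in a few lines: wrap each dependency call, trip a circuit after repeated failures so further calls fail fast, and serve a fallback instead of cascading the failure to subscribers. This is a toy illustration in the spirit of Hystrix, not its actual API:

```python
class CircuitBreaker:
    """Toy circuit breaker; real Hystrix adds thread-pool bulkheads,
    timeouts, rolling windows, and half-open probing."""
    def __init__(self, fallback, failure_threshold=3):
        self.fallback = fallback
        self.failure_threshold = failure_threshold
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.failure_threshold:
            return self.fallback()       # circuit open: fail fast
        try:
            result = fn(*args)
            self.failures = 0            # a healthy call closes the circuit
            return result
        except Exception:
            self.failures += 1
            return self.fallback()       # degrade gracefully, don't cascade

def flaky_ratings_service(title_id):
    raise TimeoutError("dependency is down")

breaker = CircuitBreaker(fallback=lambda: {"rating": None})
responses = [breaker.call(flaky_ratings_service, 7) for _ in range(5)]
# Every response is the fallback; after three failures the flaky
# service isn't even called anymore.
```

Serving a degraded rating instead of an error is what lets a dependency outage stay invisible to the streaming subscriber.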
Failure simulation and Game Day exercises are a key part of the overall story. The Simian Army is a fleet of monkeys that simulate failures and alert us to non-conformities in an automated manner. Chaos Monkey periodically terminates AWS instances in production to see how the system responds to the instance disappearing. Latency Monkey introduces latencies and errors into a service to see how it responds and lets us assess the customer quality of experience. Conformity Monkey alerts us to variations in application versions across regions. The monkeys are also available in our open-source GitHub repository.
Because of our pivot to the private API and the explosion of devices consuming it, our traffic grew tremendously in a few years (and continues to grow at very fast rates). Scaling our systems to support this growth is absolutely critical to the success of the company. Techniques such as throttling are not an option because they only serve to limit the interactions from our streaming subscribers. Instead, we need to be able to handle any load that our devices throw at us. This manifests in many ways, but the following is a detail on one of them – instance scaling.
Let’s go back to the traffic chart. The pattern is predictable, with higher peaks on the weekends.
To offset the limitations of purely reactive autoscaling, we created Scryer (not yet open sourced, but in production at Netflix). Scryer evaluates needs based on historical data (week-over-week and month-over-month metrics), adjusts instance minimums based on algorithms, and relies on Amazon Auto Scaling for unpredicted events.
This graph shows that Scryer’s predictions are in line with actual RPS. In production, Scryer allows us to get instances into production prior to the need (unlike Amazon’s reactive autoscaling engine, which triggers the ramp-up based on immediate need and then must wait until server start-up is complete). Because the instances are there in advance, Scryer smooths out load averages and response times, which in turn improves the customer experience.
This is an example of what Scryer looks like during an outage. When actual traffic dropped because of an outage, the reactive autoscaling engine would have downsized the farm. In this case, Scryer kept the farm sized correctly so that we were able to deal with the traffic spike after the recovery.
As a side benefit (not the initial intent), Scryer also allows us to be more precise with our instance counts, reducing inefficiencies.
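Scryer itself is not open sourced, but the week-over-week idea can be caricatured in a few lines: predict the coming hour's demand from the same hour in prior weeks, convert it to a pre-provisioned instance floor, and let reactive autoscaling cover anything unpredicted above that floor. All numbers, names, and the headroom rule here are invented:

```python
import math

def predicted_rps(history, hour_of_week):
    """Predict demand from the same hour in prior weeks (week over week)."""
    samples = [week[hour_of_week] for week in history]
    return sum(samples) / len(samples)

def instance_minimum(rps, rps_per_instance=1000, headroom=1.25):
    """Turn predicted RPS into an instance floor, provisioned in advance
    so capacity is already up when the predicted load arrives."""
    return math.ceil(rps * headroom / rps_per_instance)

# Three weeks of history for one hour slot (e.g. a Saturday evening peak).
history = [{"sat-20": 48000}, {"sat-20": 50000}, {"sat-20": 52000}]
rps = predicted_rps(history, "sat-20")
floor = instance_minimum(rps)  # ceil(50000 * 1.25 / 1000) = 63
```

Because the floor is based on history rather than current traffic, it also explains the outage behavior above: a temporary drop in actual traffic does not shrink the fleet.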