Talk Abstract
At Zalando we run 84 Kubernetes clusters in AWS. Ingress objects are
enough to provision ALBs, do advanced HTTP routing and create DNS
records. I will show how to support blue-green deployments, A/B
testing, shadow traffic and feature toggles with the current Ingress
resource and our Open Source tools.
Talk Description
One of the hottest topics in Kubernetes is how to get ingress traffic
right. This is an opinionated talk that shows how one of the
biggest online shops in Europe does it. All tools are Open Source and
can be used by the audience. The presented use cases are production
relevant.
Notes
- For seven years I have worked as a system and software engineer at Zalando.
- I work in the team that runs 84 Kubernetes clusters for Zalando.
- I am one of the core developers who implemented the ingress features being presented.
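The blue-green traffic switch mentioned in the abstract can be driven from the Ingress object itself. A minimal sketch, assuming Zalando's Skipper ingress controller and its `zalando.org/backend-weights` annotation; the application and service names here are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                      # hypothetical application name
  annotations:
    # Skipper reads this annotation and splits traffic between the listed
    # backends; shifting weight toward my-app-green completes the switch.
    zalando.org/backend-weights: |
      {"my-app-blue": 80, "my-app-green": 20}
spec:
  rules:
    - host: my-app.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-blue   # current production version
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-green  # new version receiving partial traffic
                port:
                  number: 80
```

Updating only the annotation weights is enough to move traffic gradually; no Deployment rollout is required for the switch itself.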
App Mod 02: A developer intro to OpenShift - Judy Breedlove
This document describes OpenShift, a container application platform based on Kubernetes. It provides an overview of OpenShift concepts like pods, services, routes and projects. It then outlines a lab scenario where a developer will learn to use OpenShift features like quick deployments, separate dev and prod environments, and promoting apps between environments using CI/CD pipelines. The goal is for the developer to break up a monolithic CoolStore app into microservices using OpenShift tools and workflows.
Ansible Tower provides a visual dashboard and API to manage Ansible automation. The Phantom app for Ansible Tower allows Phantom to consume Ansible modules and playbooks without needing to write custom apps. It provides a remote triggered blackhole solution that uses Ansible playbooks launched from Phantom to configure routers to block malicious IPs. This allows leveraging existing Ansible content while providing a native Phantom solution.
Better Software is Better than Worse Software - Michael Coté (Cape Town 2019) - VMware Tanzu
This document discusses the benefits of a consistent platform and product process for building cloud native applications. It provides examples from various companies that illustrate how adopting these practices can increase developer productivity, reduce costs, speed up release cycles, and improve software quality. Maintaining a consistent platform with tools like Pivotal Application Service, Pivotal Container Service, and services from the Pivotal marketplace allows companies to focus on building applications rather than infrastructure.
Alessandro Confetti - Learn how to build decentralized and serverless html5 a... - Codemotion
Do you have an idea for a startup and don't want to pay for scaling it up? Forget about bandwidth problems, servers to install and pay for, with the power of IPFS and the blockchain. In this talk, we will explore how to build an HTML5 DAPP (distributed application) with EmbarkJS, and figure out how to rethink servers, storage, messaging, data and payments in a distributed and decentralised way with the help of Ethereum's smart contracts and IPFS distributed storage.
Deploy Prometheus on Kubernetes to monitor Containers. Containers are dynamic and often deployed in large quantities. In such an environment, monitoring is crucial to help with the overall health of the kubernetes environment. This tutorial explains how to deploy prometheus on Kubernetes.
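A minimal sketch of the kind of configuration such a deployment sets up, assuming Prometheus's built-in Kubernetes service discovery (`kubernetes_sd_configs`); the job name and opt-in annotation convention are illustrative:

```yaml
# prometheus.yml fragment: discover and scrape pods via the Kubernetes API.
scrape_configs:
  - job_name: kubernetes-pods        # illustrative job name
    kubernetes_sd_configs:
      - role: pod                    # enumerate every pod in the cluster
    relabel_configs:
      # Only scrape pods that opt in via the conventional annotation
      # prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```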
Zero downtime deployment of micro-services with Kubernetes - Wojciech Barczyński
A talk on deployment strategies with Kubernetes, covering Kubernetes configuration files and the actual implementation of your service in Golang and .NET Core.
You will find demos for recreate, rolling updates, blue-green, and canary deployments.
You will find the source and demos on GitHub: https://github.com/wojciech12/talk_zero_downtime_deployment_with_kubernetes
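The rolling-update strategy from these demos can be expressed directly in the Deployment spec. A minimal sketch with illustrative names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                  # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0             # never drop below desired capacity
      maxSurge: 1                   # add one extra pod while rolling
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:v2   # illustrative image
          readinessProbe:           # gate traffic until the new pod is ready
            httpGet:
              path: /healthz
              port: 8080
```

With `maxUnavailable: 0` and a readiness probe, Kubernetes only retires an old pod after its replacement is serving, which is the basis of zero-downtime rollouts.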
The document discusses integrating collaboration and automation tools like Cisco Spark and Ansible for workflow management. It provides examples of using Spark APIs to automate tasks like creating rooms and adding people. It also demonstrates using Ansible to automate F5 configuration across multiple data centers and integrating with Spark to track task completion. The presentation shows how Ansible can be used with CSV files to simplify F5 configuration deployment for non-programmers. Automation helps improve staff productivity, communication, and customer experience while reducing costs.
Radical Agility with Autonomous Teams and Microservices in the Cloud - Zalando Technology
A talk by software engineers Jan Löffler and Henning Jacobs on Zalando's adoption of microservices, cloud computing and autonomous teams. Zalando is Europe's largest online fashion platform, doing business in 15 countries with more than 15 million users. Visit tech.zalando.com for more information about Zalando's technology, open source projects and opportunities.
This document discusses implementing and testing a self-managed logging and visualization solution for a Kubernetes cluster. It considers tools like FluentD, Elasticsearch, Kibana, Helm, and Kops for collecting, processing, and visualizing logs. A turn-key deployment approach using Helm is recommended to install all stack components from a single chart and leverage dependencies. Concerns about authentication, capacity planning, and security hardening are noted for future improvement.
Learn how you can quickly develop, host, and scale applications in the AWS cloud with Red Hat's OpenShift. During this session, we walk you through the straightforward process of deploying and managing your own Linux-based application in the AWS cloud, and also discuss key use cases and advantages of container platform configuration, deployment, and administration.
Flexible, hybrid API-led software architectures with Kong - Sven Bernhardt
Kong is a lightweight, cloud-native API solution that makes it easier and faster than ever to connect APIs and microservices in today’s hybrid, multi-cloud environments. With its agnostic, flexible deployment approach, Kong can be used in today’s heterogeneous IT system landscapes to integrate a wide variety of data and systems – even across company boundaries – using APIs. In addition to REST APIs, Kong also offers support for gRPC and GraphQL, which broadens the possibilities to implement modern application architectures.
In this presentation, we will discuss deployment patterns and use cases for Kong to demonstrate the flexibility of the platform. Using a practical example, aspects of the API development and deployment process as well as the integration in existing software development processes will be discussed.
Containers vs serverless - Navigating application deployment options - Daniel Krook
IBM presentation at the O'Reilly Open Source Convention Container Day in Austin, Texas on May 9, 2017.
https://conferences.oreilly.com/oscon/oscon-tx/public/schedule/detail/61403
New technologies seem to arrive fast and furious these days. We were just getting used to our new container world when serverless arrived. But is it better, faster, and cheaper, as the hype suggests?
Daniel Krook explores a real application packaged using popular open source container technology and walks you through a migration to an event-oriented serverless paradigm, discussing the trade-offs and pros and cons of each approach to application deployment and examining when serverless benefits applications and when it doesn't.
You’ll learn considerations for using serverless API frameworks and how to reuse some of your containerization strategy as you move from more traditional application models to an event-driven world.
Daniel Krook, Software Architect, IBM
The document discusses Daniel Ramos and his work with Kubernetes. It provides information on Daniel Ramos' background as a full stack developer and hummus lover who works for Acid Tango, a digital product studio based in Madrid and Tenerife. The document then discusses various Kubernetes concepts like pods, deployments, services, ingress controllers, and Helm. It also mentions some other related tools like Docker, GraphQL, gRPC, Zipkin, Istio, ELK, and Scryer.
This document discusses how to setup a telco in the cloud using open source technologies. It describes how the company X by Orange uses infrastructure as code practices like Git, Packer, Terraform, and Ansible to provision their cloud infrastructure immutably. They deploy applications as containers using OpenShift and monitor services with Prometheus and Netdata. The goal is to provide flexible online solutions to customers faster than traditional telcos by embracing a cloud native approach.
Building and Running Workloads the Knative Way - QAware GmbH
Serverless Computing 2019, November 2019, London: Talk by Mario-Leander Reimer (@LeanderReimer, Principal Software Architect at QAware)
=== Please download slides if blurred! ===
Abstract: Knative is a K8s based platform to build, deploy, manage and run serverless workloads.
In this session we will take a look at the concepts of each Knative building block and apply them directly in practice. First, we’ll define and use Tekton pipelines to build our workloads. Then we’ll use Knative serving to rapidly deploy serverless containers with automatic scaling up and down to zero. Finally, we’ll show how to build loosely coupled event-driven architectures with the help of Knative eventing. This session will also cover the different installation options leveraging either Istio or the API gateways Gloo and Ambassador.
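A minimal sketch of the Knative Serving side described above, assuming the standard `serving.knative.dev/v1` Service type and its autoscaling annotations; the service name and sample image are illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                       # illustrative service name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale down to zero
        autoscaling.knative.dev/maxScale: "10"  # cap scale-out
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
          env:
            - name: TARGET
              value: "Knative"
```

With `minScale: "0"`, Knative removes all pods when no requests arrive and cold-starts a new one on the next request, which is the scale-to-zero behaviour the session demonstrates.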
ContainerConf 2019, November 2019, Mannheim: Talk by Mario-Leander Reimer (@LeanderReimer, Chief Technologist at QAware)
== Please download the document if it appears blurred! ==
Abstract:
Not long ago, microservice architectures revolutionized the way we build software systems: instead of monoliths, systems are now composed and run as autonomous services.
Serverless and FaaS are the next logical step in this evolution, reducing the complexity of developing and operating such systems.
FaaS platforms are currently springing up like mushrooms: Knative, OpenFaaS, Fission and Nuclio are just a few examples. But which of them are already suitable for use in your next project? Can hybrid architectures be built with them, or does everything have to be fully functionless? Let's find out.
This document discusses containerization and orchestration on Microsoft Azure. It provides an overview of moving traditional applications to modern applications using microservices and containers. It then discusses what containers are and how to develop Kubernetes applications. Finally, it outlines how Azure Kubernetes Service simplifies deploying and managing Kubernetes and allows running both Windows and Linux containers in the same cluster.
PaaS is dead, Long live PaaS - Defrag 2016 - brendandburns
This document discusses the evolution of platform as a service (PaaS) over time. It argues that PaaS is no longer defined by proprietary platforms, as open source tools now allow developers to build their own PaaS solutions using containers and orchestration. The future of PaaS involves a more distributed model where specialized PaaS are built for different domains, and businesses may pay for support, usage-based pricing, or fully-managed services rather than proprietary platforms. Overall, the document suggests that PaaS as a concept is still relevant even if the traditional definition is changing.
For a lot of companies it is a challenge to automate their development pipeline. We would like to talk about one possible solution based on GitLab and Terraform. The infrastructure and development process is built around Git repositories. With Terraform it is possible to express parts of the infrastructure as code, so every change in the application and in the infrastructure can be tracked in Git repositories. This also greatly benefits the CI process, making it possible to automate the whole testing and integration process very easily.
The document discusses serverless computing and Apache OpenWhisk. It describes how OpenWhisk allows developers to focus on business logic rather than infrastructure by executing code in response to events in a serverless manner. OpenWhisk provides a programming model where developers can create actions to handle triggers via rules. A number of demos are presented showing how to create triggers, actions and rules with OpenWhisk to handle events and build REST APIs.
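The action model described above can be sketched in a few lines, assuming the standard OpenWhisk Python runtime convention where an action exposes a `main(args)` function that receives the event parameters and returns a JSON-serializable dict:

```python
# Minimal OpenWhisk-style Python action. On the platform, triggers fire
# rules, rules invoke actions, and the runtime calls main() with the
# event parameters merged into `args`.

def main(args):
    """Handle one invocation; `args` carries trigger/event parameters."""
    name = args.get("name", "world")
    return {"greeting": "Hello, %s!" % name}

# Local invocation for illustration; on OpenWhisk the platform calls main().
if __name__ == "__main__":
    print(main({"name": "OSCON"}))
```

Deployed behind an API route, the returned dict becomes the JSON response body, which is how OpenWhisk actions back REST APIs.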
This document discusses moving two customer-facing applications, Haufe Instant Feedback and Haufe Agile Hats, from self-hosted to cloud-native architectures on AWS. It provides an overview of the architectures, which include separating the applications by product at the VPC level and using AWS Fargate for container orchestration without Kubernetes. The document outlines the security measures taken and continuous integration/delivery pipeline used to deploy updates from development to production environments on AWS.
This document discusses using GitLab CI/CD to provision and manage infrastructure with Terraform Cloud (TFC). It begins with an agenda that includes an introduction to Terraform and TFC, integrating them with GitLab, and demos of using GitLab CI/CD pipelines with TFC for infrastructure as code. It then provides bios of two presenters and discusses how GitLab offers a single platform to plan, code, test, secure and release applications. The document concludes by pointing to additional resources on using GitLab CI with Terraform.
Goodbye CLI, hello API: Leveraging network programmability in security incid... - Joel W. King
Automation and orchestration have been the purview of cloud computing and system administration, but they are now increasingly important to security operations and network administration. By automating the data collection and corrective action components of incident response, significant time savings can be realized. Corrective actions often need to be applied to multiple assets in the organization, and automation improves consistency as well as saving time. This talk describes how security and IT orchestration can be integrated through code reuse and integration with APIs.
We demonstrate how Phantom and Ansible can be integrated to automate the incident response data collection, corrective action, and notification.
Building serverless applications with Apache OpenWhisk - Daniel Krook
IBM presentation at the O'Reilly Open Source Convention in Austin, Texas on May 10, 2017.
https://conferences.oreilly.com/oscon/oscon-tx/public/schedule/detail/61295
Apache OpenWhisk on IBM Bluemix provides a powerful and flexible environment for deploying cloud-native applications driven by data, message, and API call events. Daniel Krook explains why serverless architectures are attractive for many emerging cloud workloads and when you should consider OpenWhisk for your next project. Daniel then shows you how to get started with OpenWhisk on Bluemix right away, using several samples on GitHub.
Daniel Krook, Software Architect, IBM
Modern HTTP routing with Skipper adds visibility and deployment patterns such as blue-green, shadow traffic, and simple A/B tests to your toolchain. It runs with different data clients to pull route information from a source, for example Kubernetes Ingress objects.
A discussion of the basics of the AWS CDK with its pros and cons, including how the Cloud Development Kit (CDK) helped overcome the challenges faced in their previous serverless IaC solution.
GitHub repo for the PoC source code: https://github.com/dtl-open/cdkpoc
Cloud-native .NET Microservices with Kubernetes - QAware GmbH
Mario-Leander Reimer presented on building cloud-native .NET microservices with Kubernetes. He discussed key principles of cloud native applications including designing for distribution, performance, automation, resiliency and elasticity. He also covered containerization with Docker, composing services with Kubernetes and common concepts like deployments, services and probes. Reimer provided examples of Dockerfiles, Kubernetes definitions and using tools like Steeltoe and docker-compose to develop cloud native applications.
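The probe concepts mentioned above can be sketched as a pod-spec fragment; container name, image, paths, and ports are illustrative:

```yaml
# Kubernetes pod spec fragment: liveness and readiness probes.
containers:
  - name: orders                    # illustrative container name
    image: registry.example.com/orders:1.0
    livenessProbe:                  # restart the container if this fails
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 10
    readinessProbe:                 # drop from Service endpoints until ready
      httpGet:
        path: /health/ready
        port: 8080
      periodSeconds: 5
```

Separating the two endpoints matters: liveness failures cause restarts, while readiness failures only pause traffic, so a slow startup or dependency outage should trip readiness rather than liveness.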
ITGM#14 - How do we use Kubernetes in Zalando - Uri Savelchev
This document discusses Zalando's use of Kubernetes in their technology infrastructure. It describes their motivation for adopting Kubernetes which included improved resource efficiency, cost efficiency, velocity and compliance. It provides an overview of their Kubernetes architecture which includes fully automated operations with no manual cluster management, single production clusters per product, control plane components running separately from nodes, and multi-AZ clusters. It also discusses how their continuous delivery platform integrates with Kubernetes to deploy applications and with AWS for infrastructure provisioning.
Large Scale Kubernetes on AWS at Europe's Leading Online Fashion Platform - A... - Henning Jacobs
Bootstrapping a Kubernetes cluster is easy; rolling it out to nearly 200 engineering teams and operating it at scale is a challenge.
In this talk, we are presenting our approach to Kubernetes provisioning on AWS, operations and developer experience for our growing Zalando Technology department. We will highlight in the context of Kubernetes: AWS service integrations, our IAM/OAuth infrastructure, cluster autoscaling, continuous delivery and general developer experience. The talk will cover our most important learnings and we will openly share failure stories.
Presented on 2017-09-28 at AWS Tech Community Days in Cologne.
This document summarizes Michael Duergner's presentation on Kubernetes and AWS at Zalando. Some key points:
1. Zalando is a large European fashion retailer that uses Kubernetes on AWS to run its infrastructure across 15 markets and 6 fulfillment centers.
2. Zalando developed its own solutions like the Kubernetes ingress controller for AWS and PostgreSQL operator to run its workloads on Kubernetes in a scalable and reliable way across 30 production clusters.
3. Some challenges of using Kubernetes at large scale include stability during cluster updates, ease of onboarding over 200 teams, and providing a good user experience when combining Kubernetes and AWS configurations.
Introduction to the Container Networking and SecurityCloud 66
This document introduces container networking and some of the challenges it poses. It discusses how Docker's default bridge networking works but has limitations around port constraints and lack of "real IP networking". Overlay networks are presented as an alternative but have drawbacks around state, isolation between networks, and requiring developers to be networking experts. Project Calico is then introduced as an open source project that aims to enable scalable, simple and secure IP networking for containers through features like equal cost multi-path routing and rich micro-service policy frameworks.
This document summarizes the education and experience of Alen Badel. He is currently pursuing a Bachelor of Engineering Science in Computer Engineering with Professional Internship from Western University, specializing in Electrical Engineering and Computer Science. His experience includes internships at Ciena Corporation and Western University's Implantable Systems Laboratory, where he worked on projects related to network infrastructure, embedded Linux systems, and CMOS IC design. Currently, he is working on an ONOS & Open Cloud project and a capstone project designing a cloud gateway management system.
Safer Commutes & Streaming Data | George Padavick, Ohio Department of Transpo...HostedbyConfluent
The Ohio Department of Transportation has adopted Confluent as the event driven enabler of DriveOhio, a modern Intelligent Transportation System. DriveOhio digitally links sensors, cameras, speed monitoring equipment, and smart highway assets in real time, to dynamically adjust the surface road network to maximize the safety and efficiency for travelers. Over the past 24 months the team has increased the number and types of devices within the DriveOhio environment, while also working to see their vendors adopt Kafka to better participate in data sharing.
This document provides an overview of cloud native applications and the cloud native stack. It discusses key concepts like microservices, containerization, composition using Docker and Docker Compose, and orchestration using Kubernetes. It provides examples of building a simple microservices application with these technologies and deploying it on Kubernetes. Overall it serves as a guide to developing and deploying cloud native applications.
A hitchhiker‘s guide to the cloud native stackQAware GmbH
Container Days 2017, Hamburg: Vortrag von Mario-Leander Reimer (@LeanderReimer, Cheftechnologe bei QAware).
Abstract: Cloud-Größen wie Google, Twitter und Netflix haben die Kernbausteine ihrer Infrastruktur quelloffen verfügbar gemacht. Das Resultat aus vielen Jahren Cloud-Erfahrung ist nun frei zugänglich, und jeder kann seine eigenen Cloud-nativen Anwendungen entwickeln – Anwendungen, die in der Cloud zuverlässig laufen und fast beliebig skalieren. Die einzelnen Bausteine wachsen zu einem großen Ganzen zusammen, dem Cloud Native Stack.
In dieser Session stellen wir die wichtigsten Konzepte und Schlüsseltechnologien vor und bringen dann eine Spring-Cloud-basierte Beispielanwendung schrittweise auf Kubernetes und DC/OS zum Laufen. Dabei diskutieren wir verschiedene praktikable Architekturalternativen.
How we built Packet's bare metal cloud platformPacket
Overview on Packet's approach to bare metal server and network automation for our public cloud. Presented at the Downtech NY Tech meetup on May 19th, 2016
The Developer's Journey through IBM Cloud Pak for ApplicationsMiroslav Resetar
This document outlines the steps a developer would take to use IBM Cloud Pak for Applications. It discusses installing OpenShift, installing the IBM Cloud CLI, creating an OpenShift cluster, loading IBM Cloud Pak images, installing common services, and installing Cloud Pak for Applications. It also briefly introduces tools like Kabanero and Appsody that developers can use to build cloud-native applications on Cloud Pak for Applications.
Cloud Native Applications on OpenShiftSerhat Dirik
This document discusses cloud native development and DevOps using OpenShift Container Platform. It begins by defining cloud native as involving both application architecture and the development, deployment and management processes used. It then discusses how containers evolve application delivery and how container platforms are part of the DevOps tool kit. The document outlines the path to DevOps, emphasizing culture, automation and using the right platform. It also notes that DevOps and containers often go hand in hand, with many DevOps adopters using containers. The document then discusses various capabilities of OpenShift and how it supports cloud native development.
VMware & Pivotal’s Pivotal Container Service (PKS) is a container management platform that provides a Kubernetes container orchestration service. PKS runs Kubernetes clusters on vSphere and VMware Cloud Foundation. It provides high availability, security and multi-tenancy capabilities. PKS integrates deeply with NSX for network and security services.
Discover the benefits of Kubernetes to host a SaaS solutionScaleway
What you can take away from this presentation:
- What a SaaS solution is
- Key figures on the SaaS market
- Advantages of Kubernetes Kapsule for SaaS
- How to optimize your costs and loads while maintaining stability
- How to guarantee the security of your infrastructures
- The difference between a multi-instance and a multi-tenant architecture
This presentation explains what serverless is all about, explaining the context from Devs & Ops points of view, and presenting the various ways to achieve serverless (Functions a as Service, BaaS....). It also presents the various competitors on the market and demo one of them, openfaas. Finally, it enlarges the pictures, positionning serverless, combined with Edge computing & IoT, as a valuable triptic cloud vendors are leveraging on top of, to create end-to-end offers.
Docker, cornerstone of an hybrid cloud?Adrien Blind
In this presentation, I propose to explore the orchestration & hybridation potential raised by Docker 1.12 Swarm Mode and the subsequent benefits.
I'll first remind why docker fits well the microservices paradigms, and how does this architecture engender new challenges : service discovery, app-centric security, scalability & resilience, and of course, orchestration.
I'll then discuss the opportunity to create your own docker CaaS platform hybridating simultaneously on various cloud vendors & traditional datacenters, better than just leveraging on vendors integrated offers.
Finally, I'll discuss the rise of new technologies (Windows containers, ARM architectures) in the docker landscape, and the opportunity of integrating them in a global docker composite orchestration, enabling to depict globally complex apps.
Similar to 2018 04-06 kubernetes ingress in production (20)
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
[Slide 2] We are constantly innovating technology: home-brewed, cutting-edge & scalable technology solutions. ~2,000 employees from 77 nations at 7 tech locations (HQs in Berlin) help our brand to win online.
Welcome to DecompileD!
Today, my talk is about Kubernetes Ingress in production.
To have some context I will show you some Zalando numbers.
There are about 2000 employees working for Zalando Tech.
We have 7 tech hubs in Europe.
My customers are all developer teams and we need to scale!
Let's have a brief look into Zalando's technology stack for more technical context.
We started as a PHP Magento shop.
We rewrote it with Java and Postgres and deployed it into Linux containers.
With a management shift we moved to the AWS cloud, and we are now evolving into a state-of-the-art Kubernetes infrastructure.
We use Docker images as deployment artifacts and Kubernetes to orchestrate them.
What is meant by large scale?
Let’s have a very brief look into Kubernetes objects relevant to the talk
A Deployment creates a set of Pods.
<wait>
A Kubernetes Service selects a set of Pods and acts as a TCP load balancer in front of them.
<wait>
An Ingress is an external access point to Services.
<wait>
Because we have about 300 teams that want to deploy,
we need automation that builds the load balancer infrastructure.
We do this based on the Ingress definition.
Let’s see what we want to build and how we do it.
There are two load balancer components involved: the AWS Application Load Balancer (ALB) and Skipper.
You see the blue boxes.<wait>
Request processing goes from top to bottom:
First, TLS is terminated on the ALB.
Skipper is the target of all ALBs. Skipper runs on every worker node and does HTTP routing.
Skipper selects the MyApp Pods via the Kubernetes Service.
The MyApp boxes are your application Pods.
Technically, Skipper bypasses the Kubernetes Service to reach the Pods directly.
This way we can do proper load balancing and retry failing connections.
<wait>
An Ingress object glues the blue load balancers together with the green backends.
You see two marked definitions:
host is the host header for the frontend HTTP routing,
and backend is used to find the application.
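Such an Ingress object might be sketched as follows (the name and service values are illustrative; extensions/v1beta1 was the Ingress API group in use at the time of this 2018 talk):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.org        # host header for the frontend HTTP routing
    http:
      paths:
      - backend:
          serviceName: my-app-svc   # used to find the application's Service
          servicePort: 80
```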
Once we create this Ingress object,
Skipper creates an HTTP route based on the provided configuration.<wait>
From the cluster nodes, we can call a Skipper endpoint with the specified Host header to reach our application.
<wait>
Kube-ingress-aws-controller creates an ALB with attached certificates, pointing to Skipper.
With this in place, you can make an HTTPS request to the ALB.<wait>
The ALB target shown is a Route53 ALIAS record.<wait>
With the correct Host header set, a request will reach your application.<wait>
External DNS creates a public DNS record pointing to the ALB.<wait>
Now we have everything we need to serve public traffic from the internet.<wait>
Everything is automated, and a deployer only has to provide an Ingress definition.
To understand the high-level deployment patterns, I will give you a brief introduction to Skipper.<wait>
Skipper is a flexible, cloud-native HTTP proxy and router.
It is made for frequently changing configurations.<wait>
Additionally, Skipper has two building blocks seen by users: Predicates and Filters.
Skipper has a routing table proven to scale beyond 200,000 routes.<wait>
A routing table consists of a number of routes.<wait>
An HTTP request is mapped by Predicates to a specific route.<wait>
Each route has a set of Filters.<wait>
HTTP requests and responses can be changed by Filters.<wait>
For example, we can change the request path from /api to / <wait>, and we might add it back in the response again.<wait>
We can also set a Cookie in the response.
<wait>
Predicates and Filters can both be set via Ingress annotations: <wait>
skipper-predicate and skipper-filter.<wait>
You now have an understanding of the details required for the next sections.
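As a hedged sketch, such annotations could look like this on an Ingress (the zalando.org annotation keys are the documented ones for Skipper; predicate, filter and service values are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Predicate: only match requests carrying the query parameter v=alpha
    zalando.org/skipper-predicate: QueryParam("v", "alpha")
    # Filters: strip the /api prefix, then set a cookie in the response
    zalando.org/skipper-filter: modPath("^/api", "/") -> responseCookie("visited", "true")
spec:
  rules:
  - host: my-app.example.org
    http:
      paths:
      - backend:
          serviceName: my-app-svc
          servicePort: 80
```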
Besides the Kubernetes rolling update strategy, skipper supports <wait>
Shadow traffic <wait> and blue-green deployments.<wait>
Let’s see why..
A common development cycle looks like this.<wait>
We develop and test and if these are successful.<wait>
We deploy and go production.<wait>
We do this all day. <wait> If not, we drink coffee and attend meetings.<wait>
In the real world we see failures after new deployments, <wait>
because the newer version might be slower than before.
One solution is to target your new application with current live traffic.<wait>
Shadow traffic allows you to test with live traffic without notice of your users.<wait>
Skipper can copy the request to a new target and drop the response from the new one.<wait>
This we call shadow traffic<wait>
You can use the tee() filter to copy the full request to another URL target.
This gives you flexibility, no matter how your new service is structured.
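A minimal sketch of such a shadow-traffic annotation, assuming a hypothetical shadow target URL:

```yaml
metadata:
  name: my-app
  annotations:
    # copy every incoming request to the new version; its responses are dropped
    zalando.org/skipper-filter: tee("https://my-app-v2.example.org")
```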
Another solution is to use blue-green deployments.
Skipper can split traffic to different Kubernetes services.
Like this you can roll out a version v2 and slowly ramp up traffic.
How do you do it?
Again using ingress!
You see the backend-weights annotation set to 90 and 10 for the
two Service backends of hostname "my-app.example.org".
Skipper will split the traffic as you defined it in the Ingress.
Here, 90% of the traffic will target my-app-v1 and 10% my-app-v2.
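The traffic split above might be sketched like this (service names are illustrative; the weights are interpreted relative to each other):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    zalando.org/backend-weights: |
      {"my-app-v1": 90, "my-app-v2": 10}
spec:
  rules:
  - host: my-app.example.org
    http:
      paths:
      - backend:
          serviceName: my-app-v1    # receives 90% of the traffic
          servicePort: 80
      - backend:
          serviceName: my-app-v2    # receives 10% of the traffic
          servicePort: 80
```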
As a user interface, you can use a kubectl plugin to do traffic switching from v1 to v2.
The last argument is the percentage of traffic you want to direct to the new service.
The old one will get the rest of the traffic.
How do you roll back a feature, or test that a new feature is a success?
Feature toggles and A/B tests can do that, and Skipper can help you implement them.
A feature toggle can easily be disabled on failure by your caller:
if "v=alpha" does not reply in time, the next call is made without this query parameter.
The caller decides whether the feature is enabled or not.
To implement a feature toggle, you create an additional Ingress.
If a request matches the query "v=alpha" and the Host header,
Skipper will proxy it to the alpha service.
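A sketch of such an additional feature-toggle Ingress (host and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-feature-alpha
  annotations:
    # only requests with query parameter v=alpha match this route
    zalando.org/skipper-predicate: QueryParam("v", "alpha")
spec:
  rules:
  - host: my-app.example.org
    http:
      paths:
      - backend:
          serviceName: alpha-svc
          servicePort: 80
```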
To check if implementation A is better than B, you can use A/B tests.
<wait>
A request without a cookie matching our target has a 10% chance to get a Cookie with "flavor=A".
The rest will get "flavor=B".
We see the Traffic predicate matches the route with a 10% chance,
and Skipper sets a Cookie with flavor A in the response.
<wait>
The rest will get a cookie with flavor B.
<wait>
A request with cookie "flavor=A" will be forwarded to service A.
The same applies for B.
Clients will stick to the backend chosen in part 1.
In case of a Cookie with flavor A, we call the backend a-app-svc.
<wait>
In case of a Cookie with flavor B, we call the backend b-app-svc.
<wait>
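The two parts of the A/B setup might be sketched with two extra Ingresses per flavor (predicate and filter names as in the Skipper docs; cookie TTL, names and services are illustrative):

```yaml
# Part 1: assignment -- 10% of requests get flavor=A in a response cookie
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: a-assign
  annotations:
    zalando.org/skipper-predicate: Traffic(.1)
    zalando.org/skipper-filter: responseCookie("flavor", "A", 31536000)
spec:
  rules:
  - host: my-app.example.org
    http:
      paths:
      - backend:
          serviceName: a-app-svc
          servicePort: 80
---
# Part 2: stickiness -- requests already carrying flavor=A stay on service A
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: a-sticky
  annotations:
    zalando.org/skipper-predicate: Cookie("flavor", /^A$/)
spec:
  rules:
  - host: my-app.example.org
    http:
      paths:
      - backend:
          serviceName: a-app-svc
          servicePort: 80
```

Analogous Ingresses for flavor B would point to b-app-svc.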
To run applications in production, you need visibility.
How do you get all logs for one request across all backends?
This is what X-FlowID is for. Skipper sets an X-FlowID header if it is not passed in the request.
Applications only have to log this header in their handlers.
To find a log trace, you can grep for the FlowID, in this case starting with a capital A.
To answer the question whether your backend application is slow or returns errors,
you want to have metrics from your load balancer.
Skipper measures and exposes round-trip metrics: errors, counters and histograms. We export metrics in JSON or Prometheus format.
To find out which part of a service is slow, you should set up OpenTracing.
This enables you to get waterfall charts to boil down which service in the chain is slow.
Skipper can automatically add tracing headers to all incoming requests and reports to agents.
This allows you to see Skipper in the traces shown before.
For resiliency, we have ratelimits and automatic retries.
Additionally we also have circuit breakers, and you can also add throttling or packet loss,
but I will not show this today.
Ratelimits can be used to protect your backends.
You see 1k requests per second coming in, and only 300 will be forwarded;
the rest will get HTTP status code 429.
To allow 100 requests per second to the defined backend,
we set up a cluster ratelimit as a Skipper filter.
Client-side ratelimits can be used to protect your login page.
For example, allow 10 requests per hour;
the rest will get HTTP status code 429.
The shown cluster ratelimit filter with a third parameter allows 10 requests per hour per X-Forwarded-For header to the defined backend.
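Sketched as Ingress annotation fragments (filter names follow the current Skipper docs, which may differ from the 2018 version shown in the talk; group names and limits are illustrative):

```yaml
# backend protection: at most 100 requests per second, cluster-wide
metadata:
  annotations:
    zalando.org/skipper-filter: clusterRatelimit("my-app", 100, "1s")
---
# client protection, e.g. for a login page: 10 requests per hour per client,
# where the client is identified by the X-Forwarded-For header by default
metadata:
  annotations:
    zalando.org/skipper-filter: clusterClientRatelimit("my-app-login", 10, "1h")
```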
Skipper can do retries.
For example, the first request goes to a Pod which is ..
.. failing, so Skipper will get a connection refused from the backend.
Skipper will do ..
.. a retry to the other available Pod.
This is safe to do, because we only retry on errors if we have not yet sent any data.
That was it for today.
We would like to hear from you in GitHub issues or our Skipper Google group!
We are also available in the Kubernetes Slack channels #sig-aws and #external-dns, or ping me on Twitter.
Questions?
<<prev slide to Show the links>>