This is a summary of terms, constructs, integration alternatives, and more from the networking world of Kubernetes, OpenShift, and AWS.
A demo of installing, running, and deploying Apache & Tomcat on a VM and in a container
Demo summary: a comparison of a simple Apache / Tomcat install and startup done by hand in a virtualized environment versus in an OCP environment (Dockerfile).
Demo contents: this demo shows, on video, how deployment work that used to be done by hand on physical servers or in virtualized environments becomes a one-click deployment in a container environment. Container-based deployment saves an extraordinary amount of time compared with manual deployment.
OpenNARU demo URL: http://www.opennaru.com/seminar/%ed%81%b4%eb%9d%bc%ec%9a%b0%eb%93%9c-%eb%84%a4%ec%9d%b4%ed%8b%b0%eb%b8%8c-%eb%8d%b0%eb%aa%a8-%ec%9c%a0%ed%8a%9c%eb%b8%8c/
OpenNARU online workshop URL: http://www.opennaru.com/seminar/%ed%81%b4%eb%9d%bc%ec%9a%b0%eb%93%9c-%eb%84%a4%ec%9d%b4%ed%8b%b0%eb%b8%8c-%ec%9b%8c%ed%81%ac%ec%83%b5/
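The container side of such a demo boils down to a short Dockerfile. As a minimal sketch (the base-image tag and WAR file name are assumptions, not taken from the demo itself):

```dockerfile
# Run a web application on Tomcat; Apache httpd would be a separate image.
FROM tomcat:9.0

# Deploy the application archive (file name is a placeholder).
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080

# Start Tomcat in the foreground (this is also the base image's default).
CMD ["catalina.sh", "run"]
```

Building and running this (`docker build -t myapp . && docker run -p 8080:8080 myapp`) replaces the manual JDK/Tomcat install and deployment steps performed on the VM side.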
Installing and using Kubernetes is hard, but operating Kubernetes is even harder! This BOF is for Kubernetes operators to get together and discuss our day-to-day operations, and for people new to Kubernetes to learn more about how to operate it.
An in-depth overview of Kubernetes and its various components.
NOTE: This is a fixed version of a previous presentation (a draft was uploaded with some errors)
Source: http://www.opennaru.com/cloud/msa/
Microservices are an architecture-based approach to building applications. What distinguishes microservices from the traditional monolithic approach is the way the application is broken down into its core functions. Each function is called a service, and each can be built and deployed independently, which means individual services can operate (or fail) without negatively affecting the others.
[Open Source Consulting] Cloud-Based U2L Migration: Strategy and Considerations - Ji-Woong Choi
This deck provides guidelines for cloud-based U2C (Unix to Cloud) and U2L (Unix to Linux) migrations, along with sizing considerations.
It distills experience drawn from many migration projects and covers the difficulty level and considerations of each type of migration.
Nginx, pronounced "Engine X", is an open-source, high-performance web and reverse proxy server that supports protocols such as HTTP, HTTPS, SMTP, and IMAP. It can also be used for load balancing and HTTP caching.
As the shift to cloud native spreads, the microservices architecture, which breaks an application into mutually independent minimal components, has been gaining popularity.
MSA makes an application easier to scale and shortens the time to release new features, but as the application grows and multiple instances of the same service run concurrently, communication between the microservices becomes complex.
A service mesh is a technology created to address these MSA traffic problems: a networking model focused on managing network traffic between services.
By recording how smoothly different applications interact, it can optimize communication and prevent downtime as the application scales.
This deck introduces the background and features of service meshes, and the service mesh solutions currently available as open source.
Step 1. Cloud Native Trail Map
Step 2. Service Proxy, Discovery, & Mesh
Step 3. Service Mesh solutions
Step 4. Service Mesh in action - Istio / Linkerd
Step 5. Multi-cluster (Linkerd)
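The kind of traffic control demonstrated in Step 4 can be expressed in a few lines of mesh configuration. As an illustrative sketch using Istio (the `reviews` service and its `v1`/`v2` subsets are assumptions, and the subsets would need a matching DestinationRule), a VirtualService that shifts 10% of traffic to a new version:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # in-mesh service name (placeholder)
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # current version
          weight: 90
        - destination:
            host: reviews
            subset: v2   # canary version
          weight: 10
```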
Shift Deployment Security Left with Weave GitOps & Upbound's Universal Crossp... - Weaveworks
In this session, we've partnered with Upbound to showcase how to effectively manage application delivery while maintaining a high level of security using Weave GitOps and Upbound. While managing a stateful application deployment with a relational database, Weave GitOps can recognize a policy violation and correct it before the application is deployed.
Join us as we demonstrate the scenarios where:
All changes to application configuration are managed through Git workflows
Upbound’s Universal Crossplane allows you to build, deploy, and manage your cloud platforms
GitOps provides an extra layer of security by removing the need for direct access to Kubernetes clusters
Policy-as-Code guarantees security, resilience and coding standards compliance
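The policy-as-code point can be made concrete with a small example. Weave GitOps ships its own policy agent, so purely as an illustration of the pattern (using Kyverno, a different open-source policy engine, with a placeholder policy name), here is a policy that blocks Pods running as root:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root   # placeholder name
spec:
  validationFailureAction: Enforce   # reject violating resources outright
  rules:
    - name: check-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must not run as root."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```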
Watch the recording: xx
Building PaaS with Amazon EKS for the Large-Scale, Highly Regulated Enterpris... - Amazon Web Services
Containers make it easy to build and deploy applications by abstracting away the underlying operating system. But how do you build secure and compliant containerized applications in a distributed environment, and without direct access to the operating system your code is running on? In this session, hear how Amazon Elastic Container Service for Kubernetes (Amazon EKS) is integrated into a large-scale regulated enterprise in the areas of network, security, CI/CD, and monitoring to cater to the needs of various business units. We cover the basics in each of these areas in Amazon EKS, and we hear from Fidelity on how it is driving its cloud strategy with Amazon EKS in the heavily regulated finance sector. We also share best practices and common architectures for building containerized applications in highly regulated industries.
This presentation shows you the basic concepts of distributed tracing and OpenTracing, and walks through Jaeger's hands-on sample application (HotROD).
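The core idea behind distributed tracing, propagating a shared trace ID across service boundaries while recording timed spans, can be sketched in plain Python. This is a toy illustration of the concept, not the OpenTracing API or Jaeger's client:

```python
import time
import uuid

class Span:
    """A toy span: one timed unit of work within a trace."""

    def __init__(self, operation, trace_id=None, parent_id=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared by the whole request
        self.parent_id = parent_id                    # links a child to its caller
        self.span_id = uuid.uuid4().hex
        self.start = time.time()
        self.finish = None

    def child(self, operation):
        # A downstream call inherits the trace ID, which is what lets a
        # backend like Jaeger reassemble the request tree afterwards.
        return Span(operation, trace_id=self.trace_id, parent_id=self.span_id)

    def end(self):
        self.finish = time.time()

# One request crossing two "services":
root = Span("http-request")
db = root.child("db-query")
db.end()
root.end()

assert db.trace_id == root.trace_id   # same trace
assert db.parent_id == root.span_id   # causal link between spans
```

In a real system the trace and span IDs travel between services in request headers, and finished spans are reported to a collector for assembly and display.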
In the Cloud Native community, eBPF is gaining popularity, and it can often be the best solution for challenges that require deep observability of a system. eBPF is currently being embraced by major players.
Mydbops co-founder Kabilesh P.R (MySQL and MongoDB consultant) illustrates debugging Linux issues with eBPF: a brief look at BPF and eBPF, BPF internals, and the tools in action for faster resolution.
A comparison of Kubernetes and Kubernetes on OpenStack environments, and how to build each.
1. Cloud trends
2. Kubernetes vs Kubernetes on OpenStack
3. Building Kubernetes on OpenStack
4. Operating Kubernetes on OpenStack
Attendees will learn how to leverage the identity and authorisation, network security and secrets management features of the wider AWS platform for their containers, including Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Container Service for Kubernetes (Amazon EKS). We also discuss best practices for the security of your container images such as scanning them for known vulnerabilities.
PUBG: Battlegrounds Live-Service Migration to EKS [KRAFTON - Level 300] - Speaker: Kim Jung-heon, PUBG Dev... - Amazon Web Services Korea
This session shares the experience of migrating all of the game infrastructure behind PUBG: Battlegrounds to an EKS-based environment. It briefly introduces the infrastructure that powers PUBG's global service, then walks through the problems encountered and the valuable lessons learned while gradually migrating live-service infrastructure from an EC2 base to an EKS base.
What Is Ansible? | How Ansible Works? | Ansible Tutorial For Beginners | DevO... - Simplilearn
This presentation on Ansible will help you understand why Ansible is needed, what Ansible is, Ansible as a pull configuration tool, Ansible architecture, Ansible playbooks, Ansible inventory, how Ansible works, and Ansible Tower; you will also see a use case on how Hootsuite used Ansible. Increasing team productivity and improving business outcomes have now become easy with Ansible, a simple, popular, agentless tool in the automation domain. Ansible lets you create and control three key areas within the operations environment of the software development lifecycle. The first is IT automation, which lets you write instructions to automate work that IT professionals would typically have done manually in the past; the second is configuration management, which lets you maintain consistency across all systems in the infrastructure; and the third is automated deployment, which lets you deploy applications automatically to a variety of environments. Now let us get started and understand Ansible and its architecture.
Below topics are explained in this Ansible presentation:
1. Why Ansible?
2. What is Ansible?
3. Ansible - Pull configuration tool
4. Ansible architecture
5. Playbook
6. Inventory
7. Working of Ansible
8. Ansible tower
9. Use case by Hootsuite
Simplilearn's DevOps Certification Training Course will prepare you for a career in DevOps, the fast-growing field that bridges the gap between software developers and operations. You'll become an expert in the principles of continuous development and deployment, automation of configuration management, inter-team collaboration, and IT service agility, using modern DevOps tools such as Git, Docker, Jenkins, Puppet, and Nagios. DevOps jobs are highly paid and in great demand, so start on your path today.
Who should take this course?
DevOps career opportunities are thriving worldwide. DevOps was featured as one of the 11 best jobs in America for 2017, according to CBS News, and data from Payscale.com shows that DevOps Managers earn as much as $122,234 per year, with DevOps engineers making as much as $151,461. DevOps jobs are the third-highest tech role ranked by employer demand on Indeed.com but have the second-highest talent deficit.
This DevOps training course will benefit the following professional roles:
1. Software Developers
2. Technical Project Managers
3. Architects
4. Operations Support
5. Deployment Engineers
6. IT Managers
7. Development Managers
Learn more at: https://www.simplilearn.com/
This slide deck was presented at a Docker Meetup in Melbourne in March 2016. As an introduction, Linux namespaces and how they work together with Docker were covered in detail. The main part discussed a solution that uses VXLAN networks together with EVPN BGP signalling to route traffic between Docker containers.
Kubernetes currently has two load-balancing modes: userspace and iptables. Both have limitations in scalability and performance. We introduced IPVS as a third kube-proxy mode, which scales the Kubernetes load balancer to support 50,000 services. Beyond that, the control plane needs to be optimized in order to deploy 50,000 services. We will introduce alternative solutions and our prototypes, with detailed performance data.
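The IPVS mode discussed here is selected through kube-proxy's configuration file. A minimal fragment (the scheduler choice is an assumption; IPVS supports several):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # instead of "userspace" or "iptables"
ipvs:
  scheduler: "rr"   # round-robin; least-connection ("lc") and others also exist
```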
Intro to Project Calico: a pure layer 3 approach to scale-out networking - Packet
Slide presentation from the April 16th, 2015 Downtown NY Tech Meetup hosted at Control Group and presented by Christopher Liljenstolpe from Project Calico (www.projectcalico.org)
Project Calico is a scale-out networking fabric for bare metal, container, VM, and hybrid environments. Project Calico leverages the same networking techniques used to scale out the Internet to present a highly scalable L3 network for those environments, without the use of tunnels, overlays, or other complex constructs. We'll also do a demo of a Calico-enabled Docker environment, and have plenty of time for Q&A during and after.
About Christopher Liljenstolpe
Christopher is the original architect of Project Calico and one of the project's evangelists. In his day job, he's the director of solutions architecture at Metaswitch Networks. Prior to Calico/Metaswitch, he designed and ran some bio-informatics OpenStack clusters, did SDN architecture work at Big Switch Networks, ran architecture at two large carriers (Telstra - AS1221, and Cable & Wireless/iMCI - AS3561), and was the IP CTO for Alcatel in Asia. He's also run networks in Antarctica (hint: bend radius becomes REALLY important at -50C), and been foolish enough to do a stint as a WG co-chair in the IETF. Occasionally you can have the (mis-)fortune of hearing him speak at conferences and the like.
Webcast - Making Kubernetes Production Ready - Applatix
Slides from our technical webcast where Harry Zhang and Abhinav Das discuss the problems the Applatix engineering team ran into building large-scale production apps on Kubernetes, and the resulting solutions, tips, and settings to resolve them. Full YouTube video of the webcast at https://www.youtube.com/watch?v=tbD6Rcm2sI8&spfreload=5
Kubernetes Architecture - beyond a black box - Part 2 - Hao H. Zhang
This continues the Kubernetes architecture deep dive series. (Part 1 see https://www.slideshare.net/harryzhang735/kubernetes-beyond-a-black-box-part-1)
In Part 2 I'm going to cover the following:
- Kubernetes's 3 most important design choices: Micro-service Choreography, Level-Triggered Control, Generalized Workload and Centralized Controller
- Default scheduler limitation and community's next step
- Interface to production environment
- Workload abstraction: strength and limitations
This concludes my work and knowledge sharing about Kubernetes.
Kubernetes has been a key component for many companies to reduce technical debt in infrastructure by:
• Fostering the Adoption of Docker
• Simplifying Container Management
• Onboarding Developers On Infrastructure
• Unlocking Continuous Integration and Delivery
During this meetup we are going to discuss the following topics and share some best practices
• What's new with Kubernetes 1.3
• Generate Cluster Configuration using CloudFormation
• Deploy Kubernetes Clusters on AWS
• Scaling the Cluster
• Integrating Ingress with Elastic Load Balancer
• Using internal ELBs as Kubernetes Services
• Using EBS for persistent volumes
• Integrating Route53
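One of the topics above, using an internal ELB as a Kubernetes Service, comes down to a single annotation on a LoadBalancer-type Service. A minimal sketch (service and app names are placeholders; older in-tree cloud-provider versions expected a CIDR value such as `0.0.0.0/0` instead of `"true"`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Provision an internal (VPC-only) ELB rather than an internet-facing one.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```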
Large Scale Kubernetes on AWS at Europe's Leading Online Fashion Platform - A... - Henning Jacobs
Bootstrapping a Kubernetes cluster is easy, rolling it out to nearly 200 engineering teams and operating it at scale is a challenge.
In this talk, we are presenting our approach to Kubernetes provisioning on AWS, operations and developer experience for our growing Zalando Technology department. We will highlight in the context of Kubernetes: AWS service integrations, our IAM/OAuth infrastructure, cluster autoscaling, continuous delivery and general developer experience. The talk will cover our most important learnings and we will openly share failure stories.
Presented on 2017-09-28 at AWS Tech Community Days in Cologne.
An introduction to Kubernetes and a look at how it leverages AWS IaaS features to provide its own virtual clustering, and demonstration of some of the behaviour inside the cluster that makes Kubernetes a popular choice for microservice deployments.
Kubernetes Architecture - beyond a black box - Part 1 - Hao H. Zhang
This is part 1 of my Kubernetes architecture deep-dive slide series.
I have been working with Kubernetes for more than a year, from v1.3.6 to v1.6.7, and I am a CNCF-certified Kubernetes administrator. Before I move on to something else, I would like to summarize and share my knowledge and takeaways about Kubernetes from a software engineer's perspective.
This set of slides is a humble dig into one level below your running application in production, revealing how different components of Kubernetes work together to orchestrate containers and present your applications to the rest of the world.
The slides contain 80+ external links to Kubernetes documentation, blog posts, GitHub issues, discussions, design proposals, pull requests, papers, and source code files I went through while working with Kubernetes, which I think are valuable for understanding how Kubernetes works, its design philosophies, and why these designs came into place.
Kubernetes on AWS at Europe's Leading Online Fashion Platform - Henning Jacobs
Henning Jacobs is a Kubernetes-on-AWS hacker at Zalando Tech. His talk briefly covers the lessons learned at Zalando Tech while running Kubernetes on AWS in production.
Topics include:
- Cluster provisioning,
- AWS integration,
- Ingress,
- Cluster autoscaling,
- OAuth/IAM and
- Operations/monitoring.
https://www.meetup.com/Zalando-Tech-Events-Berlin/events/238212872/
Beyond Ingresses - Better Traffic Management in Kubernetes - Mark McBride
Kubernetes makes deploying code easy, but conflating deploys and releases is risky. Using smarter proxies you can dramatically reduce the risk of a release, which in turn helps you ship code to customers faster.
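The "smarter proxy" idea, separating deploy from release by sending only a small fraction of traffic to a freshly deployed version, reduces to weighted backend selection. A toy sketch in Python (not any particular proxy's API; names and weights are made up):

```python
import random

def pick_backend(weights, rng=random.random):
    """Pick a backend in proportion to its weight, e.g. 95% stable / 5% canary."""
    total = sum(weights.values())
    r = rng() * total
    for backend, weight in weights.items():
        r -= weight
        if r <= 0:
            return backend
    return backend  # floating-point edge case: fall back to the last backend

weights = {"app-v1": 95, "app-v2-canary": 5}

# Deterministic rng values make the behaviour visible:
assert pick_backend(weights, rng=lambda: 0.0) == "app-v1"
assert pick_backend(weights, rng=lambda: 0.99) == "app-v2-canary"
```

Releasing then means gradually moving weight from the old backend to the new one while watching error rates, with the deploy itself long since finished.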
Azure Networking: Innovative Features and Multi-VNet Topologies - Marius Zaharia
Are you looking to deploy a more complex structure of resources in Azure, all secured and segregated by precise boundaries while closely communicating with each other? Following the arrival of advanced IaaS networking features in Azure (network security groups, routing, multi-NIC, ...) and their maturation over recent months, now is the moment to explore a modern architectural vision of networking in Azure, with a focus on multi-VNet / VPN topologies and based on the ARM deployment model.
A brief introduction to Amazon Virtual Private Cloud (VPC).
Amazon VPC is a foundational service that provides a logically isolated area of the AWS cloud where you can launch AWS resources in a virtual network that you define.
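Defining such a virtual network is a few lines of infrastructure-as-code. A minimal CloudFormation sketch (CIDR ranges and logical names are assumptions):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DemoVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16      # the VPC's private address range
      EnableDnsSupport: true
      EnableDnsHostnames: true
  DemoSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DemoVPC
      CidrBlock: 10.0.1.0/24      # one subnet carved from the VPC range
      AvailabilityZone: !Select [0, !GetAZs ""]
```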
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:I... - Amazon Web Services
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual data center that you define. In this session you learn how to leverage the VPC networking constructs to configure a highly available and secure virtual data center on AWS for your application. We cover best practices around choosing an IP range for your VPC, creating subnets, configuring routing, securing your VPC, establishing VPN connectivity, and much more. The session culminates in creating a highly available web application stack inside of VPC and testing its availability with Chaos Monkey.
OpenStack Networking 101 Update (2014 OpenStack Meetup) - syfauser
This is the latest update to my OpenStack Networking / Neutron 101 slides, with some more information and caveats on the new DVR and Gateway HA features.
Learn about faster ways to solve AWS networking problems by using Reach. Reach is a tool that lets you examine reachability in your AWS networks, rapidly debug connectivity issues, assert network expectations during CI/CD pipelines, and more.
Midokura OpenStack Day Korea Talk: MidoNet Open Source Network Virtualization... - Dan Mihai Dumitriu
OpenStack deployments for public or private clouds require overlay networking. Due to the scale and rate of change of virtual resources, it isn't practical to rely on traditional network constructs and isolation mechanisms. Today's deployments require performance, resilience, and high availability to be considered truly production-ready. In this session, we deep dive into the MidoNet architecture and the process of sending a data packet across an OpenStack environment through a network overlay. A distributed architecture implements logical constructs that are used to build networks without a single point of failure, all while adding network functionality in a highly scalable manner. Network functions are applied in a single virtual hop: by applying network services right at the ingress host, the network avoids additional hops and is kept free of unnecessary clogging and bottlenecks, so packets reach their destination more efficiently. After this session, the audience will understand how distributed architectures allow efficient networking, with routing decisions and network services applied at the edge, and how it is easier to scale clouds when the network intelligence is distributed.
An overview of the evolution of OpenStack nova-networking towards Neutron; an architecture overview of the OVS plugin, ML2, and the MidoNet overlay product; and an overview and example of Heat templates, along with automation of physical switches using Cumulus.
Designed for IT professionals looking to expand their OpenStack Networking knowledge, “Navigating OpenStack Networking” is a comprehensive and fast-paced session which provides an overview of OpenStack Networking, its history, its predecessor (Nova Networks), its components and then dives deep into the architecture, its features and plugin model and its role in building an OpenStack Cloud.
Software Defined Networking is seeing a lot of momentum these days. With server virtualization solving the virtual machines problem, and large scale object storage solving the distributed storage challenge, SDN is seen as key in virtual networking.
In this talk we don't try to define SDN, but rather dive straight into what in our opinion is the core enabler of SDN: the virtual switch, OVS.
OVS can help manage VLANs for guest network isolation, and it can re-route any traffic at L2-L4 by keeping forwarding tables controlled by a remote controller (an OpenFlow controller). We show these few OVS capabilities and highlight how they are used in CloudStack and Xen.
Xen Summit presentation on CloudStack and Software Defined Networks. Open vSwitch is the default bridge in Xen and is supported in XenServer and Xen Cloud Platform.
As enterprises move to the cloud, robust connectivity is often an early consideration. AWS Direct Connect provides a more consistent network experience for accessing your AWS resources, typically with greater bandwidth and reduced network costs. This session dives deep into the features of AWS Direct Connect and VPNs. We discuss deployment architectures and demonstrate the process from start to finish. We’ll show you how to configure public and private virtual interfaces, configure routers, use VPN backup, and provide secure communication between sites by using the AWS VPN CloudHub.
(ARC401) Black-Belt Networking for the Cloud Ninja | AWS re:Invent 2014 - Amazon Web Services
Do you need to get beyond the basics of VPC and networking in the cloud? Do terms like virtual addresses, integrated networks and network monitoring get you motivated? Come discuss black-belt networking topics including floating IPs, overlapping network management, network automation, network monitoring, and more. This expert-level networking discussion is ideally suited for network administrators, security architects, or cloud ninjas who are eager to take their AWS networking skills to the next level.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
3. AWS Infrastructure Components
[Diagram: an AWS Region with two AZs; a VPC containing subnets, a routing table, security groups, instances with ENIs, a VPC router, an Internet GW with an Elastic IP, and a Virtual Private GW connecting through a Customer GW to the corporate network]
Regions – used to manage network latency and regulatory compliance per country. No data replication outside a region.
Availability Zones – at least two per region. Designed for fault isolation: connected to multiple ISPs and different power sources, and interconnected at LAN speed within the same region.
VPC – spans all of a region's AZs. Used to create an isolated private cloud within AWS. IP ranges are allocated by the customer.
Networking – interfaces, subnets, routing tables and gateways (Internet, NAT and VPN).
Security – security groups.
Interface (ENI) – can carry a primary, secondary or Elastic IP. Security groups attach to it. Independent of the instance (although the primary interface cannot be detached from an instance).
Subnet – connects one or more ENIs; can talk to another subnet only through an L3 router. Can be connected to only one routing table. Cannot span AZs.
Routing table – decides where network traffic goes. May be associated with multiple subnets. Limit of 50 routes per table.
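The per-AZ subnetting and the 50-route ceiling can be sketched with Python's ipaddress module (the CIDR range and the route model are illustrative, not taken from the slides):

```python
import ipaddress

# Hypothetical customer-allocated VPC range, carved into /24 subnets.
vpc = ipaddress.ip_network("10.1.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 candidate subnets

# Each subnet associates with exactly one routing table, and a table
# holds at most 50 routes - beyond that a second table is needed.
ROUTE_LIMIT = 50

route_table = []
for subnet in subnets:
    if len(route_table) == ROUTE_LIMIT:
        break  # table is full
    route_table.append((subnet, "local"))

print(len(route_table))  # 50
```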
4. VPC Security Components
Security group – a virtual firewall for the instance, controlling inbound and outbound traffic.
• Applied on the ENI (instance) only
• No deny rules
• Stateful – return traffic is implicitly allowed
• All rules are evaluated before a decision
• Up to five per instance
Network ACL – a virtual IP filter at the subnet level.
• Applied on the subnet only
• Allows deny rules
• Stateless – return traffic must be explicitly specified
• First match wins
[Diagram: the same Region/VPC topology as above, with security groups attached to instance ENIs and a Network ACL applied at the subnet boundary]
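The two evaluation models can be contrasted in a small sketch (the rule format is invented for illustration; real AWS rules also carry protocol and CIDR):

```python
# Security group: allow-only rules, all rules considered, default deny.
def sg_allows(rules, packet):
    # Allowed if any allow rule matches; there are no deny rules.
    return any(rule["port"] == packet["port"] for rule in rules)

# Network ACL: numbered allow/deny rules, first match decides, default deny.
def nacl_allows(rules, packet):
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == packet["port"]:
            return rule["action"] == "allow"
    return False

sg_rules = [{"port": 443}, {"port": 22}]
nacl_rules = [
    {"number": 100, "port": 22, "action": "deny"},   # first match wins
    {"number": 200, "port": 22, "action": "allow"},  # never reached for port 22
]

print(sg_allows(sg_rules, {"port": 22}))      # True
print(nacl_allows(nacl_rules, {"port": 22}))  # False - the deny matched first
```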
5. Network segmentation
VPC isolation – the best way to separate customers (obviously) and different organizations without juggling security groups.
• AWS VPC – great even for internal zoning; no policies needed
• Security group – stateful and flexible; network-location agnostic
• Network ACL – good for additional subnet-level control
[Diagram: two isolated VPCs (R&D and Production), each with its own subnets, routing table, security groups and Network ACL. VPC isolation gives implicit network segmentation, security groups give explicit instance segmentation, and Network ACLs give explicit network segmentation]
6. VPC integration with other AWS services
Elastic Load Balancing –
• Types – Classic and Application
• Classic is always Internet-exposed
• An Application LB can be internal
• ELB always sends traffic to private-IP backends
• An Application ELB can send traffic to containers
[Diagram: the Region/VPC topology with an outer (Internet-facing) ELB behind the Internet GW and an internal ELB in front of the instance subnet]
7. VPC integration with other AWS services
AWS Simple Storage Service (S3) –
• Open to the Internet
• Data never spans multiple regions unless explicitly transferred
• Data spans multiple AZs
• Connected to the VPC via a special endpoint
• The endpoint is treated as an interface in the routing table
• Only subnets connected to the relevant routing table can use the endpoint
[Diagram: the Region/VPC topology with an S3 endpoint attached to the routing table alongside the Internet GW and Virtual Private GW]
8. VPC integration with other AWS services
Lambda –
• A service that runs a selected customer's code
• Runs in a container located on AWS's own compute resources
• Initiates traffic from an IP outside the VPC
• A single Lambda function can access only one VPC
• Traffic from Lambda to endpoints outside the VPC must be explicitly allowed on the VPC
[Diagram: the Region/VPC topology with Lambda-originated traffic entering the VPC through its endpoint]
9. Inter-region interface
Regions are fully isolated from one another:
• Complete isolation – no native inter-region networking
• Connectivity only through the Internet, over a VPN connection
[Diagram: two regions (Amsterdam and US Virginia), each with its own VPC, subnets, security groups and Network ACL, connected to each other only over the Internet through their Internet GWs and Virtual Private GWs]
11. Containerized applications networking
What are we looking for?
• Service discovery – automated sharing of reachability knowledge between networking components
• Deployment – standard and simple; no heavy involvement of network experts
• Data plane – direct access (no port mapping), fast and reliable
• Traffic-type agnostic – multicast, IPv6
• Network features – NAT, IPAM, QoS
• Security features – micro-segmentation, access control, encryption…
• Public cloud ready – multi-VPC and multi-AZ support; overcomes route-table limits and costs
• Public cloud agnostic – dependency on the provider's services kept as minimal as possible
12. Three concepts around
• Overlay – a virtual network decoupled from the underlying physical network using a tunnel (most commonly VXLAN)
• Underlay – attaching to the physical node's network interfaces
• Native L3 routing – plain L3 routing, advertising container/pod networks to the network; no overlay
13. Overlay-only approach
Implementations – Flannel, Contiv, Weave, Nuage
Data plane –
• Transparent to the underlying network
• Via kernel space – much lower network latency
• Overhead – adds 50 bytes to the original header
• Traffic agnostic – passes direct L2 or routed L3 traffic between two isolated segments; IPv4/IPv6/multicast
Control plane –
• Service and network discovery – key/value store (etcd, Consul…)
• VNI field – identifies L2 networks, allowing isolation between them. Routing between two separate L3 networks goes via an external (VXLAN-aware) router
• VTEPs (VXLAN tunnel endpoints) – the two virtual interfaces terminating the tunnel (the instances' vNICs)
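The 50-byte figure is simply the sum of the encapsulation headers VXLAN places in front of the original frame, which can be checked arithmetically:

```python
# VXLAN encapsulation overhead, per added header (sizes in bytes).
outer_headers = {
    "outer Ethernet": 14,
    "outer IPv4": 20,
    "outer UDP": 8,   # destination port 4789
    "VXLAN": 8,       # includes the 24-bit VNI field
}

overhead = sum(outer_headers.values())
print(overhead)  # 50
```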
14. Underlay – MACVLAN
• Attaches an L2 network to the node's physical interface by creating a sub-interface
• Each sub-interface uses a different MAC address
• A pod belonging to the attached network is directly exposed to the underlying network, without port mapping or an overlay
• Bridge mode – the most commonly used; allows pods, containers or VMs to interconnect internally, so traffic doesn't leave the host
• On AWS –
  • Disable the IP src/dst check
  • Enable promiscuous mode on the parent NIC
  • Verify the per-NIC MAC address limitation
[Diagram: a pod's containers bridged to a MACVLAN sub-interface (eth0.45) of the node's physical eth0, reaching the external network directly]
15. Native L3-only approach
Implementations – Calico, Romana
Data plane –
• No overlays – direct container-to-container (or pod) communication using their real IP addresses, leveraging routing decisions made by container hosts and network routers (the AWS route table)
Control plane –
• Container/pod/service IPs are published to the network using a routing protocol such as BGP
• Optional BGP peering – between container nodes for inter-container communication, and/or with an upstream router for external access
• Large scale – a route-reflector implementation may be used
• Due to the L3 nature, native IPv6 is supported
• NAT is optionally supported for outgoing traffic
16. Networking models - Comparison
CategoryModel Overlay L3 routing Comments
Simple to deploy Yes No L3 BGP requires routing config
Widely used Yes No VXLAN – supported by most plugins
Traffic type
agnostic
Yes Yes* *Driver support dependent
Allows IP
duplication
Yes No L3 need address management
Public Cloud
friendly
Yes No
L3 – requires special config on AWS routing
tables*
HA - two different AZ’s subnets still
requires tunneling**
Host local
routing
No Yes
Inter-subnet routing on same host goes out
External plugins – overcome – split routing
Underlying
network
independency
Yes No
L3 needs BGP peering config for external
comm.
Performance Yes* Yes
*Depends on data path – user or kernel
space
Network
Efficiency
No Yes Overlay adds overhead
17. Common Implementation Concepts
• The majority of plugins combine overlay (mostly VXLAN) and L3
• A subnet is allocated per node (Nuage is an exception)
• Based on an agent installed on the node (project-proprietary or Open vSwitch)
• Local routing on the node between different subnets
• Support routing to other nodes (needs L2 networks between nodes)
• Public cloud integration is provided for routing-table updates (limited compared to standard plugins)
• Performance – data path in kernel space
• Distributed or policy-based (SDN)
18. Flannel (CoreOS) – the proprietary example
• Used for dual-OVS scenarios (OpenStack)
• The flanneld agent on the node allocates a subnet to the node and registers it in the etcd store installed on each node
• No security policy is currently supported – a new project, Canal, combines Flannel and Calico into a complete network-and-security solution
• A subnet cannot span multiple hosts
Three implementations:
• Overlay – UDP/VXLAN; etcd is used for the control plane
• host-gw – direct routing over an L2 network using the node's routing table; can also be used in AWS and performs faster
• aws-vpc – direct routing over AWS VPC routing tables; dynamically updates the AWS routing table (50-route limit per routing table – if more routes are needed, VXLAN can be used)
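Since each node's subnet consumes one route in the VPC routing table, the aws-vpc backend's 50-route ceiling implies a simple capacity check. A hedged sketch (the constant and function names are ours, not Flannel's):

```python
AWS_ROUTE_TABLE_LIMIT = 50  # default route-entry limit per AWS routing table

def pick_flannel_backend(node_count: int) -> str:
    """Each node's subnet costs one route in the VPC routing table,
    so aws-vpc direct routing only fits while nodes <= the limit."""
    if node_count <= AWS_ROUTE_TABLE_LIMIT:
        return "aws-vpc"
    return "vxlan"  # fall back to overlay once the table would overflow

print(pick_flannel_backend(30))   # aws-vpc
print(pick_flannel_backend(120))  # vxlan
```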
22. Kubernetes components
[Diagram: a node running Docker, with a pod whose PAUSE container anchors the application containers, alongside etcd and DNS]
POD
• A group of one or more containers
• Contains containers that are mostly tightly related
• Mostly ephemeral in nature
• All containers in a pod run on the same physical node
• The PAUSE container maintains the networking
KUBE-PROXY
• Assigns a listening port for a service
• Listens for connections targeted at services and forwards them to the backend pod
• Two modes – “Userspace” and “IPTables”
Kubelet
• Watches for pods scheduled to the node and mounts the required volumes
• Manages the containers via Docker
• Monitors pod status and reports back to the rest of the system
Replication Controller
• Ensures that a specified number of pod “replicas” are running at any time
• Creates and destroys pods dynamically
DNS
• Maintains the DNS server for the cluster's services
etcd
• Key/value store for the API server
• All cluster data is stored here
• Access is allowed only to the API server
API Server
• The front end for the Kubernetes control plane; designed to scale horizontally
23. So why not Docker's default networking?
• Non-networking reason – driver integration issues and low-level built-in drivers (at least initially)
• Scalability (horizontality) – Docker's approach of assigning IPs directly to containers limits scalability for production environments with thousands of containers; the containers' network footprint should be abstracted
• Complexity – Docker's port mapping/NAT requires fiddling with configuration, IP address management and coordination of applications' external ports
• Node resource and performance limitations – Docker's port mapping may hit port-resource limits, and extra processing is required on the node
• The CNI model was preferred over CNM because of the container-access limitation
24. Kubernetes native networking
• IP address allocation – IPs are given to pods rather than to containers
• Intra-pod containers share the same IP
• Intra-pod containers use localhost to communicate with each other
• Requires direct multi-host networking without NAT/port mapping
• Kubernetes doesn't natively provide a solution for multi-host networking; it relies on third-party plugins: Flannel, Weave, Calico, Nuage, OVS etc.
• Flannel was already discussed as an example of the overlay networking approach
• OVS will be discussed later under OVS-based networking plugins
• The Nuage solution will be discussed separately
25. Kubernetes – Pod
When a pod is created with containers, the following happens:
• The “PAUSE” container is created –
  • A “pod infrastructure” container with minimal configuration
  • Handles the networking by holding the network namespace, ports and IP address for the containers in that pod
  • The one that actually listens for the application requests
  • When traffic hits, it is redirected by iptables to the container that listens on that port
• The “user-defined” containers are created –
  • Each uses “mapped container” mode to be linked to the PAUSE container
  • They share the PAUSE container's IP address
apiVersion: v1
kind: Pod
metadata:
  labels:
    deployment: docker-registry-1
    deploymentconfig: docker-registry
    docker-registry: default
  generateName: docker-registry-1-
spec:
  containers:
  - env:
    - name: OPENSHIFT_CA_DATA
      value: ...
    - name: OPENSHIFT_MASTER
      value: https://master.example.com:8443
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    securityContext: { ... }
  dnsPolicy: ClusterFirst
26. Kubernetes – Service
• An abstraction which defines a logical set of pods and a policy by which to access them
• Mostly permanent in nature
• Holds a virtual IP and ports used for client requests (internal or external)
• Updated whenever the set of pods changes
• Uses labels and a selector to choose the backend pods to forward traffic to
When a service is created, the following happens:
• An IP address is assigned by the IPAM service
• The kube-proxy service on the worker node assigns a port to the new service
• Kube-proxy generates iptables rules for forwarding the connection to the backend pods
• Two kube-proxy modes…
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    docker-registry: default
  portalIP: 172.30.136.123
  ports:
  - nodePort: 0
    port: 5000
    protocol: TCP
    targetPort: 5000
27. Kube-Proxy Modes
Kube-proxy always writes the iptables rules, but what actually handles the connection?
Userspace mode – kube-proxy itself forwards connections to backend pods. Packets move between user and kernel space, which adds latency, but the application keeps retrying until it finds a listening backend pod. Debugging is also easier.
Iptables mode – iptables, from within the kernel, forwards the connection directly to the pod. Fast and efficient, but harder to debug, and there is no retry mechanism.
[Diagram: in userspace mode the connection passes through iptables into the kube-proxy service and on to the pod; in iptables mode the connection is forwarded to the pod by iptables directly. In both, kube-proxy writes the iptables rules according to the service definition]
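The behavioural difference between the two modes can be sketched as follows (the backend model is invented for illustration; real kube-proxy programs NAT rules rather than calling functions):

```python
import random

backends = [
    {"name": "pod-1", "listening": False},
    {"name": "pod-2", "listening": True},
]

def userspace_forward(backends):
    """Proxy in user space: keeps trying until a listening backend is found."""
    for pod in backends:
        if pod["listening"]:
            return pod["name"]
    return None  # no backend ready at all

def iptables_forward(backends):
    """Kernel DNAT: picks one backend; if it isn't listening, the connection fails."""
    pod = random.choice(backends)
    return pod["name"] if pod["listening"] else None

print(userspace_forward(backends))  # pod-2 - the retry loop skips the dead backend
# iptables_forward may return None whenever the chosen pod isn't listening
```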
28. OpenShift SDN – Open vSwitch (OVS), the foundation
• A multi-layer open-source virtual switch
• Doesn't support native L3 routing – needs the Linux kernel or an external component
• Allows network automation through various programmatic interfaces as well as built-in CLI tools
• Supports:
  • Port mirroring
  • LACP port channeling
  • Standard 802.1Q VLAN trunking
  • IGMP v1/2/3
  • Spanning Tree Protocol and RSTP
  • QoS control for different applications, users, or data flows
  • Port-level traffic policing
  • NIC bonding, source-MAC-address load balancing, active backups, and layer-4 hashing
  • OpenFlow
  • Full IPv6 support
  • Kernel-space forwarding
  • GRE, VXLAN and other tunneling protocols, with additional support for outer IPsec
29. Linux Network Namespace
• Logically another copy of the network stack, with its own routes, firewall rules, and network devices
• Initially all processes share the same default network namespace from the parent host (the init process)
• A pod is created with a “host container” which gets its own network namespace and maintains it
• “User containers” within that pod join that namespace
[Diagram: two pods, each with a PAUSE container holding the pod's network namespace and user containers sharing it over localhost, attached through veth pairs to the OVS bridge br0 in the host's default namespace]
30. OpenShift SDN – OVS Management
• OVSDB – configures and monitors the OVS itself (bridges, ports…)
• ovs-vsctl – configures and monitors the ovsdb: bridges, ports, flows
• OpenFlow – programs the OVS daemon with flow entries for flow-based forwarding
• The actual OVS daemon:
  • Processes OpenFlow messages
  • Manages the datapath (which actually lives in kernel space)
  • Maintains two flow tables (exact flow & wildcard flow)
• ovs-appctl – sends commands to the OVS daemon (example: MAC tables)
• ovs-dpctl – configures and monitors the OVS kernel module
[Diagram: containers attach via vETH to the OVS in user space, with tun0 outbound; the datapath itself sits in kernel space]
31. OpenShift SDN – L3 routing
1. OVS doesn't support native L3 routing
2. L3 routing between two subnets is done by the parent host's networking stack
3. Steps (one alternative):
   1. Create two per-VLAN OVS bridges
   2. Create two L3 sub-interfaces (eth0.10, eth0.20) on the parent host
   3. Bridge the two sub-interfaces to the two OVS bridges
   4. Activate IP forwarding on the parent host
Note: the L3 routing can also be done using plugins such as Flannel, Weave and others
32. OpenShift SDN – local bridges and interfaces on the host
1. The node is registered and given a subnet
2. A pod is created by OpenShift and given an IP from the Docker bridge
3. It is then moved to the OVS
4. A container created directly by the Docker engine is given an IP from the Docker bridge
5. It stays connected to lbr0
6. No network duplication – the Docker bridge is used only for IPAM
[Diagram: lbr0 (the Docker bridge, 10.1.0.1 as GW, used for IPAM) and br0 (the OVS, with vxlan0, tun0 and the vovsbr/vlinuxbr pair) on one host; an OpenShift-scheduled pod (10.1.0.2) hangs off br0 while a Docker-scheduled container (10.1.0.3) stays on lbr0]
33. OpenShift SDN – Overlay
• Control plane – etcd stores information related to host subnets
• Initiated from the node's OVS via the node's NIC (VTEP – lbr0)
• Traffic is encapsulated into the OVS's VXLAN interface
• When the ovs-multitenant driver is used, projects can be identified by VNIDs
• Adds 50 bytes to the original frame (outer Ethernet 14 bytes + IP 20 bytes + UDP/4789 8 bytes + VXLAN 8 bytes)
[Diagram: Node1 (10.1.0.0/24, eth0 10.15.0.2) and Node2 (10.1.1.0/24, eth0 10.15.0.3), each with a br0 (OVS) VTEP and DMZ/inner pods, tunneling between their eth0 addresses while the master (10.15.0.1) holds etcd]
34. OpenShift SDN – plugin option 1
OVS-Subnet – the original driver
• Creates a flat network allowing all pods to inter-communicate
• No network segmentation
• Policy is applied on the OVS
• No significance to project membership
[Diagram: Node1 (10.1.0.0/24) and Node2 (10.1.1.0/24), each with a br0 (OVS) VTEP and DMZ/inner pods, joined by a VXLAN tunnel between eth0 10.15.0.2 and 10.15.0.3]
35. OpenShift SDN – plugin option 2
OVS-Multitenant –
• Each project gets a unique VNID, which identifies the pods in that project
• The default project's VNID 0 communicates with all others (shared services)
• Pods' traffic is inspected according to its project membership
[Diagram: the same two-node VXLAN topology, with each node hosting pods from Project A (VNID 221) and Project B (VNID 321), kept isolated by their VNIDs]
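The multitenant isolation rule can be sketched as a single predicate (a simplification – the real driver enforces this in OVS flow rules):

```python
def multitenant_allows(src_vnid: int, dst_vnid: int) -> bool:
    """Traffic is allowed within a project, and VNID 0 (the default
    project, i.e. shared services) may talk with everyone."""
    return src_vnid == dst_vnid or 0 in (src_vnid, dst_vnid)

print(multitenant_allows(221, 221))  # True - same project
print(multitenant_allows(221, 321))  # False - isolated projects
print(multitenant_allows(0, 321))    # True - shared services
```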
36. OpenShift – service discovery alternatives
App to app – preferably pod-to-service; avoid pod-to-pod
Environment variables –
• Injected into the pod with connectivity info (user names, service IP…)
• For updates, pod recreation is needed
• The destination service must be created first (or restarted, in case it was created before the pod)
• Not real dynamic discovery…
DNS – SkyDNS – serving <cluster>.local suffixes
• Split DNS – supports different resolution for internal and external queries
• SkyDNS is installed on the master, and pods are configured by default to use it first
• Dynamic – no need to recreate pods for a service update
• Newly created services are detected automatically by the DNS
• For direct pod-to-pod connections (no service), DNS round robin can be used
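A sketch of what such discovery looks like from a pod's perspective (the cluster suffix and the pod IPs are illustrative):

```python
from itertools import cycle

def service_fqdn(service: str, cluster_suffix: str = "cluster.local") -> str:
    # SkyDNS-style name a pod would resolve instead of hard-coding an IP.
    return f"{service}.{cluster_suffix}"

# Round-robin over pod IPs, as used for direct pod-to-pod (service-less) discovery.
pod_ips = cycle(["10.1.0.2", "10.1.1.2"])

print(service_fqdn("docker-registry"))  # docker-registry.cluster.local
print(next(pod_ips))                    # 10.1.0.2
print(next(pod_ips))                    # 10.1.1.2
```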
37. OpenShift – internal customer services consumption
Yum repos, Docker registry, SMTP, LDAP.
Leveraging the VPN tunnels from AWS to the customer DMZ:
1. The node connects to the requested service's proxy in the customer DMZ
2. The proxy initiates the request to the service's sources from its own IP, which is allowed by the customer firewall
[Diagram: an OpenShift node (10.15.0.1) in the VPC reaching, via the Virtual Private GW and the Customer GW, a service proxy in the customer DMZ that fronts the Docker registry, repos, LDAP and SMTP on the customer LAN]
38. Routing and Load balancing
Requirements
Network discovery
Alternatives
39. Routing – alternative 1
OpenShift router –
• For web apps – http/https
• Managed by users
• Routes are created at the project level and added to the router
• Unless shared, all routers see all routes
• For traffic to come in, the admin needs to add a DNS record for the router, or use a wildcard
• Default – an HAProxy container that listens on the host's IP and proxies traffic to the pods
[Diagram: a DNS name (https://service.aws.com:8080) resolving to an HAProxy router on Node1 (10.15.0.1), which proxies to the web-srv1:80 and web-srv2:80 services on the 10.1.0.0/16 cluster network across the AWS instances]
40. Routing – alternative 2
Standalone load balancer –
• All traffic types
• Alternatives are:
  1. AWS ELB
  2. A dedicated cluster node
  3. A customized HAProxy pod
• IP routing towards the internal cluster network is discussed later
[Diagram: the three alternatives fronting service.aws.com:80 – (1) an AWS ELB, (2) a dedicated HAProxy node (Node3), and (3) an HAProxy pod inside the cluster – all reaching the web-srv1:80 and web-srv2:80 services on the 10.1.0.0/16 network]
41. Routing – alternative 3
Service external IP –
• Managed by the Kubernetes kube-proxy service on each node
• The proxy assigns the IP/port and listens for incoming connections
• Redirects traffic to the pods
• All types of traffic
• The admin must take care of routing traffic towards the node
• Iptables-based – all pods should be ready to listen
• User space – tries all pods until it finds one
[Diagram: service.aws.com:80 resolving to the kube-proxy service on Node1 (10.15.0.1), which redirects to the web-srv1:80 and web-srv2:80 pods on the 10.1.0.0/16 network]
42. OpenShift@AWS – LB routing to the cluster network
Concern – network routing towards the cluster network
Option 1 – AWS ELB
1. Forwards to the OpenShift node's IP using port mapping
2. Needs application-port coordination – manageability issues
3. Excessive iptables manipulation for port mapping – prone to errors
4. Dependency on AWS services
[Diagram: an ELB receiving https://service.aws.com:8080 and forwarding to port 8080 on Node1 (10.15.0.1), where iptables port-maps into the web-srv1:80 and web-srv2:80 services on 10.1.0.0/16]
43. OpenShift@AWS – LB routing to the cluster network
Concern – network routing towards the cluster network
Option 2 – Tunneling
1. Tunnel the external HAProxy node to the cluster via a ramp node
2. Requires extra configuration – complexity
3. Extra tunneling – performance issues
4. This instance needs to be continuously up – costly
5. AWS independence
[Diagram: an external HAProxy (192.168.0.1/30) with a route to 10.1.0.0/16 via 192.168.0.2, tunneled to a ramp node (Node1, 10.15.0.1) that fronts the web-srv1:80 and web-srv2:80 services]
44. OpenShift@AWS – LB routing to the cluster network
Concern – network routing towards the cluster network
Option 3 – HAProxy moved into the cluster
1. Put the LB on an LB-only cluster node – disable scheduling
2. The service URL resolves to that node's IP
3. Full routing knowledge of the cluster
4. Simple and fast – native routing
5. AWS independence
6. This instance needs to be continuously up – costly
[Diagram: a dedicated HAProxy node (10.15.0.1) receiving https://service.aws.com:8080 and routing natively to the web-srv1:80 and web-srv2:80 services on 10.1.0.0/16]
45. 10.1.0.0/16
OpenShift@AWS – LB routing to the cluster network
Concern – network routing towards the cluster network
Option 4 – HAProxy container
1. Create an HAProxy container
2. Service URL resolves to the container’s IP
3. Full routing knowledge of the cluster
4. AWS independence
5. Uses the cluster overlay network – native
6. The overlay network is being used anyway
[Diagram: an HAProxy pod (Eth0 10.1.0.20) on Node1 receives https://service.aws.com:8080 and routes over the overlay to services web-srv1:80 and web-srv2:80 across the Master/Node1/Node2 AWS instances (Eth0 10.15.0.1–10.15.0.3)]
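In OpenShift, the HAProxy container of Option 4 is typically the default router; a minimal sketch of deploying one and exposing an app through it – the hostname reuses the deck’s example, the rest is standard OCP 3 CLI.

```shell
# Option 4 sketch: deploy the HAProxy-based OpenShift router as a pod
# on the overlay, then route an app through it by hostname.
oc adm policy add-scc-to-user hostnetwork -z router   # router service account perms
oc adm router router --replicas=1 --service-account=router

# Expose the backing service via the router:
oc expose service web-srv1 --hostname=service.aws.com
```

Because the router is itself a pod on the overlay, it inherits the cluster’s full routing knowledge with no external tunneling or port coordination.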
47. Shared Security Responsibility Model
AWS level resource access (AWS)
• Creating VPC, instances, network services, storage services…
• Requires AWS AAA
• Managed only by AWS
OS level access (Customer)
• SSH or RDP to the instance’s OS…
• Requires OS level AAA or certificates
• Managed by the customer
ELB, Lambda and application related services are optional and not considered part of the shared trust model.
48. Intra-pod micro-segmentation
• For some reason, someone put containers with different sensitivity levels within the same pod
• OVS uses IP/MAC/OVS port for policy (pod-only attributes)
• No security policy or network segmentation is applied to intra-pod containers
• Limiting connections or blocking TCP ports – tweaks that won’t help against a newly discovered exploit
• The “Security Contexts” feature doesn’t apply to intra-pod security, only to the pod level
• It should be presumed that containers in a pod share the same security level!
[Diagram: two pods (each with a pause container and TAP interface) attached to BR-ETH under an SDN controller; one pod (10.10.10.1, publicly exposed) holds DMZ and DRMS containers sharing localhost, the DMZ container is compromised, and the attacker reaches it via the DMZ service’s public IP; the second pod is 10.10.10.2]
From GitHub’s pod-security-context project page:
“We will not design for intra-pod security; we are not currently concerned about isolating
containers in the same pod from one another”
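The exposure described above can be demonstrated with a two-container pod; a minimal sketch – the pod name, container names and images are illustrative.

```shell
# Containers in one pod share a network namespace, so they reach each
# other over localhost with no OVS policy in between.
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mixed-sensitivity
spec:
  containers:
  - name: dmz              # publicly exposed container
    image: nginx
  - name: drms             # sensitive container - reachable on localhost!
    image: internal/drms:latest
EOF
# From inside "dmz", curl localhost:<drms-port> succeeds - nothing blocks it.
```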
49. Three options:
1. Separated clusters – DMZ and SECURE – different networks – implicit network segmentation – expensive but simple in the short term
2. Separated nodes, same cluster – DMZ nodes and SECURE nodes – access control applied with security groups – communication freely allowed across the cluster – doesn’t give real segmentation with ovs-subnet
3. Using OpenShift’s ovs-multitenant plugin – gives micro-segmentation using projects’ VNIDs
OpenShift SDN – network segmentation
50. Option 1 - Cluster level segmentation
[Diagram: two separate clusters on the VPC network – a K8S Secure cluster (10.1.0.0/16; Master with etcd; internal Node1/Node2 at Eth0 10.15.0.2–10.15.0.3 running App1 10.1.0.1 and App2 10.1.1.1) and a K8S Exposed cluster (10.1.0.0/16; DMZ Node1/Node2 running App1 10.1.0.1 and App2 10.1.1.2) facing the Internet; only a specific port is allowed between them, via a security group allowing 33111]
No shared service discovery knowledge
No network visibility of addresses and ports
Lots of barriers and disinformation about the cluster services
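The single allowed port between the clusters can be pinned down with one security group rule; a minimal sketch using the port from the diagram – the group IDs are hypothetical.

```shell
# Option 1 sketch: only port 33111 is allowed from the exposed cluster's
# security group into the secure cluster's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-secure01 \
  --protocol tcp --port 33111 \
  --source-group sg-exposed01
```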
51. Option 2 - Node segregated – same cluster
1. Exposed app1 has been compromised
2. Cluster service discovery may be queried for cluster networking knowledge – IPs, ports, services, pods
3. Other pods’ and services’ reachability can be probed; port scanning can be invoked and exploited
4. Other sensitive apps might be harmed
The cluster gives the attacker the knowledge, freedom and tools for further hacking actions
[Diagram: one K8S cluster (10.1.0.0/16) on the VPC network – the Master with etcd provides service discovery (full cluster knowledge); a DMZ Node1 (Eth0 10.15.0.2, App1 10.1.0.1) is exposed to the Internet through a security group allowing 33111, while internal Node2 instances (Eth0 10.15.0.3) run App2 10.1.1.1 and App3 10.1.1.1]
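Step 2 of the attack path above is trivially reproducible from inside any pod; a minimal sketch – the service and project names are illustrative.

```shell
# From a compromised DMZ pod, cluster DNS happily reveals other services:
nslookup app2.secure-project.svc.cluster.local    # returns App2's cluster IP
dig +short SRV _http._tcp.app2.secure-project.svc.cluster.local   # ports too

# Nothing in ovs-subnet then stops a follow-up scan of 10.1.0.0/16.
```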
52. OpenShift SDN – network segmentation
ovs-subnet plugin
• All projects are labeled with VNID 0, so they are allowed to communicate with all other pods and services
• No network segmentation
• Other filtering mechanisms required: OVS flows, iptables, micro-segmentation solutions
[Diagram: Node1 (10.15.0.2) and Node2 (10.15.0.3) each run BR0 (OVS) with a VTEP, connected by VXLAN over the VPC network; dmz and inner pods coexist on both nodes, and BR0 allows traffic from a DMZ pod to the Inner Service]
53. [Diagram nodes: Node1 10.15.0.2 (10.1.0.0/24), Node2 10.15.0.3 (10.1.0.1/24)]
OpenShift SDN – network segmentation
ovs-multitenant SDN plugin
• OpenShift default project – VNID 0 – allows access to/from all
• All non-default projects are given a unique VNID unless they are joined together
• Pods get their network association according to their project membership
• A pod can access another pod or service only if both belong to the same VNID; otherwise OVS blocks the traffic
[Diagram: each node runs BR0 (OVS) with a VTEP, linked by VXLAN over the VPC network; dmz and inner pods live in per-project network namespaces tagged with their project VNIDs (221 and 321), and etcd holds the project-to-VNID mapping]
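With ovs-multitenant, the VNID relationships above are managed per project from the CLI; a short sketch using standard OCP 3 commands – the project names are illustrative.

```shell
# Share one VNID between two projects (joined together):
oc adm pod-network join-projects --to=project-a project-b

# Give a project back its own unique VNID (isolated again):
oc adm pod-network isolate-projects project-a

# VNID 0 - allowed to talk to everything, like the default project:
oc adm pod-network make-projects-global project-a

# Inspect the project-to-VNID mapping:
oc get netnamespaces
```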
54.
Controlled isolation – Egress Router
• A privileged pod
• Redirects pods’ traffic to a specified external server, while allowing connections only from specific sources
• Can be reached via a K8S service
• Forwards the traffic outside using its own private IP; the node then NATs it
• Steps –
• Creates a MACVLAN interface on the primary node interface
• Moves this interface into the egress router pod’s namespace
[Diagram: on Node2 (10.15.0.3), dmz and inner pods (project VNIDs 221 and 321) sit behind BR0 (OVS)/VTEP; the egress router pod owns a MACVLAN interface Eth0.30 (10.15.0.20), so traffic SRC 10.1.0.2 → DST 234.123.23.2 leaves the node as SRC 10.15.0.20 → DST 234.123.23.2 towards a customer DMZ server (234.123.23.2) on the Internet; etcd holds the project-to-VNID mapping]
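The egress router described above follows a documented OpenShift 3 pattern; a minimal sketch reusing the IPs from the diagram – the gateway address and image tag are assumptions.

```shell
# Egress router sketch: the annotation asks the SDN to create a MACVLAN
# interface on the node's primary interface and move it into this pod.
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  containers:
  - name: egress-router
    image: openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 10.15.0.20        # node-network IP the router owns
    - name: EGRESS_GATEWAY
      value: 10.15.0.1         # assumed node gateway
    - name: EGRESS_DESTINATION
      value: 234.123.23.2      # the external server
EOF
# A regular Service in front of this pod lets cluster apps call it by name.
```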
55.
Controlled isolation – Gateway pod
• Created at the project level
• Can be applied only to isolated pods (not default or joined)
• Can be used to open specific rules to an isolated app
• If a pod needs access to a specific service belonging to a different project, you may add an EgressNetworkPolicy to the source pod’s project
[Diagram: on Node2 (10.15.0.3), dmz and inner pods (project VNIDs 221 and 321) connect through BR0 (OVS)/VTEP to an HAPROXY/FW gateway pod with VNID 0 and interface Eth0.30 (10.15.0.20); traffic SRC 10.1.0.2 → DST 234.123.23.2 leaves as SRC 10.15.0.20 → DST 234.123.23.2 towards a server on the Internet; etcd holds the project-to-VNID mapping]
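The EgressNetworkPolicy mentioned above can be sketched as a project-scoped allow/deny list; a minimal sketch reusing the destination IP from the diagram – the policy name and project are illustrative.

```shell
# Allow one external server for the source pod's project, deny the rest.
# Rules are evaluated in order, so the Allow must precede the catch-all Deny.
cat <<'EOF' | oc create -f - -n source-project
apiVersion: v1
kind: EgressNetworkPolicy
metadata:
  name: allow-external-server
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 234.123.23.2/32   # the one permitted destination
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0         # everything else stays blocked
EOF
```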
56. 10.1.0.0/16
OpenShift SDN – L3 segmentation use case
1. Secure and DMZ subnets
2. Pods scheduled to multiple hosts and connected to subnets according to their sensitivity level
3. Another layer of segmentation
4. A more “cloudy” method, as all nodes can be scheduled equally with all types of pods
5. Currently doesn’t seem to be natively supported
6. The Nuage plugin supports this
[Diagram: an Internet-facing LB in front of the Master/Node1/Node2 (Eth0 10.15.0.1–10.15.0.3); each node carries both subnets – 10.1.0.0/24 (Secure) with secure pod1/pod2 behind the Secure Service, and 10.1.1.0/24 (DMZ) with DMZ pod1/pod2 behind the DMZ Service]
57. AWS security groups inspection
Concern – users may attach permissive security groups to instances
Q – Security group definition – manual or automatic?
A – Proactive way –
• Wrapping security checks into continuous integration
• Using subnet-level network ACLs for more general deny rules – allowed only to security admins
• Using third-party tools: Dome9…
A – Reactive way –
Using tools such as aws_recipes and Scout2 (NCC Group) to inspect
Lots to be discussed
[Diagram: an AWS Region with AZs and a VPC – instances with ENIs sit in subnets behind security groups (user control) and subnet-level network ACLs and routing tables (admin control); the VPC router connects to an Internet GW with an Elastic IP and to a Virtual Private GW towards the corporate customer GW]
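A reactive inspection in the spirit of the slide can start with the AWS CLI before reaching for Scout2; a minimal sketch – the query shape is illustrative.

```shell
# List security groups that allow ingress from anywhere (0.0.0.0/0):
aws ec2 describe-security-groups \
  --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].{ID:GroupId,Name:GroupName}' \
  --output table

# Such a check can run in CI as the proactive gate; Scout2 (NCC Group)
# then produces a fuller HTML report of permissive rules across the account.
```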