The document provides an overview of Red Hat OpenShift Container Platform, including:
- OpenShift provides a fully automated Kubernetes container platform for any infrastructure.
- It offers integrated services like monitoring, logging, routing, and a container registry out of the box.
- The architecture runs everything in pods on worker nodes, with masters managing the control plane using Kubernetes APIs and OpenShift services.
- Key concepts include pods, services, routes, projects, config maps, and secrets, which enable application deployment and management.
In this session, Diógenes gives an introduction to the basic concepts that make up OpenShift, with special attention to its relationship with Linux containers and Kubernetes.
Kubernetes 101 - an Introduction to Containers, Kubernetes, and OpenShift (DevOps.com)
Administrators and developers are increasingly seeking ways to improve application time to market and improve maintainability. Containers and Red Hat® OpenShift® have quickly become the de facto solution for agile development and application deployment.
Red Hat Training has developed a course that provides the gateway to container adoption by understanding the potential of DevOps using a container-based architecture. Orchestrating a container-based architecture with Kubernetes and Red Hat® OpenShift® improves application reliability and scalability, decreases developer overhead, and facilitates continuous integration and continuous deployment.
In this webinar, our expert will cover:
An overview of container and OpenShift architecture.
How to manage containers and container images.
Deploying containerized applications with Red Hat OpenShift.
An outline of Red Hat OpenShift training offerings.
OpenShift is Red Hat's Platform-as-a-Service (PaaS) that lets developers quickly develop, host, and scale Docker container-based applications. OpenShift enables a uniform and standardised approach to container management across all hosting options, including AWS/EC2 and other private/public cloud and on/off-premise variants. In this session, you will learn how Red Hat's enterprise clients are using OpenShift to enable their digital transformation initiatives. Examples will cover how realising a hybrid cloud strategy can simplify and reduce the risk of migrating and transitioning application workloads to containers in the cloud.
Alex Smith, Solutions Architect, Amazon Web Services, ASEAN
Stephen Bylo, Senior Solution Architect, Red Hat Asia Pacific Pte Ltd
Kubernetes Concepts And Architecture Powerpoint Presentation Slides (SlideTeam)
Get these visually appealing Kubernetes Concepts And Architecture PowerPoint Presentation Slides to discuss the process of operating containerized applications. The deck covers a company's need for containers, the architecture of containers, the reasons an organization opts for Kubernetes, a roadmap for installing Kubernetes, the major advantages of Kubernetes (such as improved productivity and stable application runs), a 30-60-90-day implementation plan, and a diagram of the key Kubernetes components with the functionality of each. Download the Kubernetes architecture PPT slides to easily and efficiently manage clusters. https://bit.ly/34DWa7x
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies use full-blown operating system images to create virtual machines (VMs), and under this architecture each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, Linux container virtualization (originally based on LXC) became popular and attracted widespread attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; they can provide the same level of isolation on top of a single operating system instance.
An enterprise application may need to run a server cluster to handle high request volumes. Running an entire server cluster in Docker containers on a single Docker host would introduce a single point of failure. Google started the Kubernetes project to solve this problem. Kubernetes manages Docker containers across a cluster of Docker hosts, providing an API on top of the Docker API for managing containers on multiple hosts, with many more features.
OpenShift 4, the smarter Kubernetes platform (Kangaroot)
OpenShift 4 introduces automated installation, patching, and upgrades for every layer of the container stack from the operating system through application services.
Red Hat OpenShift is a leading enterprise Kubernetes platform that enables a cloud-like experience everywhere it's deployed. Whether it's in the cloud, on-premises, or at the edge, Red Hat OpenShift gives you the ability to choose where you build, deploy, and run applications through a consistent experience.
Red Hat multi-cluster management & what's new in OpenShift (Kangaroot)
More and more organisations are not only using container platforms but starting to run multiple clusters of containers. And with that comes new headaches of maintaining, securing, and updating those multiple clusters. In this session we'll look into how Red Hat has solved multi-cluster management, covering cluster lifecycle, app lifecycle, and governance/risk/compliance.
A Comprehensive Introduction to Kubernetes. This slide deck serves as the lecture portion of a full-day Workshop covering the architecture, concepts and components of Kubernetes. For the interactive portion, please see the tutorials here:
https://github.com/mrbobbytables/k8s-intro-tutorials
Cloud Native Night, April 2018, Mainz: Workshop led by Jörg Schad (@joerg_schad, Technical Community Lead / Developer at Mesosphere)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night/
PLEASE NOTE:
During this workshop, Jörg showed many demos and the audience could participate on their laptops. Unfortunately, we can't provide these demos. Nevertheless, Jörg's slides give a deep dive into the topic.
DETAILS ABOUT THE WORKSHOP:
Kubernetes was one of the defining topics of 2017 and will probably remain so in 2018. In this hands-on technical workshop you will learn how best to deploy, operate, and scale Kubernetes clusters from one to hundreds of nodes using DC/OS. You will learn how to integrate and run Kubernetes alongside traditional applications and fast data services of your choice (e.g. Apache Cassandra, Apache Kafka, Apache Spark, TensorFlow and more) on any infrastructure.
This workshop best suits operators focussed on keeping their apps and services up and running in production and developers focussed on quickly delivering internal and customer facing apps into production.
You will learn how to:
- Get started with Kubernetes and DC/OS (including the differences between the two)
- Deploy Kubernetes on DC/OS in a secure, highly available, and fault-tolerant manner
- Solve the operational challenges of running one or more large Kubernetes clusters
- One-click deploy stateful and stateless big data services alongside a Kubernetes cluster
The presentation will provide a brief overview of Tungsten Fabric, and the new features in the recent 5.0 release. A demo of Tungsten Fabric will follow, with an overview of core functionality, and newly released features.
Speaker: Nick Davey, Cloud - SDN Product Manager
Nebulaworks invited Bitnami's software engineer, Adnan Abdulhussein to present on, "The App Developer's Kubernetes Toolbox."
Details:
If you're developing applications on top of Kubernetes, you may be feeling overwhelmed with the vast number of development tools in the ecosystem at your disposal. Kubernetes is growing at a rapid pace, and it's becoming impossible to keep up with the latest and greatest development environments, debuggers, and build test and deployment tools.
Learn:
• The current state of development in Kubernetes
• Comparison of shared and local Kubernetes development environments
• Overview of different development tools in the ecosystem
• Which tools make sense in common scenarios
• How Bitnami uses Kubernetes as a development environment
Reference architecture with the Mirantis OpenStack Platform. IT is facing disruption from technology, business, and culture; to address these changes, IT has to move from traditional models to a broker/provider model.
Introduction to Kubernetes - Docker Global Mentor Week 2016 (Opsta)
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This presentation gives an overview of Kubernetes concepts.
Docker Global Mentor Week 2016 #DockerInThai at Kaidee on November 18, 2016
Kubernetes for java developers - Tutorial at Oracle Code One 2018 (Anthony Dahanne)
You’re a Java developer? Already familiar with Docker? Want to know more about Kubernetes and its ecosystem for developers? During this session, you’ll get familiar with core Kubernetes concepts (pods, deployments, services, volumes, and so on) before seeing the most-popular and most-productive Kubernetes tools in action, with a special focus on Java development. By the end of the session, you’ll have a better understanding of how you can leverage Kubernetes to speed up your Java deployments on-premises or to any cloud.
Francisco Javier Ramírez Urea - IT Architect, Hoplasoftware
Guillaume Morini - SE, Docker
The integration of Kubernetes orchestration into the Docker Enterprise Platform presents deployments with interesting new abstractions for application connectivity. Devs and Ops are often challenged to rationalize how pod networking (with CNI plugins like Calico or Flannel), Services (via kube-proxy), and Ingress work in concert to enable application connectivity within and outside a cluster, and, given the dynamic and transient nature of containerized microservice workloads, how to leverage scalable and declarative approaches like network policies to express segmentation and security primitives. This session provides an illustrative walkthrough of these core concepts, going through common deployment architectures and offering design, operations, and scale considerations based on experience from numerous production deployments. We will discuss Kubernetes publishing methods and deep dive into Ingress controllers. The session will also showcase how to complement application and operations workflows with the policy-driven business, compliance, and security controls typically required in enterprise production deployments, including limiting traffic to services, session persistence, rewriting, and activating container health checks.
Interop 2018 - Understanding Kubernetes (Brian Gracely)
In the world of containers, Kubernetes has emerged as the dominant standard for managing how containers are deployed, monitored, and managed. This talk will provide fundamental knowledge of how Kubernetes interacts with containers, storage, networking, security, and application frameworks. The audience will learn about the core elements of Kubernetes, including etcd, the Kubernetes API, the various types of controllers, and the kubelet. In addition, we'll discuss the broad ecosystem of projects and technologies that make Kubernetes usable within the enterprise and across multiple cloud environments.
The overall evolution towards microservices has caused a lot of IT leaders to radically rethink architectures and platforms. One can hardly keep up with the rapid onslaught on new distributed technologies. The same people who just asked yesterday "how can we deploy Docker containers?", are now asking "how can we operate Kubernetes-as-a-Service on-premise?", and are about to start asking "how can we operate the open source frameworks of our choice, such as Spark, TensorFlow, HDFS, and more, as a service across hybrid clouds?”. This session will discuss: Challenges of orchestrating and operating.
2. OPENSHIFT CONTAINER PLATFORM | Technical Value
- Self-Service
- Multi-language
- Automation
- Collaboration
- Multi-tenant
- Standards-based
- Web-scale
- Open Source
- Enterprise Grade
- Secure
3. OpenShift 4 — Everything you need
Everything you need, out of the box, on any infrastructure:
1. Fully integrated and automated architecture
2. Seamless Kubernetes deployment on any cloud or on-premises environment
3. Fully automated installation, from cloud infrastructure to OS to application services
4. One-click platform and application updates
5. Auto-scaling of cloud resources
[Diagram: the certified OpenShift 4 stack. Developer services (dev tools, automated builds, CI/CD, IDE), cluster services (monitoring, showback, registry, logging), application services (middleware, functions, ISV), and a service mesh run on automated operations over Enterprise Linux CoreOS, on any infrastructure: physical, virtual, private cloud, or public cloud. The platform spans CaaS, PaaS, and FaaS, pairing the best IT ops experience with the best developer experience.]
4. Value of OpenShift
OPENSHIFT CONTAINER PLATFORM | Functional Overview
[Diagram: the OpenShift stack. Automated Operations and Kubernetes run on Red Hat Enterprise Linux / RHEL CoreOS, beneath Cluster Services (monitoring, logging, registry, router, telemetry), Developer Services (dev tools, CI/CD, automated builds, IDE), and Application Services (service mesh, serverless, middleware/runtimes, ISVs). The platform spans CaaS, PaaS, and FaaS: best IT ops experience, best developer experience.]
7. VIRTUAL MACHINES AND CONTAINERS
A VM isolates the hardware; a container isolates the process.
[Diagram: in the VM stack, each app runs on its own OS dependencies and kernel inside a VM, on top of a hypervisor and the hardware. In the container stack, each app ships only its OS dependencies in a container and shares the container host's kernel, which itself may run on a hypervisor over the hardware.]
11. OpenShift Concepts
An image repository contains all versions of an image in the image registry.
[Diagram: an image registry with two repositories. myregistry/frontend holds the images tagged frontend:latest, frontend:2.0, frontend:1.1, and frontend:1.0; myregistry/mongo holds the images tagged mongo:latest, mongo:3.7, mongo:3.6, and mongo:3.4.]
12. OpenShift Concepts
Containers are wrapped in pods, which are the units of deployment and management.
[Diagram: one pod (IP 10.140.4.44) wrapping a single container, and another pod (IP 10.15.6.55) wrapping two containers.]
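The pod concept can be sketched as a minimal manifest; the name, label, and image below are illustrative, not from the deck:

```yaml
# A pod wrapping a single container: the smallest unit of
# deployment and management. Name, label, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    role: frontend
spec:
  containers:
  - name: frontend
    image: myregistry/frontend:2.0
```

Each pod gets its own cluster IP, like the 10.140.4.44 and 10.15.6.55 addresses in the diagram, shared by all containers in the pod.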
13. OpenShift Concepts
ReplicationControllers and ReplicaSets ensure a specified number of pods are running at any given time. Their definitions specify the image name, the number of replicas, labels, and the cpu, memory, and storage required by each pod.
[Diagram: a ReplicationController/ReplicaSet maintaining pods 1 through N, each running a container.]
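A ReplicaSet sketch carrying the fields named on the slide (image name, replicas, labels, and per-pod cpu/memory requests); all concrete values are illustrative:

```yaml
# A ReplicaSet keeping 3 replicas of a labeled pod running.
# Name, image, labels, and resource values are illustrative.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      role: frontend
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: frontend
        image: myregistry/frontend:2.0
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```

In practice, pods are usually managed through a Deployment, which creates and rolls over ReplicaSets on your behalf.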
15. OpenShift Concepts
A DaemonSet ensures that all (or some) nodes run a copy of a pod.
[Diagram: a DaemonSet whose node selector is foo = bar places a pod on each of the two nodes labeled foo = bar, but not on the node labeled foo = baz.]
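The node-selection behaviour on the slide can be sketched as a DaemonSet with a nodeSelector; the foo = bar label matches the slide, while the name and image are illustrative:

```yaml
# A DaemonSet running one copy of the pod on every node
# labeled foo=bar. Name and image are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        foo: bar
      containers:
      - name: agent
        image: myregistry/agent:1.0
```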
16. OpenShift Concepts
ConfigMaps allow you to decouple configuration artifacts from image content.
[Diagram: in Dev, a ConfigMap supplies appconfig.conf containing MYCONFIG=true to the pod's container; in Prod, a different ConfigMap supplies appconfig.conf containing MYCONFIG=false, so the same image runs with per-environment configuration.]
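The Dev side of the slide can be sketched as a ConfigMap holding appconfig.conf, mounted into the pod as a file; object names and the mount path are illustrative:

```yaml
# Dev ConfigMap holding appconfig.conf (MYCONFIG=true), mounted
# into the container as a file; swapping in the Prod ConfigMap
# changes configuration without rebuilding the image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: appconfig
data:
  appconfig.conf: |
    MYCONFIG=true
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: frontend
    image: myregistry/frontend:2.0  # illustrative image
    volumeMounts:
    - name: config
      mountPath: /etc/appconfig
  volumes:
  - name: config
    configMap:
      name: appconfig
```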
17. OpenShift Concepts
Secrets provide a mechanism to hold sensitive information such as passwords.
[Diagram: in Dev, a Secret supplies hash.pw with the base64-encoded value ZGV2Cg==; in Prod, a different Secret supplies hash.pw with the value cHJvZAo=.]
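The Dev secret on the slide, holding hash.pw with the base64-encoded value ZGV2Cg==, can be sketched as follows (the object name is illustrative):

```yaml
# An Opaque secret holding the Dev hash.pw value from the
# slide; data values are base64-encoded. Name is illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: app-hash
type: Opaque
data:
  hash.pw: ZGV2Cg==
```

Like a ConfigMap, a secret can be mounted as a file or exposed as environment variables, but it is intended for sensitive values.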
18. OPENSHIFT & KUBERNETES CONCEPTS
Services provide internal load-balancing and service discovery across pods.
[Diagram: a service named "backend" selects, via its role: backend selector, the three pods labeled role: backend (10.110.1.11, 10.120.2.22, 10.130.3.33); the pod labeled role: frontend (10.140.4.44) is not selected.]
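The "backend" service on the slide selects pods by the role: backend label; a sketch, with the port numbers as illustrative assumptions:

```yaml
# Service "backend": a stable name that load-balances across
# all pods labeled role: backend. Ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    role: backend
  ports:
  - port: 8080
    targetPort: 8080
```

Other pods in the project can then reach it by name (e.g. http://backend:8080) via the cluster's built-in service discovery.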
19. OPENSHIFT & KUBERNETES CONCEPTS
Apps can talk to each other via services.
[Diagram: the frontend pod (role: frontend, 10.140.4.44) calls the "backend" service, which load-balances across the three pods labeled role: backend.]
20. OpenShift Concepts
Routes make services accessible to clients outside the environment via real-world URLs.
> curl http://app-prod.mycompany.com
(Diagram: the route app-prod.mycompany.com forwards to the "frontend" service, which selects the pods labeled role: frontend.)
21. OpenShift Concepts
Projects isolate apps across environments, teams, groups and departments.
(Diagram: separate projects PAYMENT DEV, PAYMENT PROD, CATALOG and INVENTORY, each containing its own pods.)
39. Installation Paradigms
OPENSHIFT CONTAINER PLATFORM | Installation

OPENSHIFT CONTAINER PLATFORM
Full Stack Automated: Simplified, opinionated "best practices" cluster provisioning. Fully automated installation and updates, including the host container OS.
Pre-existing Infrastructure: Customer-managed resources and infrastructure provisioning. Plugs into existing DNS and security boundaries.

HOSTED OPENSHIFT
Azure Red Hat OpenShift: Deploy directly from the Azure console. Jointly managed by Red Hat and Microsoft Azure engineers.
OpenShift Dedicated: Get a powerful cluster, fully managed by Red Hat engineers and support.
41. Pre-existing Infrastructure Installation
OPENSHIFT CONTAINER PLATFORM | Installation
(Diagram: openshift-install deploys the OCP cluster resources and control plane; the customer deploys the cloud resources, RHEL CoreOS hosts and worker nodes. Cloud resources are user managed, while cluster resources are operator managed. Worker nodes may run RHEL CoreOS or RHEL 7.)
Note: Control plane nodes must run RHEL CoreOS!
42. Comparison of Paradigms
OPENSHIFT CONTAINER PLATFORM | Installation

Task                             Full Stack Automation    Pre-existing Infrastructure
Build Network                    Installer                User
Setup Load Balancers             Installer                User
Configure DNS                    Installer                User
Hardware/VM Provisioning         Installer                User
OS Installation                  Installer                User
Generate Ignition Configs        Installer                Installer
OS Support                       RHEL CoreOS              RHEL CoreOS + RHEL 7
Node Provisioning / Autoscaling  Yes                      Only for providers with OpenShift Machine API support
43. 4.2 Supported Providers
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
Full Stack Automation (IPI) | Pre-existing Infrastructure (UPI)
Bare Metal
* Support for full stack automated installs to pre-existing VPC & subnets and deploying as private/internal clusters is planned for 4.3.
44. Full stack automated deployments of AWS, Azure, GCP & OSP!
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
$ ./openshift-install --dir ./demo create cluster
? SSH Public Key /Users/demo/.ssh/id_rsa.pub
? Platform azure
? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client secret *********************************
INFO Saving user credentials to "/Users/demo/.azure/osServicePrincipal.json"
? Region centralus
? Base Domain example.com
? Cluster Name demo
? Pull Secret [? for help] *************************************************************
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.14.0+4788f50 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/demo/openshift-install/demo/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo.example.com
INFO Login to the console with user: kubeadmin, password: <password>
$ ./openshift-install --dir ./demo create cluster
? SSH Public Key /Users/demo/.ssh/id_rsa.pub
? Platform gcp
? Service Account (absolute path to file or JSON content) /Users/demo/.secrets/ServiceAccount.json
INFO Saving the credentials to "/Users/demo/.gcp/osServiceAccount.json"
? Project ID openshift-gce-devel
? Region centralus
? Base Domain example.com
? Cluster Name demo
? Pull Secret [? for help] *************************************************************
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.14.0+4788f50 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/demo/openshift-install/demo/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo.example.com
INFO Login to the console with user: kubeadmin, password: <password>
Simplified Cluster Creation
Easily provision a "best practices" OpenShift cluster on Microsoft Azure
● CLI-based installer with an interactive guided workflow
● The installer takes care of provisioning the underlying infrastructure, significantly reducing deployment complexity
Faster Install
The installer typically finishes within 30 minutes
● Only minimal user input needed, with all non-essential install config options now handled by component operator CRDs
● Leverages RHEL CoreOS for all node types, enabling full stack automation of installation and updates of both platform and host OS content
45. Deploy to pre-existing infrastructure for AWS, Bare Metal, GCP, & VMware!
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
Customized OpenShift Deployments
Enables OpenShift to be deployed to user-managed resources and pre-existing infrastructure.
● Customers are responsible for provisioning all infrastructure objects, including networks, load balancers, DNS, hardware/VMs, and for performing host OS installation
● Deployments can be performed both on-premise and to the public cloud
● The OpenShift installer handles generating cluster assets (such as node ignition configs and kubeconfig) and aids with cluster bring-up by monitoring for bootstrap-complete and cluster-ready events
● Example native provider templates (AWS CloudFormation and Google Deployment Manager) are included to help with user provisioning tasks for creating infrastructure objects
● While RHEL CoreOS is mandatory for the control plane, either RHEL CoreOS or RHEL 7 can be used for the worker/infra nodes
$ cat ./demo/install-config.yaml
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
...
$ ./openshift-install --dir ./demo create ignition-configs
INFO Consuming "Install Config" from target directory
$ ./openshift-install --dir ./demo wait-for bootstrap-complete
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.11.0+c69f926354 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
$ ./openshift-install --dir ./demo wait-for cluster-ready
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Install complete!
46. OPENSHIFT PLATFORM
Disconnected "Air-gapped" Installation & Upgrading
Generally Available | Product Manager: Katherine Dubé
Installation Procedure
● Mirror OpenShift content to a local container registry in the disconnected environment
● Generate install-config.yaml: $ ./openshift-install create install-config --dir <dir>
○ Edit and add the pull secret (PullSecret), CA certificate (additionalTrustBundle), and image content sources (ImageContentSources) to install-config.yaml
● Set the OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE environment variable during the creation of the ignition configs
● Generate the ignition configuration: $ ./openshift-install create ignition-configs --dir <dir>
● Use the resulting ignition files to bootstrap the cluster deployment
Overview
● 4.2 introduces support for installing and updating OpenShift clusters in disconnected environments
● Requires a local Docker 2.2 spec-compliant container registry to host OpenShift content
● Designed to work with the user-provisioned infrastructure deployment method
○ Note: Will not work with installer-provisioned infrastructure deployments
(Diagram: an admin mirrors the Red Hat-sourced update image from the Quay.io container registry to a local container registry; the disconnected OpenShift cluster is then updated from the local copy.)
# mirror update image:
$ oc adm -a <secret_json> release mirror
    --from=quay.io/<repo>/<release:version>
    --to=<local registry>/<repo>
    --to-release-image=<local registry>/<repo:version>
# provide cluster with update image to update to:
$ oc adm upgrade --to-image=<local registry>/<repo:version>
49. OPENSHIFT PLATFORM
Generally Available | Product Manager: Ben Breard

Red Hat Enterprise Linux (general purpose OS)
BENEFITS
• 10+ year enterprise life cycle
• Industry standard security
• High performance on any infrastructure
• Customizable and compatible with a wide ecosystem of partner solutions
WHEN TO USE
When customization and integration with additional solutions is required

Red Hat Enterprise Linux CoreOS (immutable container host)
BENEFITS
• Self-managing, over-the-air updates
• Immutable and tightly integrated with OpenShift
• Host isolation is enforced via containers
• Optimized performance on popular infrastructure
WHEN TO USE
When cloud-native, hands-free operations are a top priority
50. Immutable Operating System
OPENSHIFT PLATFORM
Red Hat Enterprise Linux CoreOS is versioned with OpenShift
CoreOS is tested and shipped in conjunction with the platform. Red Hat runs thousands of tests against these configurations.
Red Hat Enterprise Linux CoreOS is managed by the cluster
The operating system is operated as part of the cluster, with the config for components managed by the Machine Config Operator:
● CRI-O config
● Kubelet config
● Authorized registries
● SSH config
RHEL CoreOS admins are responsible for: Nothing.
51. OPENSHIFT PLATFORM
Transactional Updates via rpm-ostree
Transactional updates ensure that Red Hat Enterprise Linux CoreOS is never altered during runtime. Rather, it is booted directly into an always "known good" version.
● Each OS update is versioned and tested as a complete image
● OS binaries (/usr) are read-only
● Updates are encapsulated in container images
● File system and package layering are available for hotfixes and debugging
52. Machine Config Operator (MCO)
OPENSHIFT PLATFORM
Generally Available | Product Manager: Ben Breard
Provides cluster-level configuration, enables rolling upgrades, and prevents drift between new and existing nodes. The MCO is the heart of what makes RHCOS a kube-native operating system.
Configure Kernel Arguments for the Cluster
● oc create -f 50-kargs.yaml
● oc edit mc/50-kargs
MCO can be paused to suspend operations
Provides control for when changes can be deployed
Custom MachinePools can have inheritance
Enables MachineConfigs to scale
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 50-kargs
spec:
  kernelArguments:
    - audit=1
    - audit_backlog_limit=8192
    - net.ifnames.prefix=net
53. OpenShift Architecture
CRI-O: a lightweight, OCI-compliant container runtime
● Minimal and secure architecture
● Optimized for Kubernetes
● Runs any OCI-compliant image (including Docker images)
54. BROAD ECOSYSTEM OF WORKLOADS
CRI-O Support in OpenShift
CRI-O 1.12 | Kubernetes 1.12 | OpenShift 4.0
CRI-O 1.13 | Kubernetes 1.13 | OpenShift 4.1
CRI-O 1.14 | Kubernetes 1.14 | OpenShift 4.2
CRI-O tracks and versions identically to Kubernetes, simplifying support permutations
56. OpenShift Bootstrap Process: Self-Managed Kubernetes
OpenShift Installation
How to boot a self-managed cluster:
● OpenShift 4 is unique in that management extends all the way down to the operating system
● Every machine boots with a configuration that references resources hosted in the cluster it joins, enabling the cluster to manage itself
● The downside is that every machine looking to join the cluster is waiting on the cluster to be created
● The dependency loop is broken using a bootstrap machine, which acts as a temporary control plane whose sole purpose is bringing up the permanent control plane nodes
● The permanent control plane nodes boot and join the cluster by leveraging the control plane on the bootstrap machine
● Once the pivot to the permanent control plane takes place, the remaining worker nodes can be booted and join the cluster
Bootstrapping process step by step:
1. The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot.
2. The master machines fetch the remote resources from the bootstrap machine and finish booting.
3. The master machines use the bootstrap node to form an etcd cluster.
4. The bootstrap node starts a temporary Kubernetes control plane using the newly created etcd cluster.
5. The temporary control plane schedules the production control plane to the master machines.
6. The temporary control plane shuts down, yielding to the production control plane.
7. The bootstrap node injects OpenShift-specific components into the newly formed control plane.
8. The installer then tears down the bootstrap node; for user-provisioned installs, this must be performed by the administrator.
61. Rolling Machine Updates
CLOUD-LIKE SIMPLICITY, EVERYWHERE
Generally Available
Single-click updates
● RHEL CoreOS version & config
● Kubernetes core components
● OpenShift cluster components
Configure how many machines can be unavailable
Set the "maxUnavailable" setting in the MachineConfigPool to maintain high availability while rolling out updates. The default is 1.
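As a sketch, the maxUnavailable setting lives on the MachineConfigPool spec (the value here is illustrative):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  maxUnavailable: 2    # update at most two workers at a time (default: 1)
```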
Machine Config Operator (MCO) controls updates
Its Machine Config Daemon runs as a DaemonSet on all nodes in the cluster. When you upgrade with oc adm upgrade, the MCO rolls these changes out.
Product Manager: Ben Breard
62. CLOUD-LIKE SIMPLICITY, EVERYWHERE
Generally Available | Product Manager: Duncan Hardie
Cloud API
● Provide a single view and control across multiple cluster types
● Machine API:
○ Set up definitions via CRDs
○ Machine: a node
○ MachineSet: think ReplicaSet
○ Actuators roll definitions across clusters
○ Nodes are drained before deletion
● Cluster Autoscaler: provide/remove additional nodes on demand
● AWS (4.1), Azure/GCP (target 4.2), VMware (future)
64. The Kubernetes Networking Model
OPENSHIFT SDN
Sources:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
https://github.com/containernetworking/cni/blob/master/SPEC.md
https://github.com/containernetworking/cni
Container addressability
All containers get a unique cluster-wide IP address.
Topological simplicity
The Kubernetes cluster network is flat. All pods can address each other and Kubernetes services directly, without NAT.
Integration
Agents running on a Kubernetes host can address pods with their logical IP address. Container ports can be mapped directly to host ports.
65. OpenShift SDN: Simple View
OPENSHIFT SDN
(Diagram: two nodes at 172.16.1.10 and 172.16.1.20 sit on the physical network; their pods, at 10.1.2.2, 10.1.2.4, 10.1.4.2 and 10.1.4.4, communicate over the overlay network.)
67. How can we get traffic into an OpenShift cluster?
GETTING TRAFFIC INTO THE CLUSTER
Route / Ingress
Standard method. Traffic enters through the OpenShift "Router". Supports web traffic.
Node Port
Useful for non-web protocols. Exposes a port on every cluster host.
External IP
Uses a static IP address assigned to cluster hosts. Traffic bound to that IP is proxied to the workload. Must manually track IP addresses.
68. GETTING TRAFFIC INTO THE CLUSTER
(Diagram: external traffic enters the "Router" / Ingress Controller, which performs an endpoint lookup against the service and proxies connections directly to its pods.)
69. Node Port
GETTING TRAFFIC INTO THE CLUSTER
NodePort binds a service to a unique port on all the nodes
Traffic received on any node redirects to a node with the running service
Node ports come from a dedicated range (30000-32767 by default), which usually differs from the service port
Firewall rules must allow traffic to all nodes on the specific port
(Diagram: a client connects to port 31421 on any of the nodes 192.10.0.10, 192.10.0.11 or 192.10.0.12; traffic is forwarded to the service at 172.1.0.20:90, which balances across pods 10.1.0.1:90, 10.1.0.2:90 and 10.1.0.3:90.)
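A NodePort service matching the diagram's ports might be declared like this (the selector label is illustrative, and nodePort can be omitted to let the cluster allocate one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 90          # service port (172.1.0.20:90 in the diagram)
    targetPort: 90    # container port on the pods
    nodePort: 31421   # opened on every node
```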
70. External IP
GETTING TRAFFIC INTO THE CLUSTER
An IP address is associated with the service and assigned to the underlying cluster host
Incoming traffic bound for that IP is proxied to the service
The service in turn proxies that traffic to its backing pods
The major drawback is manual bookkeeping of IP addresses
(Diagram: a client connects to the external IP 192.10.0.11:8443, assigned to a node; the service proxies to pods 10.1.0.1:90 and 10.1.0.2:90.)
71. Services Select Pods by Label
GETTING TRAFFIC INTO THE CLUSTER
(Diagram: the service payroll-frontend at 172.10.1.23:8080 selects on app=payroll, role=frontend; two pods labeled app=payroll, role=frontend, version=1.0 are selected, while the pod labeled app=payroll, role=backend is not.)
72. Services Select Pods by Label
GETTING TRAFFIC INTO THE CLUSTER
(Diagram: the same service now also selects a new pod labeled app=payroll, role=frontend, version=2.0; because the selector ignores the version label, the v1.0 and v2.0 frontend pods receive traffic side by side, while the backend pod remains unselected.)
73. OpenShift Route vs Kubernetes Ingress
GETTING TRAFFIC INTO THE CLUSTER

Feature                                      Ingress on OpenShift   Route on OpenShift
Standard Kubernetes object                   X
External access to services                  X                      X
Persistent (sticky) sessions                 X                      X
Load-balancing strategies                    X                      X
Rate-limit and throttling                    X                      X
IP whitelisting                              X                      X
TLS edge termination for improved security   X                      X
TLS re-encryption for improved security                             X
TLS passthrough for improved security                               X
Multiple weighted backends (split traffic)                          X
Generated pattern-based hostnames                                   X
Wildcard domains                                                    X

Source: https://blog.openshift.com/kubernetes-ingress-vs-openshift-route/
74. Routes can split traffic
GETTING TRAFFIC INTO THE CLUSTER
(Diagram: a route sends 90% of traffic to Service A and 10% to Service B, each backed by its own app pods.)
Use cases: A/B testing, blue/green, canary deployments.
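The 90/10 split in the diagram can be expressed with weighted backends on a route; a sketch with hypothetical service names:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: split-route
spec:
  to:
    kind: Service
    name: service-a
    weight: 90        # 90% of traffic
  alternateBackends:
  - kind: Service
    name: service-b
    weight: 10        # 10% of traffic
```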
75. Route YAML Object
GETTING TRAFFIC INTO THE CLUSTER
apiVersion: v1
kind: Route
metadata:
  name: host-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: service-name
The www.example.com DNS name must resolve to the router. The router then directs traffic to the pods backing the service named service-name.
76. Service YAML Object
GETTING TRAFFIC INTO THE CLUSTER
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    docker-registry: default
  clusterIP: 172.30.136.123
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
Selects pods based on label. Serves as a single IP and DNS name for a group of pods, and as a simple load balancer.
78. By Default, Pod Traffic Gets NAT'ed to the Host IP
GETTING TRAFFIC OUT OF THE CLUSTER
(Diagram: pods in Project A and Project B on Node 1 (IP 1) and Node 2 (IP 2) reach an external service that whitelists IP 1; their traffic appears to come from the host IPs, not the pod IPs.)
80. OpenShift Network Plugins
GETTING TRAFFIC AROUND THE CLUSTER
(Diagram: pods in Project A, Project B, Project C and the default namespace spread across two nodes; which pods may talk to which depends on the plugin.)
Subnet
All pods can communicate with all other pods
Multitenant
Project-level isolation
Network Policy (Default)
Granular policy-based isolation
81. Network Policy
GETTING TRAFFIC AROUND THE CLUSTER
(Diagram: pods in Project A and Project B, with arrows showing which flows each policy permits, e.g. traffic to the purple pod on port 8080.)
Example policies
Allow all traffic inside the project
Allow traffic from green to gray
Allow traffic to purple on 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-purple-on-8080
spec:
  podSelector:
    matchLabels:
      color: purple
  ingress:
  - ports:
    - protocol: TCP
      port: 8080
83. OpenShift SDN: Less Simple View
OPENSHIFT SDN REPRISE
(Diagram: on each node (172.16.1.10 and 172.16.1.20), pods such as 10.1.4.2 and 10.1.4.4 live in network namespaces; each pod's eth0 connects through a veth pair to the Open vSwitch bridge br0. A vxlan interface carries pod traffic between nodes over the physical network, while tun0 and iptables handle traffic to and from the outside.)
84. Container processes isolated by kernel namespacing
OPENSHIFT SDN REPRISE
(Diagram: each pod, e.g. 10.1.4.2 and 10.1.4.4, runs in its own network namespace.)
85. Network traffic exits namespaces through veth pairs
OPENSHIFT SDN REPRISE
(Diagram: each pod's eth0 is one end of a veth pair whose other end sits outside the namespace.)
86. Open vSwitch bridge routes traffic
OPENSHIFT SDN REPRISE
(Diagram: the veth ends attach to the Open vSwitch bridge br0, which switches traffic between pods on the node.)
87. Cluster traffic exits host using vxlan interface
OPENSHIFT SDN REPRISE
(Diagram: br0 forwards cluster-bound traffic to the vxlan interface, which encapsulates it and sends it out the node's eth0 to the peer node.)
88. Outbound traffic exits using tunnel interface and iptables
OPENSHIFT SDN REPRISE
(Diagram: traffic leaving the cluster passes from br0 to tun0, where iptables rules NAT it out the node's eth0.)
89. Packet Flow: Pod to Pod, Same Host
OPENSHIFT SDN REPRISE
(Diagram: on a node at 192.168.0.100, POD 1 (veth0, 10.1.15.2/24) reaches POD 2 (veth1, 10.1.15.3/24) through br0 (10.1.15.1/24); the packet never touches eth0 or vxlan0.)
90. Packet Flow: Pod to Pod, Different Host
OPENSHIFT SDN REPRISE
(Diagram: POD 1 (10.1.15.2/24) on Node 1 (192.168.0.100) sends via br0 (10.1.15.1/24) to vxlan0, which encapsulates the packet and forwards it over eth0 to Node 2 (192.168.0.200); there vxlan0 decapsulates it and br0 (10.1.20.1/24) delivers it to POD 2 (10.1.20.2/24).)
91. Packet Flow: Pod to External Host
OPENSHIFT SDN REPRISE
(Diagram: POD 1 (10.1.15.2/24) on Node 1 sends via br0 to tun0, where the traffic is NAT'ed to the node address 192.168.0.100 and leaves through eth0 to the external host.)
93. The OpenShift "Router"
GETTING TRAFFIC INTO THE CLUSTER REPRISE
93
Deployed as a Pod
HAProxy instances deployed as pods on compute hosts
Bound to host ports 443/80
All traffic bound for cluster workloads enters through "Router"
Maps FQDN to a Service
Host header is used to determine where to proxy traffic
Dynamically configured
Continually monitors cluster state and reconfigures itself
94. The OpenShift "Router"
GETTING TRAFFIC INTO THE CLUSTER REPRISE
(Diagram: a client resolves *.apps.ocp4.example.com or myapp.example.com to the router pod, which listens on ports 80 and 443 on an RHCOS node; the router monitors the master's API/data store for changes and proxies each request over the SDN to a pod on another node.)
95. An OpenShift Service
GETTING TRAFFIC INTO THE CLUSTER REPRISE
Pod IP and service IP stored in etcd
Generated when objects are created
DNS name for the service stored in cluster DNS
Route and pod lookups resolve to the service IP
Kubelet and kube-proxy modify cluster host iptables rules
iptables DNAT rules map service IPs to pod IPs
96. An OpenShift Service
GETTING TRAFFIC INTO THE CLUSTER REPRISE
(Diagram: kube-proxy on each RHCOS node monitors the master's API/data store for changes and writes iptables DNAT rules mapping the service IP to pod IPs; a pod resolves the service's DNS name, and its traffic is DNAT'ed to a pod IP.)
98. SECURITY
Container Security Starts with Linux Security
● Security in the RHEL host applies to the container
● SELinux and kernel namespaces are the one-two punch no one can beat
● Protects not only the host, but containers from each other
● Common Criteria cert, including the container framework
Because containers start with Linux, Red Hat's containers inherit RHEL's industry-leading security practices and reputation.
(Diagram: apps and their Linux OS dependencies run in containers on the Linux container host kernel under the Kubernetes kubelet, guarded by SELinux, namespaces, sVirt, seccomp, cgroups, identity, and audit/logs.)
99. Container Host Security
RHEL CoreOS
Minimal: Only what's needed to run containers
Secure: Read-only and locked down
Immutable: Immutable image-based deployments and updates
Always up-to-date: OS updates are automated and transparent
Updates never break apps: Isolates all applications as containers
Updates never break clusters: OS components are compatible with the cluster
Supported on infra of choice: Inherits the majority of the RHEL ecosystem
Simple to configure: Installer-generated configuration
Effortless to manage: Managed by Kubernetes Operators
100. Value of SELinux and OpenShift security profiles
Longer blog article on this topic: https://www.redhat.com/en/blog/it-starts-linux-how-red-hat-helping-counter-linux-container-security-flaws
Issue: Vulnerability CVE-2019-5736, "Execution of malicious containers allows for container escape and access to host filesystem."
Red Hat protection: This vulnerability is mitigated on Red Hat Enterprise Linux if SELinux is in enforcing mode. SELinux in enforcing mode is a prerequisite for OpenShift Container Platform and the default seccomp security profiles. Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in OpenShift Container Platform.
102. Certificates and Certificate Management
OPENSHIFT SECURITY | Comprehensive features
● OpenShift provides its own internal CA
● Certificates are used to provide secure connections to:
○ master (APIs) and nodes
○ Ingress controller and registry
○ etcd
● Certificate rotation is automated
● Optionally configure external endpoints to use custom certificates
(Diagram: master, nodes, ingress controller, console, etcd and registry all secured with certificates.)
103. Configuring an Identity Provider
OPENSHIFT PLATFORM
Generally Available | Product Manager: Kirsten Newcomer
The Cluster Authentication Operator
● Use the cluster-authentication-operator to configure an identity provider. The configuration is stored in the oauth/cluster custom resource object inside the cluster.
● Once that's done, you may choose to remove kubeadmin (warning: there's no way to add it back).
● All the identity providers supported in 3.11 are supported in 4.1: LDAP, GitHub, GitHub Enterprise, GitLab, Google, OpenID Connect, HTTP request headers (for SSO), Keystone, and Basic authentication.
● For more information: Understanding identity provider configuration; cluster-authentication-operator
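As a sketch, an HTPasswd identity provider configured through the oauth/cluster object looks roughly like this (the provider and secret names are hypothetical; the secret must exist in the openshift-config namespace):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret   # Secret holding the htpasswd file
```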
105. Fine-Grained RBAC
OPENSHIFT SECURITY | Comprehensive features
● Project scope & cluster scope available
● Matches request attributes (verb, object, etc.)
● If no roles match, the request is denied (deny by default)
● Operator- and user-level roles are defined by default
● Custom roles are supported
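Deny-by-default means access exists only where a role grants it; a minimal project-scoped sketch (the role, user and project names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: myproject
rules:
- apiGroups: [""]          # core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: myproject
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```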
107. OPENSHIFT MONITORING | Solution Overview
OpenShift Cluster Monitoring
Metrics collection and storage via Prometheus, an open-source monitoring system and time-series database.
Metrics visualization via Grafana, the leading metrics visualization technology.
Alerting/notification via Prometheus' Alertmanager, an open-source tool that handles alerts sent by Prometheus.
111. Observability via log exploration and corroboration with EFK
OPENSHIFT LOGGING | Solution Overview
Components
○ Elasticsearch: a search and analytics engine that stores the logs
○ Fluentd: gathers logs and sends them to Elasticsearch
○ Kibana: a web UI for Elasticsearch
Access control
○ Cluster administrators can view all logs
○ Users can only view logs for their projects
Ability to forward logs elsewhere
○ External Elasticsearch, Splunk, etc.
116. OPENSHIFT STORAGE
Storage Focus
● Cluster Storage Operator
○ Sets up the default storage class
○ Looks through the cloud provider and sets up the correct storage class
● Drivers themselves remain in-tree for now; CSI versions to follow later
● New GA storage in 4.2
○ Local Volume
○ Raw Block
■ Cloud providers (AWS, GCP, Azure, vSphere)
■ Local Volume
STORAGE DEVICES
Supported: AWS EBS, Azure File & Disk, GCE PD, VMware vSphere Disk, NFS, iSCSI, Fibre Channel, HostPath, Local Volume (NEW), Raw Block (NEW)
117. PV Consumption
OPENSHIFT CONTAINER PLATFORM | Persistent Storage
(Diagram: a pod on a node references Claim Z; the claim is bound to a PV, and the kubelet mounts the backing storage at /foo/bar into the container.)
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: z
119. Dynamic Storage Provisioning
OPENSHIFT CONTAINER PLATFORM | Persistent Storage
(Diagram: an admin defines StorageClasses such as Fast (NetApp Flash), Block (VMware VMDK) and Good (NetApp SSD). A user's Claim Z requests 2Gi RWX from class Good; the master creates and maps a 2Gi NFS PV, binds it to the claim, and the pod mounts it via VolumeMount Z in its definition.)
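The class and claim from the diagram might be written as follows; the provisioner is a placeholder, since the real one depends on the underlying storage backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: good
provisioner: example.com/nfs    # placeholder: depends on the backend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: z
spec:
  storageClassName: good
  accessModes:
  - ReadWriteMany               # RWX, as in the diagram
  resources:
    requests:
      storage: 2Gi
```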
120. OpenShift Container Storage 4.2
OPENSHIFT PLATFORM
Persistent data services for OCP Hybrid Cloud
● Complete data services: RWO, RWX & S3 (new) (block, file & object)
● Persistent storage for all OCP infra and applications
● Build and deploy anywhere: consistent storage consumption, management, and operations
OCS 4.2 support with OCP 4.2
● Platform support: AWS and VMware
● Converged mode support: run as a service on the OCP cluster
● Consistent S3 across hybrid cloud
OCS 4.3
● Additional platforms: bare metal, Azure cloud
● Independent mode: run OCS outside of the OCP cluster
● Hybrid and multi-cloud S3
122. Red Hat Certified Operators
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
Categories: Storage, Security, Database, Data Services, APM, DevOps
123. OperatorHub data sources
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
Requires an online cluster
● For 4.1, the cluster must have connectivity to the internet
● Later 4.x releases will add offline capabilities
Operator Metadata
● Stored in quay.io
● Fetches channels and available versions for each Operator
Container Images
● Red Hat products and certified partners come from RHCC
● Community content comes from a variety of registries
124. Services ready for your developers
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
New Developer Catalog aggregates apps
● Blended view of Operators, Templates and Broker-backed services
● Operators can expose multiple CRDs. Example:
○ MongoDB ReplicaSet
○ MongoDB Sharded Cluster
○ MongoDB Standalone
● Developers can't see any of the admin screens
Self-service is key for productivity
● Developers with access can change settings and test out new services at any time
125. Operators as a First-Class Citizen
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
(Diagram: a bundle for YourOperator v1.1.2 carries the operator deployment, Custom Resource Definitions, RBAC, API dependencies, update path and metadata; the Operator Lifecycle Manager turns it into a Deployment, Role, ClusterRole, RoleBinding, ClusterRoleBinding, ServiceAccount and CustomResourceDefinition.)
126. Operator Lifecycle Management
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
(Diagram: a Subscription for YourOperator tracks the Operator Catalog; over time, the Operator Lifecycle Manager moves the installed operator through versions v1.1.2, v1.1.3, v1.2.0 and v1.2.2.)
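The subscription in the diagram corresponds to a Subscription object; a sketch with hypothetical operator and catalog names:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: your-operator
  namespace: openshift-operators
spec:
  channel: stable                 # update channel to follow
  name: your-operator             # package name in the catalog
  source: certified-operators     # CatalogSource to pull from
  sourceNamespace: openshift-marketplace
```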
127. Operator Lifecycle Management
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
(Diagram: as the subscription carries YourOperator from v1.1.2 through v1.2.2, the operator in turn upgrades the managed application from YourApp v3.0 to v3.1.)
128. Build Operators for your apps
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
The Operator SDK offers three paths:
● Helm SDK: build operators from a Helm chart, without any coding
● Ansible SDK: build operators from Ansible playbooks and APBs
● Go SDK: build advanced operators for full lifecycle management