Here are the key steps to create an application from the catalog in the OpenShift web console:
1. Click on "Add to Project" on the top navigation bar and select "Browse Catalog".
2. This will open the catalog page showing available templates. You can search for a template or browse by category.
3. Select the template you want to use, for example Node.js.
4. On the next page you can review the template details and parameters. Fill in any required parameters.
5. Click "Create" to instantiate the template and create the application resources in your current project.
6. OpenShift will then provision the application, including building container images if required.
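The same catalog flow can also be driven from the command line. A minimal sketch, assuming the sample Node.js template name and its `NAME` parameter (template names and parameters vary per cluster):

```shell
# List the templates available in the shared "openshift" namespace
oc get templates -n openshift

# Instantiate a template into the current project; -p sets template parameters
oc new-app --template=nodejs-mongodb-example -p NAME=myapp

# Follow the resources OpenShift provisions, including any builds
oc status
```

`oc new-app` creates the same build, deployment, service, and route resources that the console's "Create" button does.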
Kubernetes 101 - an Introduction to Containers, Kubernetes, and OpenShift (DevOps.com)
Administrators and developers are increasingly seeking ways to improve application time to market and improve maintainability. Containers and Red Hat® OpenShift® have quickly become the de facto solution for agile development and application deployment.
Red Hat Training has developed a course that provides the gateway to container adoption by understanding the potential of DevOps using a container-based architecture. Orchestrating a container-based architecture with Kubernetes and Red Hat® OpenShift® improves application reliability and scalability, decreases developer overhead, and facilitates continuous integration and continuous deployment.
In this webinar, our expert will cover:
An overview of container and OpenShift architecture.
How to manage containers and container images.
Deploying containerized applications with Red Hat OpenShift.
An outline of Red Hat OpenShift training offerings.
A basic introduction to Kubernetes. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
In this session, Diógenes gives an introduction to the basic concepts that make up OpenShift, paying special attention to its relationship with Linux containers and Kubernetes.
Author: Oleg Chunikhin, www.eastbanctech.com
Kubernetes is a portable open source system for managing and orchestrating containerized cluster applications. Kubernetes solves a number of DevOps-related problems out of the box in a simple and unified way – rolling updates and update rollback, canary deployment and other complicated deployment scenarios, scaling, load balancing, service discovery, logging, monitoring, persistent storage management, and much more. You will learn how, in less than 30 minutes, a reliable, self-healing, production-ready Kubernetes cluster can be deployed on AWS and used to host and operate multiple environments and applications.
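The out-of-the-box features listed above map directly onto kubectl commands. A minimal sketch (the deployment name `web` and image tag are illustrative):

```shell
kubectl scale deployment/web --replicas=5     # scaling
kubectl set image deployment/web web=web:v2   # rolling update to a new image
kubectl rollout status deployment/web         # watch the rollout progress
kubectl rollout undo deployment/web           # update rollback
kubectl expose deployment/web --port=80       # load balancing + service discovery
```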
Slides from OpenSource101.com Talk (https://opensource101.com/sessions/wtf-is-gitops-why-should-you-care/)
If you’re interested in learning more about Cloud Native Computing or are already in the Kubernetes community you may have heard the term GitOps. It’s become a bit of a buzzword, but it’s so much more! The benefits of GitOps are real – they bring you security, reliability, velocity and more! And the project that started it all was Flux – a CNCF Incubating project developed and later donated by Weaveworks (the GitOps company who coined the term).
Pinky will share from personal experience why GitOps has been an essential part of achieving a best-in-class delivery and platform team. Pinky will give a brief overview of definitions, CNCF-based principles, and Flux’s capabilities: multi-tenancy, multi-cluster, (multi-everything!), for apps and infra, and more.
Pinky will cover a little of Flux’s microservices architecture and how the various components deliver this robust, secure, and trusted open source solution. Through the components of the Flux project, users today are enjoying compatibility with Helm, Jenkins, Terraform, Prometheus, and more as well as with cloud providers such as AWS, Azure, Google Cloud, and more.
Join us for this informative session and get all of your GitOps questions answered by an end user in the community!
Speaker: Priyanka (aka “Pinky”) is a Developer Experience Engineer at Weaveworks. She has worked on a multitude of topics including front end development, UI automation for testing and API development. Previously she was a software developer at State Farm where she was on the delivery engineering team working on GitOps enablement. She was instrumental in the multi-tenancy migration to utilize Flux for an internal Kubernetes offering. Outside of work, Priyanka enjoys hanging out with her husband and two rescue dogs as well as traveling around the globe.
CI/CD Best Practices for Your DevOps Journey (DevOps.com)
The journey to realizing DevOps in any organization is fraught with a number of obstacles for developers and other stakeholders. These challenges are often caused by key CI/CD practices being misunderstood, partially implemented or even completely skipped. Now, as the industry positions itself to build on DevOps practices with a Software Delivery Management strategy, it’s more important than ever that we implement CI/CD best practices, and prepare for the future.
Join host Mitchell Ashley, and CloudBees’ Brian Dawson, DevOps evangelist, and Doug Tidwell, technical marketing director, as they explore and review the CI/CD best practices that serve as your stepping stones to DevOps and a successful Software Delivery Management strategy.
The webinar will cover CI/CD best practices including:
Containers and environment management
Continuous delivery or deployment
Movement from Dev to Ops
By the end of the webinar, you’ll understand the key steps for implementing CI/CD and powering your journey to DevOps and beyond.
Helm - Application Deployment Management for Kubernetes (Alexei Ledenev)
Use Helm to package and deploy a composed application to any Kubernetes cluster. Manage your releases easily over time and across multiple K8s clusters.
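A typical Helm release lifecycle, sketched with Helm 3 syntax (the chart path and release name are illustrative):

```shell
helm package ./mychart                               # bundle the chart for distribution
helm install myrelease ./mychart                     # first deployment to the cluster
helm upgrade myrelease ./mychart --set image.tag=v2  # roll out a new version
helm history myrelease                               # review the release over time
helm rollback myrelease 1                            # return to revision 1
```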
Designing a Complete CI/CD Pipeline Using Argo Events, Workflows, and CD Products (Julian Mazzitelli)
https://www.youtube.com/watch?v=YmIAatr3Who
Presented at Cloud and AI DevFest GDG Montreal on September 27, 2019.
Are you looking to get more flexibility out of your CI/CD platform? Interested in how GitOps fits into the mix? Learn how Argo CD, Workflows, and Events can be combined to craft custom CI/CD flows, all while staying Kubernetes native, enabling you to leverage existing observability tooling.
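The GitOps half of such a flow is typically declared as an Argo CD Application resource. A minimal sketch (the repository URL, path, and namespace are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/myapp-config.git  # Git as the source of truth
    targetRevision: main
    path: k8s                                          # directory of manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated: {}                                      # keep the cluster in sync with Git
```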
OpenShift is Red Hat's Platform-as-a-Service (PaaS) that lets developers quickly develop, host, and scale Docker container-based applications. OpenShift enables a uniform and standardised approach to container management across all hosting options including AWS/EC2 and other private/public cloud and on/off-premise variants. At this session, you will learn how Red Hat's enterprise clients are using OpenShift to enable their digital transformation initiatives. Examples will cover how realising a hybrid cloud strategy can simplify and reduce the risk of migrating and transitioning application workloads to containers in the cloud.
Alex Smith, Solutions Architect, Amazon Web Services, ASEAN
Stephen Bylo, Senior Solution Architect, Red Hat Asia Pacific Pte Ltd
DevOps @ OpenShift Online
Presenter: Adam Miller
As the Release Engineer and a member of the Operations team for OpenShift Online, a downstream consumer of OpenShift Origin and the largest public implementation of OpenShift to date, Adam Miller will discuss what it's like behind the scenes at OpenShift.com, share lessons learned, and offer his thoughts and feedback on the future direction of Origin.
GitOps is a new CD method that uses Git as the single source of truth for both applications and infrastructure (declarative infrastructure / infrastructure as code), providing both revision control and change control. In this talk we will see how to implement Kubernetes-based GitOps CI/CD workflows, from theory to practice, reviewing the main tools available today, such as Argo CD, Flux (aka the GitOps engine), and Jenkins X.
You have heard about containers and would like to see more than some hand waving and slideware. Well, sit back and enjoy. We'll cover some basic vocabulary and tech for those who are new to the technology. From there on out, it will be all demos! Starting with just deploying a simple Docker image, we will work all the way up to a complete application and scale it on demand. You will leave with a great taste of the technology Red Hat and Cisco are bringing you to get your application development on the right track!
We are on the cusp of a new era of application development software: instead of bolting on operations as an after-thought to the software development process, Kubernetes promises to bring development and operations together by design.
Cloud Deployment of Data Harmony
Jeffrey Gordon, Lead Developer, Access Innovations, Inc.
Jeffrey will describe the cloud deployment of the Data Harmony software.
This presentation was made as the closing session for Container Conference 2018 on 3rd August in Bangalore by Anoop Kumar from Docker.
In this session we will get familiarized with the technical aspects of the Docker EE 2.0 platform. It will involve a walkthrough of Swarm as well as the relatively newly introduced Kubernetes integration, how it enables organizational agility, choice, and security, and the future roadmap of the product suite. We'll finally do a quick demo of the platform and close with a Q&A section.
The Application Server Platform of the Future - Container & Cloud Native and ... (Lucas Jellema)
New architecture patterns are rapidly influencing many organizations. The march to the cloud is taking place, with DevOps and microservices for true agility and containers as the vehicle for delivery, testing, and management. During Oracle OpenWorld 2017, Oracle presented its vision and roadmap in the area of cloud native computing (which is based on container native) and announced its application server platform (container management runtime) of the future. This presentation summarizes the picture painted by Oracle.
An Introduction to Configuring Domino for Docker (Gabriella Davis)
9.0.1 FP10 brings support for Domino on a Docker platform. You may know that Docker is a container solution, but what does that mean, and how could it affect your Domino infrastructure? In this session we'll review how to install and run Domino in a Docker container, whether it can support external clustering, and the decisions to consider when designing a container architecture.
The Rise of the Container: The Dev/Ops Technology That Accelerates Ops/Dev (Robert Starmer)
Understand the container environment, developer interest, and the basis of the container landscape, and learn how OpenStack can enable this new technology component, or leverage it!
2. Red Hat OpenShift Fundamentals
• Getting started with Red Hat’s OpenShift Container Platform
• OpenShift makes it easier to deploy applications in an enterprise environment
• It allows developers to roll out applications as fully operational containers
• It allows administrators to manage the application lifecycle in a flexible way
• So applications can be monitored and scaled as needed
4. Requirements
Recommended that you are…
• Comfortable with Linux
• Somewhat experienced with containers & Kubernetes
Hardware requirements:
• Using MiniShift: a single VM that needs 4 GB RAM & 20 GB of disk space
• Full-fledged OpenShift cluster: running a 3-node cluster with 3 VMs requires 12 GB of RAM and 80 GB of disk space
6. Lesson 1: Understanding OpenShift
Objectives:
• Understanding Containers & OpenShift
• Understanding the Red Hat Container Management Solution
• Understanding OpenShift in a Container Environment
• Understanding OpenShift in a DevOps Environment
• Understanding OpenShift Architecture
• OpenShift vs. Kubernetes (which features are similar, and which are different)
• Understanding the role of OpenShift in a Hybrid Cloud environment
8. Understanding OpenShift
• The OpenShift Container Platform (OCP) allows developers to easily build an environment based on source code that they insert into the system
• Using OpenShift allows developers to bring applications to market without delay
• OpenShift supports code written in many programming languages
• OpenShift is a PaaS solution that is built on top of Kubernetes
• The result is a container that will be orchestrated by the integrated Kubernetes layer
11. Understanding Containers
• Containers are the modern-day replacement for applications that are installed on servers
• Containers contain all dependencies required to run an application and are started on top of a container engine
• Containers do not include a kernel, but run on the host OS kernel
• Docker is the most common container solution
• The Docker engine is a common engine, but not the only one: in RHEL 8, for example, containers can run natively on top of the RHEL OS. It is still a fast-moving technology that is always subject to change.
12. PaaS?
• Platform as a service (PaaS) is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app (Wikipedia)
• OpenShift is a PaaS solution that adds different PaaS features to a Kubernetes/Docker environment:
• Remote management
• Multitenancy
• Security
• Monitoring
• Application life-cycle management
• Auditing
13. Understanding Kubernetes
• Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services
• Containers need to be orchestrated
• When containers run in an enterprise environment, you need an HA system, which must be orchestrated
• Created by Google in 2014, based on Google’s Borg
• Kubernetes orchestrates computing, networking, and storage infrastructure
• OpenShift is built on top of Kubernetes, so OpenShift doesn’t have to recreate everything; Kubernetes is currently the de facto standard for container orchestration
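What "orchestrating computing, networking, and storage" looks like in practice is a declarative manifest. A minimal Deployment sketch (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative image
        ports:
        - containerPort: 80
```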
15. Understanding Podman
• RHEL 8 includes Podman, a solution to run containers natively on top of RHEL
• No need for Docker
• Podman is for stand-alone containers and is useful for running individual containers without any enterprise features
• If the host fails, the container fails with it, and no other host will take over the container
• Difference from Docker: Podman runs containers with a random UID and not as root
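Running a stand-alone container with Podman looks much like Docker, minus the daemon. A sketch (the image name and port are illustrative):

```shell
# Rootless: no daemon, and the container does not run as root on the host
podman run -d --name web -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24
podman ps                         # list running containers
podman stop web && podman rm web  # no other host takes over if this one fails
```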
16. Container Operating Systems
• Containers can run on top of a full Linux distribution
• For increased efficiency, it’s better to run containers on top of a container OS
• Container Linux (formerly CoreOS) is a container OS whose vendor, CoreOS, was acquired by Red Hat
• It is already integrated in OpenShift as a container OS that has been in development for a while
17. OpenShift
• OpenShift is a platform that integrates container management and application builds in an enterprise platform
• OpenShift exists in different forms:
• OKD (previously known as OpenShift Origin) – free
• OpenShift Container Platform – Red Hat’s commercial solution
• OpenShift Online – a multitenant version of OpenShift with infrastructure managed by Red Hat
• OpenShift on public cloud platforms:
• Azure
• AWS
• Google Cloud Platform
• IBM Cloud
19. Using OpenShift to Manage Containers
How do we manage containers?
• Kubernetes is the de facto standard for managing and orchestrating containers
• OpenShift is not required for managing containers, but offers some significant benefits over Kubernetes:
• Strict security policies – much more secure than default Kubernetes
• Routers make it easier to access applications
• Better management of container images
• S2I (Source-to-Image): developers can automatically build containers from source code, and can even trigger a new build when the source code changes
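An S2I build can be started with a single command. A sketch using Red Hat's public Node.js sample repository (the app name is illustrative):

```shell
# Build a container image from source and deploy it in one step
oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git --name=myapp
oc logs -f bc/myapp   # follow the S2I build
oc expose svc/myapp   # create a route so the app is reachable
```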
21. Understanding CI/CD
• Continuous Integration (CI) is the integration of source code from multiple authors into a shared source code management (SCM) repository
• Git is such an SCM repository
• Such an environment supports multiple changes per day
• In OpenShift, Git push events can be captured and result in new containers being created automatically
• The result is Continuous Delivery (CD), an environment where new versions of the software are automatically deployed
• In the CI/CD process, pipelines play an important role
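Capturing those Git push events is done with a webhook trigger on the BuildConfig. A sketch (the BuildConfig name `myapp` is illustrative):

```shell
# Add a GitHub webhook trigger so a push starts a new build
oc set triggers bc/myapp --from-github
# Show the BuildConfig, including the webhook URL to configure in the Git repository
oc describe bc/myapp
```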
22. Understanding Pipelines
Pipelines are a representation of all steps in the CI/CD process:
• Build
• Test
• Packaging
• Documentation
• Reporting
• Deployment
• Verification
A common tool for working with pipelines is Jenkins
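The steps above are commonly expressed as a declarative Jenkinsfile. A minimal sketch with placeholder shell steps (the `make` targets are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Build')   { steps { sh 'make build' } }    // compile and assemble
        stage('Test')    { steps { sh 'make test' } }     // run the test suite
        stage('Package') { steps { sh 'make package' } }  // produce the artifact
        stage('Deploy')  { steps { sh 'make deploy' } }   // roll out the change
        stage('Verify')  { steps { sh 'make verify' } }   // post-deploy checks
    }
}
```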
23. Understanding OpenShift and DevOps
• For DevOps, using Infrastructure as Code is an important goal
• OpenShift goes beyond that and offers a solution to automate the build of containers without needing to know anything about the infrastructure
• Containers are a perfect solution to isolate the responsibilities of the developer and operations teams
• To do so, pipelines are integrated: a solution that allows teams to automate and organize all activities required to deliver software changes
• These pipelines are offered through integrated Jenkins Pipelines
• OpenShift supports all five stages of the DevOps application lifecycle
24. OpenShift and the DevOps Lifecycle
• Build: Developers can build applications quickly and easily, without the need
for IT operations to set up anything
• Test: Continuous Integration (CI) is offered through the built-in Jenkins CI
server and lets developers integrate code automatically with every change
• Operate: Continuous Delivery (CD) is offered using Pipelines to automate
every step of application delivery
• Deploy: Auto-scaling features ensure that the required number of
instances is available at all times
• Monitor: Metrics, health checks, and self-healing ensure that the
environment stays healthy
27. Understanding OKD
• OpenShift is using the OKD Project as upstream
• OKD = OpenShift Kubernetes Distribution
• Kubernetes is an important part of OpenShift
• OKD is a distribution of Kubernetes optimized for continuous
application development and multi-tenant deployment. OKD adds
developer and operations-centric tools on top of Kubernetes to
enable rapid application development, easy deployment and scaling,
and long-term lifecycle maintenance for small and large teams. OKD is
the upstream Kubernetes distribution embedded in Red Hat
OpenShift. (okd.io)
28. Understanding OpenShift on Kubernetes
• OpenShift adds features on top of Kubernetes, but uses the core
Kubernetes infrastructure
• OpenShift adds resource types to the Kubernetes environment and
stores them in Etcd
• Most OpenShift services are implemented as Docker containers
• OpenShift adds xPaaS, middleware services that can be offered as
PaaS, by adding JBoss middleware solutions
• xPaaS = aPaaS, iPaaS, bpmPaaS, dvPaaS, mPaaS + OpenShift
• Some Kubernetes resource types are not available in OpenShift
29. Understanding the Purpose
• Kubernetes focuses on providing container orchestration
• OpenShift adds features to that:
• A build strategy to build source code
• Built in container registry
• Version control integration
• Security
30. Shared Resource Types
• Kubernetes and OpenShift share some resource types:
• Pods
• Minimal entity that is managed in OpenShift or Kubernetes environment
• Typically contains a container
• OpenShift doesn’t run containers by themselves; to run containers, OpenShift manages Pods
• Usually contains only one container, but this depends on the Microservices architecture
• Namespaces
• Called projects in OpenShift
• Provide a strictly isolated environment, offered by the Linux kernel
• Impossible for pods running in one namespace to interfere with pods running in a different namespace
• Deployment Config
• The configuration file that defines the application
• Among other things, it takes care of replication: the number of instances of an application that you want to run
• Services
• Exposing the application to the outside world
• Persistent Volume and Volume Claims
• Used for setting up storage
• A persistent volume is the external storage that you want to use in the OpenShift environment
• A volume claim is the claim that the deployment config uses to get space in a persistent volume
• The volume claim allows the deployment config to tell the persistent storage, “hey, I need 5GB”
• Secrets
• Solution to store secret information and connect that to the pod (API keys, password, SSH keys, etc)
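The "hey I need 5GB" claim above can be sketched as a minimal PersistentVolumeClaim; the claim name is hypothetical:

```yaml
# Illustrative claim; requests the 5GB mentioned above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-storage
spec:
  accessModes:
  - ReadWriteOnce        # mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi
```

A deployment config can then reference this claim as a volume for its pods.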
31. OpenShift Resource Types
Some resource types are unique to OpenShift
• Images
• The product delivered by Source-to-Image
• In Kubernetes, the image usually comes from Docker, or is a manually created image
• OpenShift integrates the image build process
• Image Streams
• A tagged reference to an image; tags can be used to assign new version numbers, etc.
• Templates
• Allow you to run applications in a standardized way
• Build Config
• Defines how an image is built in the OpenShift environment
• Routes
• A solution that allows you to create a DNS name (FQDN), which can be used to access the application
publicly (over the Internet, an internal network, etc.)
• There is no such resource type in Kubernetes
33. Understanding Hybrid Cloud
• Hybrid Cloud is a cloud that combines different types of cloud services
• This can be a private cloud vs public cloud
• But also IaaS cloud and PaaS cloud
• OpenShift is a hybrid cloud solution, as it allows you to run containers on
any IaaS cloud solution
• The IaaS cloud is a solution for managing large infrastructures
• OpenShift is the solution to easily deploy an application on top of that
infrastructure
• In an OpenShift context, the Hybrid Cloud provides ultimate flexibility by
combining containers and IaaS cloud
34. Understanding the IaaS Layer
• The IaaS layer offers flexibility in deploying an infrastructure
• OpenShift can be installed on a traditional physical data center
• But for more flexibility to scale up host machines in a dynamic and automated way, we need IaaS
cloud
• In IaaS, every part of the infrastructure can be automated
• Virtual machines
• Storage volumes
• Subnets
• Firewalls
• If we install OpenShift on top of IaaS, we get two layers of automation: the
infrastructure level and the application level
• Automated deployment offers the flexibility that is required to easily scale up applications
• With just IaaS, automated application deployment is difficult; automation exists only at the infrastructure level
35. Understanding the OpenShift Layer
• OpenShift allows developers to define an application in a simple
YAML file that will fetch the source code from a GitHub repository
• OpenShift on IaaS allows developers to focus on the application,
while ignoring the required underlying infrastructure
• Ansible can be used for full integration and automation: Ansible is the
solution for automation of everything
39. OpenShift Installation Options
• Red Hat OpenShift
• Licensed version of OpenShift, used by companies and enterprises
• Can be installed as an on-premise cluster
• Can be also installed in Public or Private Cloud
• OKD
• Community Supported
• Minishift (POC only)
• A nice way to get to know OpenShift
• Requires only 4GB of RAM
• OKD in a container: oc cluster up
• OKD in public or private cloud
• Install as an on-premise cluster
41. MiniShift Installation Options
• MiniShift is available for different operating systems
• You will need a hypervisor
• MacOS: xhyve
• Linux: KVM
• Windows: Hyper-V
• Cross Platform: VirtualBox
• Basically it’s a VM
43. Managing Minishift Addons
• Minishift, by default, has a couple of restrictions which prevent
certain security settings from working
• To make Minishift more relaxed, you’ll need to enable some addons:
• minishift addons list – shows current addons
• minishift addons enable admin-user – creates a user with cluster admin
permissions
• minishift addons enable anyuid – allows you to log in using any UID
• It makes sense to use the admin user in Minishift, since you will need
it for infrastructure-related tasks, and Minishift will most likely be a
single-user environment
44. Installing the OpenShift Client
• The oc client is used on all types of installations
• Download the client software from www.okd.io
• Extract and copy the oc binary to /usr/local/bin or add its directory to
the PATH environment variable
• After extracting, type oc or oc status to verify the command
availability
45. Add minishift & oc to the PATH environment variable
• Start > “Edit the system environment variables”
• Environment Variables…
46. Try some commands
• minishift addons list
• oc status
• oc whoami
• oc login -u developer -p anything
48. Understanding oc cluster up
• Runs a couple of containers directly on top of Docker
• Requirement: Docker CE and OpenShift client
• oc cluster up method uses Docker engine and the OpenShift client
utility to spin up a proof-of-concept cluster
• Use it as an alternative for Minishift
49. Using oc cluster up
• Always check the current version of the documentation
• Install docker-ce
• Edit the file /etc/docker/daemon.json:
{
"insecure-registries": ["172.30.0.0/16"]
}
• This allows running the Docker registry in a private network
• systemctl daemon-reload; systemctl restart docker
• Disable the firewall
• Run docker run nginx to create a local config and start a test container
• Type sudo oc cluster up, takes about 10-15 minutes
• Check using docker ps
• Shutdown: oc cluster down
50. Lesson 3: Getting Started With OpenShift
• Getting Started with the Web Console
• Understanding Resource Types: Pods & Namespaces
• Understanding Resource Types: Deployment Configs & Networking
• Managing Resources from the Command Line
• Using Source-to-Image to Create Application
• Basic OpenShift Troubleshooting
52. Understanding Projects - 1
• OpenShift is oriented around the project
• An isolated environment
• Different items exist within a project
• Applications: the containers that provide services
• Builds: the process that defines how to build the container from a repo
• Resources: additional optional configuration
• Storage: persistent storage that can be used by the applications
• Tip: OpenShift cheat sheet
• https://is.gd/openshift_cheatsheet
53. Understanding Projects – 2
• In OpenShift, you would deploy applications (microservices). Each application consists of
different projects, where a project is a part of the application stack
• Projects: a project is a Kubernetes namespace that contains all services running in the
OpenShift application and works as a strictly separated environment
• Useful in multi-tenant deployments, where customer A and customer B can have completely
separated environments
• Namespaces are implemented by the Linux kernel; they separate the network, filesystem, etc.
• Specific users may have access to specific projects only
• Type oc config get-contexts to see all current projects (all users) and oc projects to see
your own projects (your account)
• After logging in, you’ll see which projects you have access to
• Use oc project myproject to switch to a different project
• Resources will always be specific to a project
• If you run an application in a project, it will not be visible in another project
54. Demo: Creating an Application
• From Catalog, select PHP, version 7.1
• Provide a name to the application
• Specify the git repository to use
• https://github.com/WordPress/wordpress.git
• Click Create to launch, next close that window
• Now go to Overview, where you can see the application being built. Click it to see
details
• Next, select Builds, where you can see the actual build
• Click on the build details to explore what it is doing
• At the end of the build, an image is created and pushed to the OpenShift container
registry
• Check success in the Events log
• Check Routes; it contains the DNS name used to get to the application
55. Understanding Resource Types: Pods and
Namespaces
• OpenShift runs containers
• But OpenShift doesn’t manage containers directly; it manages pods
• It uses Deployment Configs to manage pods
56. Understanding Resource Types
• The result of your efforts in OpenShift is a microservice – also referred to as an app
• The app is created in an OpenShift Project, which corresponds to a Kubernetes
namespace – an isolated environment implemented by the Linux kernel
• An app consists of different resources – like a building block
• The resource types are specified in the OpenShift API
• The OpenShift API defines the resource types; if the API is updated, new resource types become
available
• As OpenShift is built on top of Kubernetes, most resource types from the
Kubernetes API are also supported
• There are two options to create an app (and all required resources)
• Use oc new-app
• Create a manifest file in YAML to identify all the different resources
57. Understanding Namespaces
• Namespaces are an important part of OpenShift from an architectural point of view
• A Kubernetes namespace is a group of isolated resources that behaves as a cluster, in OpenShift
we call this a project
• Namespaces implement isolation at the Linux kernel level and are available at different levels
• mount -> filesystem; presents only one specific area of the filesystem
• PID -> process table; each container can only see its own PID table and cannot see what’s happening in
another namespace
• network -> makes every namespace an isolated network; namespaces can only communicate with each other
through routing
• IPC -> inter-process communication is limited to processes within the namespace; communication
outside the namespace is not possible
• User ID -> you can have users with the same ID and name in different namespaces, as if they were on different
computers
• cgroup -> Linux feature that allows resource allocation, to make sure that every container has dedicated
RAM, CPU cycles, and so on
• By using namespaces, a strictly isolated environment can be implemented
58. Understanding Pods
• An application is defined in an image
• Analogy: it’s like an ISO file, installer
• A container is a run-time instance of an image
• A Pod is a solution to run groups of containers
• Using Pods allows you to group multiple applications
• Usually we will have only one container in a Pod, following Microservices best
practice
• Containers in a pod have an isolated pid namespace and filesystem
namespace, but share the same network namespace, volumes, and
hostname.
• Containers in a Pod will always run on the same host
• It’s not possible to spread out containers if they are in the same pod
59. Demo
• oc whoami
• oc get pods
• Get information
• The -build pod reveals information about the build process: getting source from the repo,
etc.
• oc get all
• We did not create a pod, we created an application
• It lists all the components/resources created when we created the application
• The most important one here is the deploymentconfig; it is what is used to
run the different pods
60. Create a YAML file to create a pod – helloworld.yaml
apiVersion: v1
kind: Pod
metadata:
  name: examplepod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: ["echo"]
    args: ["hello world"]
62. Understanding Deployment Config
• To run Pods, you’ll create a Deployment Config, as it adds useful
features to the Pods
• From the user’s perspective, creating a new app means creating a new
deployment config
• One of the features: the Replication Controller, which takes care of the replication
of pods and is part of the deployment
• Update Strategy is also a part of the deployment
• Rolling update: maintains the desired amount of pods
• Recreate: stops all Pods and deploys new Pods
• Custom: allows you to run any command in the deployment
• Triggers define when a new deployment should be created
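The replica count, update strategy, and triggers described above fit together in a DeploymentConfig roughly as follows; the app name and image are made up for illustration:

```yaml
# Sketch of a DeploymentConfig; names and image are hypothetical
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 3                 # maintained by the replication controller
  selector:
    app: myapp
  strategy:
    type: Rolling             # keeps the desired number of pods during updates
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
  triggers:
  - type: ConfigChange        # redeploy on configuration change
  - type: ImageChange         # redeploy when a new image is available
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest
```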
63. Understanding Deployment Triggers
• When critical components change, you would like a new deployment
to be generated automatically
• Use oc describe on a deployment and look for triggers to figure out
the default triggers
• ConfigChange: triggers a new deployment on configuration change
• Image: triggers a new deployment when a new image is available
• Manual triggers can be issued, using oc deploy myapp --latest
64. Understanding Replication Controllers
• The Replication Controller (RC) is a part of the Deployment Config
• RC uses labels and selectors to track availability of Pods
• Every pod has a label by default
• Manual labels can be set as well
• The RC uses a selector to specify which labels should be used
• Use oc get pods --show-labels to show the labels that OpenShift has
automatically added
• Use oc describe rc <name> to see the current selector that is used
65. Understanding Services and Route
• If you look at the overview tab in OpenShift, you can see available
applications, including the URL you need to access the application
• For replicated applications, there’s a load balancer behind the scenes that decides
which pod to connect to
• The service takes care of load balancing, and gives one identity
• The route is what gives a published URL, and what allows access to
the application from outside the cluster
• In Kubernetes, routing is based on the Ingress controller, which needs additional
configuration
66. Demo
• oc get dc
• The demo app; in the web console it’s called an app
• Triggered by: shows the values that will trigger a new deployment
• oc get rc
• Information about the replication controller
• How many replicas are there?
• oc get pods --show-labels
• The labels are shown here, app=demo-app, etc
• It connects the pods to the deploymentconfig
• oc describe rc demo-app-1
• We can see the complete configuration
• Name, namespace, selector, labels, replica, strategy, status, containers, image etc
67. Demo: Managing Resources from the
Command Line
• oc login -u developer -p anything
• oc new-project firstproject
• oc new-app --docker-image=nginx:1.14 --name=nginx
• oc status (use repeatedly to trace the process)
• oc get pods
• oc describe pod <podname>
• oc get svc
• oc describe service nginx
• oc port-forward <podname> 33080:80
• curl -s http://localhost:33080
68. Demo: Creating another App
• oc whoami
• oc new-project mysql
• oc new-app --docker-image=mysql:latest --name=mysql-openshift -e
MYSQL_USER=myuser -e MYSQL_PASSWORD=password -e
MYSQL_DATABASE=mydb -e MYSQL_ROOT_PASSWORD=password
• oc status -v
• oc get all
• oc get pods -o wide
• Log in to the web console and see the new app in a different project
69. Using Source-to-Image to Create Applications
• An important part of OpenShift that allows developers to
automatically build containers based on source code in a Git repo
70. Understanding S2I
• To create images automatically, a Dockerfile could be used
• Source-to-Image (S2I) takes application source code from a source
control repository (such as Git) and builds a container image from it, based
on a builder image, to run the application
• While doing so, the image is pushed to the OpenShift registry
• Using S2I allows developers to build running containers without the
need to know anything about the specific OS platform
• S2I also makes it easy to patch: after updating the application code a
new image is generated
• This process is handled as a rolling upgrade
71. Image and Image Streams
• OpenShift works with Image Streams
• An Image Stream is a consolidated view on related images
• An image is a runtime template that contains all data that is needed to run
a container
• This includes metadata that describes image needs and capabilities
• Images in an image stream are identified by a tag, and can be specified as
such
• image=nginx:1.8
• Two types of images exist
• Builder images are used in the S2I process to build applications
• The result is a runtime image that is used to start an application
• Like an ISO file that is used to spin up the application
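The tagged-image idea above can be sketched as an image stream; the stream name and source image below are illustrative:

```yaml
# Hypothetical image stream that tags a specific nginx version
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: nginx
spec:
  tags:
  - name: "1.8"               # matches image=nginx:1.8 in the example above
    from:
      kind: DockerImage
      name: docker.io/library/nginx:1.8
```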
72. Exploring Builder Images
• Default Builder Images are available in OpenShift
• Check the Catalog in the browser interface
• PHP, etc
• Or use oc get is -n openshift for an overview
• Alternatively, builder images can be created by the administrator
73. Understanding the S2I flow
• To build an image based on source code, a base image is required; this image
is known as the builder image and is used as a runtime environment
• Base builder images such as Python and Ruby are included
• Builder Images are available in the catalog that you see in the web interface
• When either the application source code or the builder image gets
updated, a new container image can be created
• Applications need to be updated after a change of either the application
code, or the builder image itself
• Applications are built against image streams, which are resources that
name specific container images with image stream tags
• The base S2I images may be obtained from a trusted repository, or can be
self-built
74. Building an Application - 1
• The oc new-app command is used to build the application from a Git repository
• Use oc new-app php~http://github.com/sandervanvugt/simpleapp --name=myapp to build the
application from the git repository
• In this command, the php part in front of the URL indicates the image stream that is to be used
• If no image stream is given, the oc new-app command tries to detect which image stream to use based
on the presence of some files
• Use oc new-app -o yaml php~http://github.com/sandervanvugt/simpleapp --name=myapp >
s2i.yaml to automatically generate a YAML definition file that contains all resources to be created
• The app itself is NOT created
75. Building an Application - 2
• After creating the new application, the build process starts. Type oc
get builds for an overview
• A buildconfig can be used to trigger a new build
• The BuildConfig pod is responsible for creating images in OpenShift and
pushing them to the internal Docker registry
76. Explore New App YAML file
• kind: ImageStream
• kind: BuildConfig
• source: describes where the source comes from
• strategy: defines how we want to build the source
• kind: DeploymentConfig
• Labels that we have set
• Number of replicas
• Containers that have been built in the previous step
• :latest as the latest image
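The BuildConfig fields listed above fit together roughly as follows; the app name and output tag are assumptions:

```yaml
# Sketch of an S2I BuildConfig; names are illustrative
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: simpleapp
spec:
  source:                     # where the source comes from
    type: Git
    git:
      uri: http://github.com/sandervanvugt/simpleapp
  strategy:                   # how we want to build the source
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: simpleapp:latest  # image pushed to the internal registry
  triggers:
  - type: ConfigChange
  - type: ImageChange
```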
77. Demo: Building an Application
• oc logs -f bc/simple-app to track the progress
• oc status – simpleapp is now deployed
• oc get all
• Now we have the pod, replicationcontroller, service, deploymentconfig, and
buildconfig
• oc get builds
• Info about the build that we just completed
• oc describe build simple-app-1 (name of the build from the previous
command)
78. Basic OpenShift Troubleshooting
• oc get events will show recent events
• oc logs <podname> will show what has happened on a specific pod
• oc describe pod <podname> will show all pod details
• oc projects will show all projects, you might be in the wrong project!
• oc delete all -l app=simpleapp will delete everything using that label
• When we create an app we also create a Pod, DeploymentConfig,
ReplicationController, BuildConfig, etc. It’s better we delete all of them based
on the label
• oc delete all --all
• Delete everything in the current project
82. Understanding OpenShift SDN
• In Docker, containers connect to a host-only virtual bridge
• Communication with containers on other hosts goes through port mapping
• Container ports are bound to ports on the host
• OpenShift SDN decouples the control plane from the data plane and thus
implements SDN
• SDN is implemented with plugins
• A plug-in adds knowledge about specific networking to the infrastructure
• The cluster network is created using Open vSwitch
• Master nodes do not have access to containers, unless this was specifically enabled
• This is a security feature
83. Understanding OpenShift SDN Plug-ins
• ovs-subnet: provides a flat pod network where every pod can communicate
with every other pod and service
• It is an Open vSwitch plugin, hence ovs
• ovs-multitenant: isolates networking per project
• Each project gets its own Virtual Network ID (VNID)
• Pods can only communicate with Pods that share this VNID
• Pods with VNID 0 can communicate with all other pods and vice-versa
• Usually for management / administrative pods
• The default project (all the management containers for OpenShift) has a VNID of 0
• ovs-networkpolicy: allows administrators to define their own policies
• To do so, NetworkPolicy objects are used
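As a sketch of such a NetworkPolicy object, the following hypothetical policy allows pods in a project to receive traffic only from other pods in the same project:

```yaml
# Illustrative policy: only pods in the same namespace may connect
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}             # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}         # allow traffic only from pods in this namespace
```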
84. Understanding Pod Networking
• Each pod has its own unique IP Address
• Containers within a pod behave as if they are all on the same host
• As mentioned previously, each pod usually only has one container
• As a result, pods are treated like physical or virtual machines
• To access Pods, services are used
85. Understanding Pod Networking
[Diagram: Pod1 has a single IP address and contains containers C1, C2, and C3;
from the outside, each container can only be reached through a port on the
pod’s single IP address]
86. Understanding Services
• Services implement round-robin load balancing to access pods
• We can have multiple pods that are presented identically to the end user; let’s say we have
replicas
• We need to load balance them
• The service has a stable IP address and allows communication with pods for
external clients
• Services also allow replicated pods to communicate to one another
• Services use a selector attribute to connect to Pods
• Each pod matching the selector is added to the service resource as an endpoint
• Pod as well as service IP addresses cannot be reached from outside the cluster
(pod uses a private IP)
• We will use a router instead, to be able to access the pods externally
87. Understanding Services
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  ports:
  - name: 8080-tcp
    port: 8080          # the exposed port
    protocol: TCP
    targetPort: 8080
    nodePort: 38080
  selector:             # selects which pods are managed by this service
    app: my-app
    deploymentconfig: my-app
  type: NodePort
88. Getting Traffic in and out of the Cluster
Three methods exist for clients that need access to the OpenShift service
• HostPort/HostNetwork: clients can reach the Pod directly by using forwarded
ports. Ports in the pod are bound to ports on the host where they are running.
Escalated privileges are required to use this method
• Not flexible, and it requires privilege escalation, so not very common
• NodePort: the service is exposed by binding to available ports on the node host.
The node host proxies connections to the service IP address
• NodePort supports any traffic type
• Nodeports are in the range of 30000-32767 by default. This can be changed
• If not specified, a random nodePort is assigned by OpenShift.
• One usually specifies the port in the default range, as shown in previous YAML example
• OpenShift routes: services are exposed using a unique URL
• Routes support HTTP, HTTPS, TLS with SNI and WebSockets only
• Web based protocol, like a reverse proxy
89. How They All Interconnect To Each Other
[Diagram: external clients resolve a DNS name and reach a Route; the route
forwards to a virtual IP (e.g. 1.2.3.4) exposed through NodePorts; a
round-robin load balancer distributes traffic to services on port 8080, which
reach pods P1–P4]
90. Understanding Routes
• OpenShift routes allow network access to pods from outside the OpenShift
environment
• If you want your app to be accessed by external users, you will need route
• A dedicated router pod is used to load-balance traffic between the target
Pods
• The router pod uses HAProxy and can be scaled itself
• The router pod queries the Etcd database on the OpenShift master to get
information about the Pods
• The router exposes a public-facing IP address and DNS hostname to the
internal Pod networking
• Routers connect directly to the Pods; the service is used for Pod lookup
only but not involved in the actual traffic flow
92. Routers – Behind The Scene
• oc whoami
• Need to be system:admin
• oc projects
• oc get all -n default
• pod/router-xxxx
• oc describe pod/router-xxxxx -n default
93. Creating Routes
• oc expose service my-app --name my-app [--hostname=my-
app.apps.example.com] to create a route on top of an existing service
• Specify DNS name only if this name can be resolved to a wildcard DNS domain
name
• If a DNS name is not specified, a name will be automatically generated
• Alternatively, use oc create combined with a YAML or JSON file
• Note that oc new-app does NOT create a route
• Because you don’t want your newly deployed application to be automatically
exposed, for security reasons
• Use oc delete route to un-expose a service
94. Managing Router Properties
• The default routing subdomain is set in the master-config.yaml
OpenShift configuration file
routingConfig:
subdomain: apps.example.com
• Notice that the router must be able to bind to port 80 and 443, do
NOT run a router on a host that already uses these ports for
something else
95. Understanding Router Types
• Secure routers can use several types of TLS termination
• Edge Termination: TLS is terminated at the router, and traffic from router to
Pods is not encrypted
• Pass-through Termination: the router sends TLS traffic straight through to the
Pod and the Pod is responsible for serving certificates
• Re-encryption Termination: the router terminates the TLS traffic and re-
encrypts traffic to the endpoint
• Unsecure routers don’t do TLS termination, so they are easier to set up
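An edge-terminated route as described above can be sketched as follows; the hostname and service name are hypothetical:

```yaml
# Sketch of a secure route with edge TLS termination
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com   # must resolve via the wildcard DNS domain
  to:
    kind: Service
    name: my-app                  # used for pod lookup only
  port:
    targetPort: 8080
  tls:
    termination: edge             # TLS ends at the router; router-to-pod traffic is unencrypted
```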
96. Try To Create Routes
• oc whoami
• As developer
• oc get all
• Find out what pods and services we have
• oc expose [servicename]
• oc expose svc/httpd
• oc expose httpd --name httpd
• oc get all
• Now it’s there
• oc describe route [routername]
• oc describe route httpd
• Pay attention to Requested Host:
• Endpoints: -> how we get to the Pod
98. Understanding Application Scaling
• Application Scaling is handled by the replication controller
• The replication controller ensures that the number of pods that is
specified in the replica count is running at all times
• To do so, the replication controller monitors the pods, using labels as
the selector
• This selector is a set of labels that exists in the Pod as well as in the
Replication Controller
• Replication Controllers can be managed directly, but it’s
recommended to manage them through Deployment Configs
99. Scaling Applications
The number of replicas can be scaled manually or automatically using
Autoscale
• Manual Scaling
• oc get dc
• oc scale --replicas=5 dc simpleapp
• Autoscaling
• The HorizontalPodAutoscaler resource type is used to automatically scale
based on current load on application pods
100. Understanding Autoscaling
• The HorizontalPodAutoscaler uses performance metrics that are collected
by the OpenShift Metrics subsystem
• If this subsystem is in place, use oc autoscale dc/myapp --min 1 --max 10 --
cpu-percent=80 to scale automatically
• This command creates a HorizontalPodAutoscaler object that changes
the number of replicas such that the pods are kept below 80% of CPU
usage
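The HorizontalPodAutoscaler object created by that command looks roughly like this; the app name is illustrative:

```yaml
# Sketch of the HPA created by: oc autoscale dc/myapp --min 1 --max 10 --cpu-percent=80
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # keep pods below 80% CPU usage
```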
101. Manual scaling
• oc new-app -o yaml php~https://github.com/sandervanvugt/simpleapp
--name=simpleapp > s2i.yaml
• Open the YAML file
• Go to the DeploymentConfig section
• replicas: 1
• Standard replication
• Deploy the app
• oc get dc
• Now we can see the replicas
103. Understanding Pod Scheduling
• Pods by default are distributed between the nodes in a cluster
• The scheduling process can be manipulated, using different items
• Zones and Regions
• Node labels
• Affinity rules and anti-affinity rules
• All nodes, including the master can run Pods
• You should only run the web console Pod on the master
• Use the Ansible variable osm_default_node_selector to enable/disable
running pods on the master
• This is configured during installation of OpenShift cluster
104. Understanding the Pod Scheduler Algorithm
• Pod scheduling is a 3-step process
• Filter nodes
• The scheduler filters nodes according to node resources that are required by pods
• Some pods may require something specific, like SSD storage
• Node selectors can be used in this process
• Pods can also request access to specific resources
• Prioritize the filtered list of nodes
• Affinity rules: used to ensure that Pods that belong together run close to each other
• Anti-affinity rules: ensures that Pods will not run close to each other
• Select the best fit node
• The algorithm applies a score to each node
• The node with the highest score will run the pod
105. Understanding Topology
• Topology can be applied to make scheduling easier in large datacenters
• A topology consists of regions and zones
• A region is a set of hosts with a guaranteed high-speed connection
between them, typically in the same geographical area
• A zone is a set of hosts that share the same infrastructure components
(network, storage, power), and for that reason might fail together
• For example, resources that run in the same rack in a datacenter
• OpenShift can use region and zone labels in pods
• Replica pods are scheduled on nodes in the same region by default
• Within a region, replica pods are spread across nodes with different zone labels
106. Setting Topology Labels
• By default, nodes get the region=infra label
• Administrators can use the oc label command to set labels on nodes
• oc label node node1.example.com region=eu-west zone=rack1 --overwrite
• oc label node node2.example.com region=eu-west zone=rack2 --overwrite
• To show nodes and their labels, use oc get node node1.example.com
--show-labels
107. Taking Down a Node
Sometimes you need to take down a node
• To take down a node, OpenShift has a two-step process
• First, mark the node as unschedulable: oc adm manage-node --schedulable=false node1.example.com
• Next, drain the node. This evicts all pods running on the node so that they are re-created somewhere else: oc adm drain node1.example.com
• Once maintenance is finished, make the node schedulable again: oc adm manage-node --schedulable=true node1.example.com
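The two-step procedure above can be collected into one sketch; the node name is an example, and whether extra drain flags are needed depends on the workloads running on the node:

```shell
# 1. Mark the node unschedulable so no new pods land on it
oc adm manage-node --schedulable=false node1.example.com

# 2. Evict the pods; --ignore-daemonsets is often needed because
#    daemonset pods cannot be rescheduled elsewhere
oc adm drain node1.example.com --ignore-daemonsets

# ... perform maintenance on the node ...

# 3. Make the node schedulable again
oc adm manage-node --schedulable=true node1.example.com
```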
108. Using Node Selectors
• Node labels and node selectors can be used to ensure a Pod is
scheduled on a specific node
• A node selector in the pod definition matches labels that are set on nodes
• To set a node selector, change the pod definition using oc edit or oc patch
• oc patch dc myapp --patch '{"spec":{"template":{"spec":{"nodeSelector":{"env":"qa"}}}}}'
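Putting the node label and the selector together, a minimal sketch; the node name, the label key `env`, and the deployment config name `myapp` are illustrative:

```shell
# Label the target node; pods selecting env=qa will only land here
oc label node node1.example.com env=qa

# Patch the deployment config so its pod template carries the selector
# (for a DeploymentConfig the nodeSelector sits in spec.template.spec)
oc patch dc myapp --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"env":"qa"}}}}}'
```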
109. Understanding the Default Project
• Upon installation, the default project is created
• In bigger clusters, it’s a good idea to use this project to run
infrastructure pods such as the router and internal registry
• To do this, label the dedicated infrastructure nodes with the region=infra label
• Next, use oc annotate to add a node selector to the namespace: oc annotate --overwrite namespace default openshift.io/node-selector='region=infra'
• This makes sure that pods in the default project are scheduled on those specific nodes only
111. Understanding Images
• An image is a deployable runtime template that includes all that is needed
to run a container
• In OpenShift, a single image can refer to different versions of the same
image. Docker does not use version numbers, but tags to refer to specific
versions of an image
• An image stream comprises a number of container images identified by
tags
• It is a consolidated view of related images
• In OpenShift, deployments and builds can receive notifications when new
images are added, and as a result trigger a new build or deployment to be
started
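As an illustration of the image stream concept above, a minimal ImageStream manifest might look like the following sketch; the stream name, tag, and external image reference are assumptions:

```shell
# Hypothetical image stream with one tag that tracks an external image.
# Builds and deployments can watch this tag and trigger when it updates.
cat <<'EOF' | oc create -f -
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
  - name: "2.4"                           # tag inside the stream
    from:
      kind: DockerImage
      name: docker.io/library/httpd:2.4   # external image this tag tracks
EOF
```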
112. Getting Images
• OpenShift has many ways to get an image
• Use default images from the image repositories
• Use S2I to build images based on source code
• Use a Dockerfile to build your own image and store it in the internal registry
• Use buildah to build custom images
113. Understanding Tags
• Tags are used to identify what an image contains
• Tags should be set and used in a way that they are updated if a new version
is available
• myimage:v2.0.1 is a good tag
• myimage:v.2.0.1-nov20 is not a good idea
• For example, a developer that has an Apache image, can tag it with the
Apache version that is in the image, as apache:2.4
• The oc tag command is used for tagging images
• oc tag nginx:1.12 nginx:latest makes the "latest" tag refer to the 1.12 version of the image
• That way, users that pull the latest tag always get this software version
114. Understanding Templates
• A template is a ready-to-use file that allows you to create multiple
related objects in OpenShift in an easy way
• Templates contain not just the objects, but also the parameters that
you want to be edited
• Templates can be used to create any object
• Administrators can write their own templates in YAML or JSON, or the provided instant app and quickstart templates can be used
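A minimal custom template might look like the following sketch; the template name, parameter name, and embedded pod object are illustrative, not from the course:

```shell
# Hypothetical template with one parameter and one object.
# The quoted heredoc keeps ${WEB_SERVER} literal for oc to substitute.
cat <<'EOF' | oc create -f -
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: demo-template
parameters:
- name: WEB_SERVER        # can be overridden with -p WEB_SERVER=...
  value: nginx            # default value
objects:
- apiVersion: v1
  kind: Pod
  metadata:
    name: web
  spec:
    containers:
    - name: web
      image: ${WEB_SERVER}
EOF
```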
115. Instant App and QuickStart Templates
• OpenShift comes with some default instant app and quickstart
templates
• These make creating applications for different languages easier
• Use the Catalog in the web interface to get started with a specific
template
• Or use oc get templates -n openshift to show templates
• oc process --parameters -n openshift mysql-persistent will show parameters supported by a template
• oc process -o yaml -n openshift mysql-persistent shows a generated
template where all parameters have obtained a default value
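The oc process forms above can also be piped into oc create to actually instantiate the template; a hedged sketch, where the parameter override is an assumption about what the template accepts:

```shell
# Inspect the parameters a template supports
oc process --parameters -n openshift mysql-persistent

# Render the template with a parameter override and create the objects
oc process -n openshift mysql-persistent \
  -p MYSQL_USER=dbuser | oc create -f -
```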
116. Creating Custom Templates
• To ease creation of objects, you can create your own custom templates
• To create an app from a template, use oc new-app --template=your-template
• It's a good idea to set default parameters in the template, but you can override these parameters as well: oc new-app --template=your-template -p WEB_SERVER=httpd
117. Demo
• oc get templates
• oc get templates -n openshift
• oc process --parameters mysql-persistent -n openshift
• oc process -o yaml -n openshift mysql-persistent
• Kind: Secret
• Contains password, username etc
• DeploymentConfig
• Replicas, name, containers with environment variables
123. Try the previous YAML with Environment
Variable
oc new-app --template=demo-template -p WEB_SERVER=httpd
124. Managing OpenShift Storage
• Understanding OpenShift Storage
• Configuring OpenShift Storage Access
• Setting Up NFS Persistent Storage
• Working With ConfigMaps
125. Understanding OpenShift Storage
• By default, container storage is ephemeral (temporary)
• OpenShift uses Kubernetes persistent volume to provide storage for pods
• In persistent storage, data is stored external to the Pod, so if the containers
shut down, the data is still available
• Persistent storage is typically some kind of networked storage provided by
the OpenShift administrator
• Persistent volumes are objects that exist independent of any Pod
• Developers create a persistent volume claim (PVC) that requires access to
persistent storage without the need to know anything about the underlying
infrastructure
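The relationship between a PV (created by the administrator) and a PVC (created by the developer) can be sketched with two minimal manifests; the NFS server address, paths, sizes, and names are illustrative:

```shell
# Hypothetical PV backed by an NFS export, plus a PVC that can bind to it
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:                            # backed by an NFS export
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteMany                 # matched against PVs offering RWX
  resources:
    requests:
      storage: 1Gi                # any matching PV of at least this size
EOF
```

Note that the PVC names no specific PV; the matching on access mode and size is done by the cluster.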
126. Supported Persistent Storage
• NFS
• GlusterFS
• OpenStack Cinder
• Ceph RBD
• AWS Elastic Block Store
• GCE Persistent Disk
• Azure Disk and Azure File
• VMware vSphere
• iSCSI
• Fibre Channel
• EmptyDir
• and others
127. Persistent Volume Access Modes
• The access mode determines how nodes can access the storage
• ReadWriteOnce: a single node has read/write access (only 1 node)
• ReadWriteMany: multiple nodes can mount the volume in read/write mode
• ReadOnlyMany: the volume can be mounted read-only by many nodes
128. Determining Storage Access
• The storage access type in a PVC is matched to volumes offering
similar access modes
• If a developer defines RWO in the PVC, it is matched to a persistent volume that offers the same RWO access mode
• Optionally, the PVC may request a specific storage class, using the storageClassName attribute. In that case, the PVC is matched to PVs that have the same storageClassName set
• This can be used to force the pod to use a specific kind of storage
• The PVC is not connected to any specific PV in any way
• The Pod itself has a connection to the PersistentVolumeClaim, NOT to the PersistentVolume
130. Creating PVs and PVC resources
• Objects need to be created in the right order
• First, the PersistentVolumes need to be created
• Next, the PersistentVolumeClaims are created
• Finally, the Pods are configured to use a specific PVC
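The last step of the order above, a pod referencing the PVC, might look like this sketch; the pod name, image, mount path, and claim name are illustrative:

```shell
# Hypothetical pod mounting storage through a PVC
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/www/html     # where the volume appears in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-claim         # refers to the PVC, never to a PV directly
EOF
```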
131. Using NFS for Persistent Volumes
• Mapping between container UIDs and NFS server UIDs doesn't work, as container UIDs are randomly generated
• To use NFS share as an OpenShift PV, it must match the following
requirements
• Owned by nfsnobody user and group
• Permission mode set to 700
• Exported using all_squash option
• Consider using async export option for faster handling of storage requests
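On the NFS server side, the requirements above translate to something like the following sketch; the export path is an example, and the commands assume a Linux NFS server where you run them as root:

```shell
# Prepare an export directory that matches the OpenShift requirements
mkdir -p /exports/data
chown nfsnobody:nfsnobody /exports/data
chmod 700 /exports/data

# Export with all_squash so all client accesses map to nfsnobody;
# async speeds up request handling at some durability cost
echo '/exports/data *(rw,all_squash,async)' >> /etc/exports
exportfs -r
```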