CONFIDENTIAL Designator
WELCOME
OPENSHIFT & CLOUD NATIVE SWITZERLAND MEETUP
Thank you to our sponsor!
AGENDA
Operators (35 min) - Simon
Automated security on CI/CD (35 min) - Raphaël
Break (40 min)
Rook.io (45 min) - Carlos
Cloud Native Automation (20 min) - Mathieu / Benoit
What we’ll be discussing today
What Operators are and how to use them,
including a practical demo
Operators
Simon Reber
Principal Technical Account Manager - OpenShift
Red Hat
For my new project I need
Kafka on
OpenShift!
Can you do this?
Introduction to Operators
Generally Available
Yesterday at the Daily Stand-Up
Of course -
I just need
some time!
Spending hours bouncing back and forth in YAML
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    app: kafka-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
[root@ocp ~]# oc create -f kafka.yml
Error on line 1: v1 does not exist
[root@ocp ~]# oc create -f kafka.yml
Error on line 24: syntax error
[root@ocp ~]# oc create -f kafka.yml
Error on line 26: syntax error
[root@ocp ~]# oc create -f kafka.yml
Success ->
[root@ocp ~]#
Cool thanks!
Are you aware of the
new version released
yesterday?
Five days and endless trial and error later
It works!!!
Bouncing back and forth in YAML - AGAIN???
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
[root@ocp ~]# oc create -f kafkav2.yml
Error on line 1: v1 does not exist
[root@ocp ~]# oc create -f kafkav2.yml
Error on line 24: syntax error
[root@ocp ~]# oc create -f kafkav2.yml
Error on line 26: syntax error
[root@ocp ~]# oc create -f kafkav2.yml
Success ->
[root@ocp ~]#
And what about:
● The next version
● Lifecycle in general
● Backups
● Resizing
● Healing
● Configuration changes
● Monitoring/Telemetry
● ....
Every application on any platform must
be installed, configured, managed, and
upgraded over time
Patching is critical to security
Operators
An Operator is a Site Reliability Engineer
implemented in software in a
Kubernetes-native way
For Builders and the community
● Easily create applications on Kubernetes via a common method
● Provide a standardized set of tools to build consistent apps
For application consumers and Kubernetes users
● Keep used apps up to date for security and app lifecycle management
● Consume Kube-native applications easily and correctly
https://github.com/operator-framework
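To make the "consume Kube-native applications easily" point concrete: with an Operator installed, the whole hand-written Service/Deployment dance above collapses into one high-level custom resource that the Operator reconciles and heals. A trimmed sketch of what such a resource can look like with the community Strimzi Kafka Operator (the field values here are illustrative, not a complete spec — check the Operator's own documentation for the exact schema):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                 # the Operator creates and heals the broker pods
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral           # illustrative; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
```

Upgrades, resizing, and configuration changes then become edits to this one object, with the Operator doing the rest.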
OPERATOR MATURITY MODEL
Phase I - Basic Install: automated application provisioning and configuration management
Phase II - Seamless Upgrades: patch and minor version upgrades supported
Phase III - Full Lifecycle: app lifecycle, storage lifecycle (backup, failure recovery)
Phase IV - Deep Insights: metrics, alerts, log processing and workload analysis
Phase V - Auto Pilot: horizontal/vertical scaling, auto config tuning, abnormal detection, scheduling tuning
Operators as a First-Class Citizen
● Operator Deployment
● Custom Resource Definitions
● RBAC
● API Dependencies
● Update Path
● Metadata
Diagram: the YourOperator v1.1.2 bundle is unpacked by the OPERATOR LIFECYCLE MANAGER into a Deployment, Role, ClusterRole, RoleBinding, ClusterRoleBinding, ServiceAccount, and CustomResourceDefinition.
Operator Lifecycle Management
Diagram: over time, the OPERATOR LIFECYCLE MANAGER follows a Subscription for YourOperator against the Operator Catalog, moving YourOperator v1.1.2 to v1.1.3 and, with it, YourApp v3.0 to v3.1.
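A subscription is itself just a small Kubernetes object. A minimal sketch, with a placeholder operator name and channel:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: your-operator            # placeholder name
  namespace: openshift-operators
spec:
  channel: stable                # update channel to follow
  name: your-operator            # package name in the catalog
  source: community-operators    # which catalog source to pull from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic # OLM applies upgrades as they appear
```

With `installPlanApproval: Automatic`, OLM installs v1.1.3 as soon as the catalog publishes it; set it to `Manual` to gate upgrades behind an approval.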
OperatorHub.io for the community
The public registry for finding Kubernetes-Operator-backed services
OperatorHub in OpenShift
● Discovery/install/upgrade of Operators
● Community, Red Hat products, Certified ISVs
● Granular access via specific Projects
● Developers can’t see admin screens
● Operator capabilities are exposed in the Catalog
● Self-service management
DEMO TIME!
See how OpenShift can trigger CI/CD
features to keep your containers up-to-date
when security vulnerabilities are discovered
Automated Security
in CI/CD
Raphaël Pinson
Infrastructure Developer & Training Leader
Camptocamp
Automated Security Patching
in the release process
OpenShift meetup Geneva
www.camptocamp.com
Infrastructure as Code (IaC)
Definition
Infrastructure as Code (IaC) is a method to provision and manage IT
infrastructure through the use of source code, rather than through
standard operating procedures and manual processes.
You’re basically treating your servers, databases, networks, and other
infrastructure like software. This code can help you configure and deploy
these infrastructure components quickly and consistently.
IaC helps you automate the infrastructure deployment process in a
repeatable, consistent manner, which has many benefits.
IaC best practices
■ Codify everything
■ Use version control
■ Define Code Review Processes
■ Continuously test, integrate and deploy
■ Document as little as needed
Release management & deployments
GitOps - Operation by merge requests
■ The entire system state is under version control
■ A single Git repository describes one or multiple namespaces.
This is related to access permissions.
■ Operational changes are made by merge request
■ Rollback and audit logs are provided via Git
■ When disaster strikes, the whole infrastructure can be quickly
restored from Git
Container Patching Challenges
■ Monitor upstream images for security
patches (or other changes)
■ Rebuild deployment image with
patched upstream image
■ Keep history of image references for
each release
■ Deploy patched deployment image in
development and/or integration
environment
■ Promotion to production
Container Patching Using OpenShift
ImageStream and Custom Build Strategy
■ An ImageStream is set up to listen for
image changes
■ A Custom Build is triggered to update
upstream image references in the source
repo
■ As the source code has changed, the default
build & deploy pipeline is executed
■ Finally, just review the diffs and accept
the generated merge request
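The ImageStream side of this setup can be sketched as follows; the upstream image name is a placeholder, and `importPolicy.scheduled` is what makes OpenShift poll the registry so the stream tag moves when the upstream is patched:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: upstream-base
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: registry.example.com/library/base:latest  # hypothetical upstream image
      importPolicy:
        scheduled: true   # periodically re-import so image-change triggers can fire
```

A BuildConfig with an ImageChange trigger pointing at this stream then kicks off the Custom Build whenever the tag is updated.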
Container Patching Demo Goal
As soon as an image used in the OpenShift cluster is updated (including for security
patches), you’ll find a brand new merge request in the release management repository
asking you if you want to deploy it.
■ Helm charts & Helmfile for release management
■ Gopass for encrypted secret management
■ GitLab for:
○ Source version control
○ Private Docker registry
○ Continuous integration
○ Continuous deployment
Container Patching Demo Tools
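A minimal Helmfile sketch for such a release-management repository; the chart path and values file are illustrative, the point being that the generated merge request only has to bump a pinned image reference under version control:

```yaml
# helmfile.yaml - declares the releases that GitOps merge requests update
releases:
  - name: my-app                  # illustrative release name
    namespace: my-project
    chart: ./charts/my-app        # chart vendored in the repo
    values:
      - values/my-app.yaml        # pins the image tag the merge request bumps
```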
Container Patching Demo Setup
Questions?
Break - 40 min
What’s the next storage solution for OpenShift?
Rook.io
Carlos Torres
Storage Solutions Architect
Red Hat
What's the next storage solution for OpenShift?
Carlos Domingo Torres
EMEA Specialist Solutions Architect
ctorres@redhat.com
Agenda
● What and Why?
● Architecture
● Use cases
● Sizing
● Demo
RED HAT CONFIDENTIAL
What & Why
WHAT IS IT?
Add-On for OpenShift for running stateful apps
Highly scalable, production-grade persistent storage
● For stateful applications running in Red Hat®
OpenShift
● Optimized for Red Hat OpenShift Infrastructure services
● Developed, released and deployed in sync with Red Hat OpenShift
● Full stack supported by a single vendor, Red Hat
● Complete persistent storage fabric across hybrid cloud for OCP
Complexity. Cost. Scale.
WHY IS STORAGE IMPORTANT FOR CONTAINERS?
Top five challenges with
container adoption
1. Persistent storage
2. Data management
3. Multi-cloud or cross-data
center
4. Networking
5. Scalability
WHY DO YOU NEED PERSISTENT STORAGE?
For infrastructure and stateful applications
Diagram: OCP Infrastructure services (Registry, Metrics/Prometheus, Logging) and OCP Application services (Service 1, Service 2) consume RWX/RWO volumes backed by File, Block and S3, alongside Local/Ephemeral storage.
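Applications request such storage through an ordinary PersistentVolumeClaim. A sketch of an RWX (shared) claim, assuming a file-backed storage class is available; the class name shown is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany              # RWX: mountable by many pods at once
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs  # illustrative class name
```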
Architecture
RED HAT OPENSHIFT CONTAINER STORAGE
aka RHOCS or OCS, v4.2 Technology Stack
● Orchestrator for Ceph storage services in OpenShift
● Responsible for simplifying and automating the storage lifecycle
● Fully compliant with the new Kubernetes CSI storage standard
● Multiprotocol storage offering Block, File and Object interfaces
● Self-healing, self-managing and rock-solid technology
● Scale-up and scale-out: performance and capacity at scale
● Multi Cloud Gateway enables S3 federation
● Provides elastic S3 data placement and improves security
● Multi-cloud, hybrid-cloud, multi-site buckets
ROOK ARCHITECTURE
Operator in OpenShift
Diagram: the Rook Operator for Ceph drives the Kubernetes API, creating new objects (Storage Clusters, Storage Pools, Object Store, File Store) on top of native objects (Deployments, DaemonSets, Pods, Services, StorageClass/PV/PVC, ClusterRole, Namespace, Config Maps). The Operator manages the Ceph cluster pods, while the Ceph CSI driver handles attach/mount and snapshots for client pods (RBD/CephFS clients).
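The "new objects" above are custom resources. A heavily trimmed sketch of a Rook CephCluster; the values are illustrative and omit required details such as the Ceph image version, so treat it as an outline rather than a deployable spec:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3                  # monitor quorum
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: true       # consume every empty disk on every node
```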
CEPH COMPONENTS
Storage services
RBD - a reliable, fully distributed block device with cloud platform integration
RGW - a web services gateway for object storage, compatible with S3 and Swift
LIBRADOS - a library allowing apps (C, C++, Java, Python, Ruby) to directly access RADOS
CEPHFS - a distributed file system with POSIX semantics & scale-out metadata
RADOS - a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
NOOBAA
S3 federation with multicloud gateway
Diagram: an app requests storage through a Bucket Claim; NooBaa creates a new bucket and a new account, and the app reads and writes through the S3 gateway.
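The bucket claim in the diagram is the object-storage analogue of a PVC. A sketch of an ObjectBucketClaim, with an illustrative storage class name:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim
spec:
  generateBucketName: my-app-    # a random suffix is appended to this prefix
  storageClassName: openshift-storage.noobaa.io  # illustrative class name
```

Binding the claim produces a ConfigMap and Secret with the bucket endpoint and credentials for the app to consume.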
RHOCS ARCHITECTURE
with Operator Lifecycle Manager (OLM)
Diagram: RBD Block, CephFS File and S3 Object interfaces (S3 served by the MCG), all deployed through OLM.
USE CASES
USE CASES AND DATA SERVICES
Support for multiple use cases
Block (Persistent Volume)
● For transactional workloads
● Low latency
● Messaging
● Unpredictable change frequency and profile
Shared File System
● POSIX-compliant shared file system
● Interface for legacy workloads
● CI/CD Pipelines
● AI/ML Data Aggregation/Ingestion
● Messaging
Object Service
● Media, AI/ML training data, Archiving, Backup, Health Records
● Streaming throughput oriented
● Object API (S3/Blob)
COMMON OCP WORKLOADS
Workload Category | Examples | What storage?
etcd | etcd backing store | Local Storage
Registry | OCP registry, Quay | OCS-Object, Quay (TBD)
Logging | ElasticSearch | OCS-block, Local Storage
Metrics | Prometheus, Cassandra + Hawkular | OCS-block, Local Storage
Relational DBs | MySQL, PGSQL | OCS-block, OCS-file
NoSQL DBs | mongoDB, Couchbase | OCS-block, OCS-file
WebApps | Nginx | OCS-file, OCS-Object
Messaging | JBoss AMQ, Kafka | OCS-file, OCS-block, OCS-Object
In Memory/Caching | Redis, JBoss Data Grid | OCS-block
CI/CD | Jenkins, Maven, CircleCI | OCS-block, OCS-file, OCS-Object
(https://access.redhat.com/articles/3403951)
ANATOMY OF A TYPICAL CUSTOMER AI PROJECT
Diagram: a Data Layer (classical HDFS on a physical POSIX FS, or an object storage Data Lake) feeds a Compute Layer (CPU, GPU, FPGA, TPU), topped by a Transformation Layer and a Data Science/ML Layer.
DEMO
SHARING DATA SETS
Advantages
- Performance and capacity at scale
- More efficient data protection with Erasure Coding
- Real data lake experience thanks to multiprotocol support
- S3 buckets enable sharing data sets
- S3A enables a hybrid cloud strategy experience
- Huge ecosystem of partners and community adoption
Diagram: a Shared Data Lake shared by Kafka, Hive/MapReduce, Spark SQL, Spark and Presto compute instances, serving ingest, ETL, batch query & joins, and interactive query workloads under Platinum/Gold/Silver/Bronze SLAs.
SIZING
SIZING GUIDELINES
● MINIMUM NODES - the minimum number of storage nodes is 3
● REPLICA SIZE - replica 3 (Erasure Coding planned for upcoming releases)
● PV SUPPORT - out of the box, OCS 4.2 supports up to 1500 PVs
● ADDITIONAL NODES - each additional node adds support for 500 more PVs
● MAXIMUM NODES - the maximum number of nodes in a cluster is 10
● MAXIMUM PVs - the maximum number of PVs can scale to 5000
● OCS NODE CONFIG - minimum of 16 vCPU and 64 GB RAM
DEMO
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat
Red Hat is the world’s leading provider of
enterprise open source software solutions.
Award-winning support, training, and
consulting services make
Red Hat a trusted adviser to the Fortune 500.
Thank you
Whether you are in the “node-centric” or
“cloud-native” world, or somewhere in
between, automation is key to making
distributed computing (hybrid IT) a reality
Cloud Native
Automation
Mathieu Bornoz
Department Manager
Camptocamp
Benoit Quartier
Data & Infrastructure Engineer, Project leader
Camptocamp
OpenShift Meetup Geneva
Cloud Native Automation
Container Journey
What is Cloud Native?
● Design for the cloud, centered around APIs
● Designed as loosely coupled (micro) services
● Architected with a clean separation of stateful/stateless services
● Twelve-factor app compliant
● Container-based environments
● Isolated from server and operating system
● Managed through agile DevOps processes
● Automation & Infrastructure as Code
We don’t live in a “perfect” world
● “Old” technologies don’t die
● Containers are not the solution for everything
● Twelve-factor app remains a goal (probably a dream in many cases)
● Not all services provide APIs
● Not all companies are able to jump in the cloud
● Internal processes / policies evolve at a slower pace than technology
● Lack of know-how and experience, and of time to acquire them
Let’s start with
automation!
3 ways of managing systems
● Web Interface
○ easy, not flexible, not reproducible
● CLI
○ flexible, not easily reproducible
● As Code
○ flexible, reproducible, enables collaboration
Open Source Automation Pipeline
Infrastructure Technology Stack
installation > operation > observability > deployment > development
Installation as Code
● TOOLS: Terraform, Ansible, Kubernetes, OpenShift (4)
○ Provision networks, security groups, LB
○ Provision VMs and orchestrate
○ Support for multiple cloud providers
● BENEFITS
○ Full control of all cloud resources
○ Reproducibility of environments
○ Easy decommissioning
Configuration as Code
● TOOLS: Two approaches
○ Pull approach (cfgmgmt): CFEngine, Puppet, Chef
○ Push approach (deploy): Ansible, Salt
● BENEFITS
○ No manual management of nodes
○ Reproducibility of nodes
○ Dynamic inventory
○ More confidence in environments and deployments
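A minimal push-style sketch in Ansible; the host group and package are placeholders, the point being that node state is declared in a versioned playbook rather than applied by hand:

```yaml
# site.yml - declaratively converge the "web" host group
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Re-running the playbook is idempotent: it changes nothing on nodes already in the desired state, which is what gives the "more confidence in environments" benefit above.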
Operation as Code
● TOOLS:
○ K8s, Operators
○ Ansible, Puppet
○ Chef InSpec, Serverspec
○ Backups: Bivac, Velero
● BENEFITS
○ Reducing toil
○ Formalize best practices, build confidence
Observability as Code
● TOOLS: Elasticsearch, Prometheus, Grafana, Collectd
○ Gather logs & metrics
○ Processing, enrichment and presentation (dashboards)
○ Alerting
● BENEFITS
○ If you can't measure it, you can't improve it
○ Dashboards + flexible query languages (elasticsearch, prometheus)
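Alerting itself becomes code under version control. A sketch of a Prometheus alerting rule; the metric comes from the standard node exporter, but the threshold and labels are illustrative:

```yaml
groups:
  - name: node-alerts
    rules:
      - alert: HostOutOfDiskSpace
        # fires when less than 10% of a filesystem remains free
        expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10
        for: 10m                 # must hold for 10 minutes, not just spike
        labels:
          severity: warning
        annotations:
          summary: "Less than 10% disk space left on {{ $labels.instance }}"
```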
Deployment as Code
● TOOLS:
○ Packaging with Helm (k8s YAML + templating)
○ Automated deployment process, CI/CD
■ S2I, Jenkins, GitLab, Tekton, Skaffold, Draft
● BENEFITS
○ Explicitly define and code internal processes
○ Pipeline & processes for better quality & security
○ Software factory, Versioning & Rollbacks
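Such a pipeline can be sketched as a minimal .gitlab-ci.yml; the chart path and release name are placeholders, while `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are standard GitLab CI variables:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # build and push an image tagged with the commit, for traceable rollbacks
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  script:
    # roll the Helm release forward to the freshly built tag
    - helm upgrade --install my-app ./chart --set image.tag="$CI_COMMIT_SHORT_SHA"
  environment: production
```

Tagging images by commit SHA is what makes the "Versioning & Rollbacks" bullet practical: rolling back is redeploying a previous tag.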
Developer experience
● TOOLS:
○ Docker, docker-compose
○ Code editor extensions
○ Minikube, Minishift
○ Red Hat CodeReady / Container Development Kit
● What changes for developers on a Kubernetes platform:
○ More control & responsibilities (Docker image, Helm chart)
○ Often more difficult to set up a local development environment
○ More YAML...
Current drawback
And what about OpenShift?
Conclusion
● In Kubernetes / OpenShift, automating installation, operation, observability
& deployment is key to enjoying the benefits of the platform.
● Development best practices (code review, versioning, automated testing)
can and should be applied to system operation.
● It is necessary to build a “DevOps” organization and define processes to
sustain automation.
● Training developers and system engineers is mandatory.
Questions?
Win a prize!
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat
Red Hat is the world’s leading provider of enterprise
open source software solutions. Award-winning support,
training, and consulting services make Red Hat a trusted
adviser to the Fortune 500.
Thank you
Take the survey, thanks!
See you soon at our next meetup!

Meetup OpenShift Geneva 03/10
