
What's new in OpenShift Container Platform 3.10

Strengthening capabilities for intelligent, compute-intensive applications
In this release, we’ve built on prior work and strengthened our capabilities for running computationally intensive workloads such as artificial intelligence, machine learning, animation, and financial services and payments applications. These workloads typically require more explicit access to and management of computing resources, and we’ve worked to build those capabilities within upstream Kubernetes and in OpenShift through generally available features such as:

Device Manager plugin support for vendors to easily register devices such as GPUs or FPGAs for performance-sensitive workloads within Kubernetes. We’ve worked with key partners such as NVIDIA in bringing this feature to Kubernetes for GPUs; we’ve got more detail on how to use this feature in OCP 3.10 with NVIDIA GPUs here.
CPU management for groups of compute resources, and tying specific workloads to those groups – designed for optimizing the performance of applications that need maximum CPU time.
Hugepages support to help manage hardware for applications with high memory requirements. While this capability is still beta in upstream Kubernetes (as of v1.11), it’s generally available and fully-supported in the latest release of OpenShift.
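As an illustration of the Device Manager feature above, a workload consumes a plugin-registered device by requesting the extended resource the vendor plugin advertises. A minimal sketch, assuming NVIDIA's device plugin is installed on the node (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                 # hypothetical pod name
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda:9.0-base      # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1            # extended resource advertised by the NVIDIA device plugin
```

The scheduler only places the pod on a node whose device plugin reports free GPUs, and the kubelet wires the allocated device into the container.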
To read about these capabilities in more detail, head over to our blog post on redhat.com.

https://blog.openshift.com/red-hat-openshift-container-platform-3-10-is-now-available-for-download/


  1. What's New in Red Hat OpenShift Container Platform 3.10 (Product Management Team, July 2018)
  2. Purpose of Presentations
     ● "OpenShift Roadmap Update: What's Next"
       ○ A look ahead over the next 6-12+ months
       ○ Focused on major OpenShift features / initiatives
       ○ Updated quarterly (goal) and subject to change
       ○ Useful for customers who want a general OpenShift roadmap update
     ● "OpenShift Roadmap Update: What's New in OpenShift x.y"
       ○ A deep dive into the next OpenShift release
       ○ More details on all the new features coming in the release
       ○ Delivered with each new OpenShift release
       ○ Useful for customers who want a deep dive on the latest OpenShift release
     ● Both of these presentations are OK to use publicly
       ○ Decks will be available in PnT in multiple formats & via Google Slides
       ○ Feel free to use relevant slides, customize and make them your own
       ○ PM roadmap session recordings also available, but for internal use only
  3. OpenShift 3.12 Is Now OpenShift 4
  4. CNS (Container Native Storage) Is Now OCS (OpenShift Container Storage)
  5. OCP 3.10 - The Efficient Cluster
     ● Resource Management
       ○ Descheduler (Tech Preview), CPU Manager, Ephemeral Storage, HugePages
     ● Resilience
       ○ Node Problem Detector, HA egress pods with DNS
     ● Workload Diversity
       ○ Device Manager, Windows Containers (Dev Preview)
     ● Installation Automation
       ○ TLS node bootstrapping, static pods
     ● Security
       ○ Etcd cipher coverage, shared PID namespace options, more secured router
  6. OpenShift Roadmap (Q2 CY2018 - Q1 CY2019)

     OpenShift Container Platform 3.10 (July)
     ● Kubernetes 1.10 and CRI-O option
     ● Istio (Tech Preview)
     ● oc client for developers
     ● TLS bootstrapping
     ● Windows Server Containers (Dev Preview)
     ● Prometheus Metrics and Alerts (Dev Preview)
     ● S3 Svc Broker
     OpenShift Online & Dedicated
     ● Dedicated self-service: RBAC, templates, LB, egress
     ● Dedicated encrypted storage, multi-AZ, Azure beta

     OpenShift Container Platform 3.11 (Sept/Oct)
     ● Kubernetes 1.11 and CRI-O default
     ● Infra monitoring, alerting with SRE intelligence, Node Problem Detector
     ● Etcd and Prometheus (Tech Preview)
     ● Operator Certification Program and Red Hat Fuse Operator
     ● Autoscaler for AWS and P-SAP features
     ● Metering and Chargeback (Tech Preview)
     ● HPA Custom Metric
     ● Tech Preview of OLM
     ● New web console for developers and cluster admins
     ● Ansible Galaxy ASB support
     ● CNV (Tech Preview)
     ● OVN (Dev Preview for Windows)
     ● FIPS and other Security PAGs
     OpenShift Online & Dedicated
     ● OpenShift Online automated updates for OS
     ● Chargeback for OpenShift Online Starter

     OpenShift Container Platform 4.0 (Dec/Jan)
     ● Kubernetes 1.12 and CRI-O default
     ● Converged Platform
     ● Full Stack Automated Installer
       ○ AWS, RHEL, OSP (tentative)
     ● Over the Air Updates
     ● RHCC integrated experience
     ● Windows Containers Tech Preview
     ● Easy/Trackable Evaluations
     ● Red Hat CoreOS Container Linux with Ignition Automations
     ● Cluster Registry
     ● HPA metrics from Prometheus
     OpenShift Online & Dedicated
     ● Cluster Operator driven installs
     ● Self-Service Dedicated User Experience

     OpenShift Container Platform 4.1 (March)
     ● Kubernetes 1.13 and CRI-O default
     ● Full Stack Automation
       ○ GCP, VMware
     ● Istio GA
     ● Mobile 5.x
     ● Serverless (Tech Preview)
     ● RHCC for non-container content
     ● Integrated Quay (Tech Preview)
     ● Idling Controller
     ● Federated Ingress and Workload Policy
     ● OVN GA
     ● Che (Tech Preview)
     OpenShift Online & Dedicated
     ● OpenShift.io on Dedicated (Tech Preview)
  7. [Architecture diagram] OpenShift = Enterprise Kubernetes + Build, Deploy and Manage Containerized Apps: Service Catalog (language runtimes, middleware, databases, ...), self-service, application lifecycle management (CI/CD), build and deployment automation, container orchestration & cluster management (Kubernetes) with networking, storage, security, registry, and logs & metrics, on the OCI container runtime & packaging atop Atomic Host / Red Hat Enterprise Linux, with infrastructure automation & Cockpit.
  8. 8. What’s New for 3.10: ● Remove etcd from Automation Broker and move to using CRDs ○ Broker will now use CRDs instead of a local etcd instance ● Make serviceInstance details available to the playbook ○ Exposes the details at runtime of who provisioned a service to the provision and deprovision playbooks ■ Such as OpenShift cluster dns suffix, username, namespace, ServiceInstance id ● Enhance error messages, so when a provision request fails the error is preserved and displayed to end user in web console ○ Allows APB to return custom error messages that gets surfaced by service catalog if a provisioning operation fails ○ Eases troubleshooting and improves customer experience 8 Feature(s): OpenShift Automation (Ansible) Broker Self-Service / UX
  9. AWS Service Broker (Amazon Web Services)
     New AWS services:
     ● Kinesis Data Streams
     ● Key Management Service (KMS)
     ● Lex
     ● Polly
     ● Rekognition
     ● Translate (requires Preview registration)
     ● SageMaker*
     ● Additional RDS engines: Aurora*, MariaDB, & PostgreSQL
     * Coming soon!
  10. Middleware Update

     AMQ
     ● AMQ Streaming
       - Currently in Dev Preview for OCP
       - Uses the operator model
       - Preview docs available
       - Based on the upstream Strimzi project (https://github.com/strimzi/strimzi) and the Apache Kafka project (https://kafka.apache.org)
     ● AMQ Broker
       - Broker 7.1 is a Tech Preview image
       - The GA image will be based on AMQ Broker 7.2, coming in August

     Fuse 7.1
     - Replaces Fabric v1 for all Fuse 6.x customers
     - Added a Fuse on EAP image to the existing Spring Boot and Karaf options
     - Prometheus-based reporting for metrics (Tech Preview)
     - JVM memory optimizations for all Fuse images
     - Centralized Hawtio console for integration-specific monitoring of all Fuse-based applications
     - Scalable XA Transaction Support (Tech Preview)

     Data Grid 7.2 (May release)
     - Dynamic cache creation: create caches at runtime, through Hot Rod or in library mode, without having to restart or interrupt the service
     - Native support for JSON documents; the server converts JSON to protobuf objects and supports indexing and querying
     - Full-text search in client-server mode
     - Ceph as cache storage
     - Off-heap storage: higher data density per node; smaller clusters can store more data outside of the JVM
     - Auto-resolve for data consistency: automatic conflict resolution during network partitions
     - Client parity for C++/C# with Java
     - New data structures: Multimap and Distributed Counters

     3scale (June release)
     - Improved billing and multi-tenancy to bring the on-premises version on par with the hosted service
     - APIcast out-of-box policies and a new modular APIcast plugin system
     - Mutual TLS between the APIcast gateway and service backends
     - Oracle Database support

     PAM 7.0
     - Container images for Process Modeling, Controller, Process Administration, Process Application, Smart Router, and Process Execution are all supported on OpenShift
     - Scales each stage as demand requires
     - Execute, administer, and consume business rules in the cloud
     - Controller support to allow for failovers

     JWS 5 (July release)
     - Based on Tomcat 9.0.7
     - HTTP/2 support
     - Servlet 4.0 specification
     - OpenSSL for TLS with JSSE connectors (NIO and NIO2)
     - NIO connector, which is the default for HTTP/1.1 when tomcat-native is installed
     - TLS virtual hosting (SNI) support
     - Support for embedded distributions (fat JAR deployments)
     - Asynchronous support for NIO2
     - Transaction processing through Narayana and DBCP2 (Tech Preview)
     - JAX-RS through Apache CXF (will be in SP1)

     RHOAR Node.js
     - S2I images (v10 now available, also support for v8)
     - Single image-stream definition between RHOAR (10, 8) and RHSCL (8, 6, 4)
     - Not included in OCP 3.10 by default
     - Node core distro to be delivered only through RHOAR, no standalone SKU
     - Evaluating NPM modules for future support, with focus on microservice development and deployment concerns

     Link to more in-depth slides.
  11. Self-Service / UX
     Feature(s): Improved search within catalog
     Description: Show "top 5" results
     How it Works:
     ● Weighting is given based on where the match is found
     ● Factors include: title, description, tagging
  12. Self-Service / UX
     Feature(s): User chooses route for application
     Description: Need a better way to show routes for an app
     How it Works:
     ● Indication that there are multiple routes
     ● Annotate the route that you'd like to be primary:
       console.alpha.openshift.io/overview-app-route: 'true'
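As a sketch, the overview-app-route annotation sits in the route's metadata (the route and service names here are illustrative):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-public        # illustrative route name
  annotations:
    console.alpha.openshift.io/overview-app-route: 'true'   # mark this as the primary route on the overview
spec:
  to:
    kind: Service
    name: myapp             # illustrative service name
```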
  13. Self-Service / UX
     Feature(s): Create generic secrets
     Description: Allow users a way to create opaque secrets
     How it Works:
     ● Users can already create secrets; this expands that to opaque secrets
     ● Behaves like creating config maps
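A minimal sketch of the kind of opaque (generic) secret this feature creates, equivalent to `oc create secret generic` (the secret name and key are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-generic-secret   # illustrative name
type: Opaque                # "opaque" secrets carry arbitrary key/value data
stringData:                 # stringData lets you supply plain-text values; the API stores them base64-encoded
  api-key: s3cr3t
```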
  14. Self-Service / UX
     Miscellaneous UI: admin console progression
     ● 3.10 does NOT include the admin console based on the Tectonic web console
     ● 3.11 will introduce the admin console alongside the existing console
     ● 4.0 will converge onto a single console code base
  15. Self-Service / UX
     Feature(s): Service Catalog CLI
     Description: Provision and bind services from the command line
     How it Works:
     ● Full set of commands to list, describe, provision/deprovision and bind/unbind
     ● Based on a contribution from Azure
     ● Separate CLI, shipped as part of an RPM

     Example:
     $ svcat provision postgresql-instance --class rh-postgresql-apb --plan dev \
         --params-json '{"postgresql_database":"admin","postgresql_password":"admin","postgresql_user":"admin","postgresql_version":"9.6"}' \
         -n szh-project
     Name:       postgresql-instance
     Namespace:  szh-project
     Status:
     Class:      rh-postgresql-apb
     Plan:       dev
     Parameters:
       postgresql_database: admin
       postgresql_password: admin
       postgresql_user: admin
       postgresql_version: "9.6"
  16. Self-Service / UX
     Miscellaneous Service Catalog
     ● Rename bind credential secret keys
     ● Improvements in the reconciliation process: optimizations and removal of false "failed to provision" messages
     ● Flexible secret management (add, remove, change)
  17. DevExp / Builds
     Feature(s): Jenkins items
     ● Sync removal of build jobs; this allows cleanup of old/stale jobs
     ● Jenkins updated to 2.107.3-1.1
     ● Updated Jenkins build agent (slave) images: Node.js 8, Maven 3.5
  18. Dev Tools - Local Dev
     CDK 3.4:
     ● OpenShift Container Platform v3.9.14
     ● Image caching is enabled by default
     ● Hyper-V users can assign a static IP to CDK
     ● Hostfolder mount using SSHFS (Tech Preview)
     ● Uses overlay as the default storage driver
     Minishift 1.21 / CDK 3.5 (17-JUL-2018):
     ● Native hypervisor (Hyper-V/xhyve/KVM) or VirtualBox
     ● Run CDK against an existing RHEL 7 host
     ● SSHFS is the default technology for hostfolder share
     ● Local DNS server to reduce the dependency on nip.io
     ● Users will be able to use OCP 3.10
  19. Operator SDK
     Feature(s): Dev tools to build Kubernetes applications
     Description: Help customers/ISVs build and publish Kubernetes applications that run like cloud services, anywhere OpenShift runs. Embed unique operational knowledge; package and install on OCP clusters.
     How it Works:
     ● Includes all scaffolding code; you only need to build the logic specific to your app
     ● Tool to publish and use on multiple clusters
     ● Supports Helm charts, Ansible playbooks, or Go code
  20. [Architecture diagram, repeated as a section divider] OpenShift = Enterprise Kubernetes + Build, Deploy and Manage Containerized Apps.
  21. [Diagram] Clustered Container Infrastructure: applications run across multiple containers & hosts, with container orchestration & cluster management (Kubernetes), networking, storage, security, registry, and logs & metrics, on Atomic Host / Red Hat Enterprise Linux with the OCI container runtime & packaging.
  22. Container Orchestration
     Feature(s): Kubernetes Upstream (Red Hat blog and Commons webinar)
     Description: OpenShift 3.10 brings enhancements in how efficiently you can leverage the resources available from the nodes across the cluster. From ephemeral storage, CPU, memory pages, IP addresses, and other resources available to the cluster, OpenShift 3.10 more efficiently brings nodes into the cluster and exposes their resources to application services.
     Red Hat contributing projects:
     ● API Aggregation
     ● CronJobs stabilizing
     ● Control API access from nodes
     ● PSP stabilizing
     ● Configurable pod resolv.conf
     ● Kubelet ComponentConfig API
     ● Mount namespace propagation
     ● PV handling with deleted pods and orphaned binds
     ● Ephemeral storage handling
     ● CRD subresources and categories
     ● Container Storage Interface
     ● Kubectl extension handling
     Status of Kube 1.10 upstream features in OpenShift 3.10: https://docs.google.com/spreadsheets/d/1xdjfFVyoUaDgZXak4OHA90wq_bNIKrrc7U2xr8fKXEU/edit?usp=sharing
  23. Docs Localization
     Feature(s): Localization of documentation for the Japanese market
     Description: We are localizing the documentation for the APAC market, starting with Japanese, in priority order.
     https://access.redhat.com/documentation/ja-jp/openshift_container_platform/3.9/html-single/installation_and_configuration/
  24. Performance Pods
     Feature(s): HugePages, CPU Manager, Device Manager
     Description: We spoke about Device Manager earlier. The CPU Manager policy lets you tell kube that your workload requires affinity to a CPU core; perhaps your workload needs CPU cache affinity and can't handle being bounced around to different CPU cores on the node by normal fair-share scheduling on Linux. HugePages allows you to request that your workload consume a specific amount of HugePages.
     How it Works: CPU Manager is a flag on the kubelet with the options none or static. Static gives guaranteed-QoS pods access to exclusive CPU cores on the node. HugePages is a flag you set to true on the master and kubelet; the nodes will then report whether any HugePages are available, and workloads can request them via the pod definition.

     CPU Manager policy:
     # cat /etc/origin/node/node-config.yaml
     ...
     kubeletArguments:
       ...
       feature-gates:
       - CPUManager=true
       cpu-manager-policy:
       - static
       cpu-manager-reconcile-period:
       - 5s
       kube-reserved:
       - cpu=500m

     Pod spec (guaranteed QoS):
     resources:
       requests:
         cpu: 1
         memory: 256Mi
       limits:
         cpu: 1
         memory: 256Mi

     Result:
     # oc exec pod-name -- cat /sys/fs/cgroup/cpuset/cpuset.cpus
     2
     # oc exec pod-name -- grep ^Cpus_allowed_list /proc/1/status
     Cpus_allowed_list: 2

     HugePages:
     # cat /etc/origin/master/master-config.yaml
     ...
     kubernetesMasterConfig:
       apiServerArguments:
         ...
         feature-gates:
         - HugePages=true

     # cat /etc/origin/node/node-config.yaml
     ...
     kubeletArguments:
       ...
       feature-gates:
       - HugePages=true

     Pod spec:
     resources:
       limits:
         hugepages-2Mi: 100Mi
     (Both the variable name and value are configurable.)
  25. Node (Tech Preview)
     Feature(s): Node Problem Detector
     Description: A daemon that runs on each node as a daemonSet. The daemon tries to make the cluster aware of node-level faults that should make the node unschedulable.
     How it Works: When you start the node problem detector, you tell it a port over which to broadcast the issues it finds. The detector allows you to load sub-daemons to do the data collection; there are three as of today. Issues found by the problem daemons can be classified as "NodeCondition" (stop scheduling to the node) or "Event" (informative only).
     Problem daemons:
     ● Kernel Monitor: monitors the kernel log via journald and reports problems according to regex patterns
     ● AbrtAdaptor: monitors the node for kernel problems and application crashes from journald
     ● CustomPluginMonitor: allows you to test for any condition and exit with 0 or 1 should your condition not be met
  26. Node (Tech Preview)
     Feature(s): Protection of local ephemeral storage
     Description: Control the usage of local ephemeral storage on the nodes, in order to prevent users from exhausting all node-local storage with their pods and starving other pods that happen to be on the same node.
     How it Works: After turning on LocalStorageCapacityIsolation, submitted pods can use the request and limit fields; violations result in an evicted pod.
     Request: the ephemeral storage requested when scheduling a container to a node; the requested ephemeral storage is fenced off on the chosen node for the use of the container.
     Limit: a hard cap on the ephemeral storage that can be allocated across all the processes in a container.
     1. Master: /etc/origin/master/master-config.yaml
        kubernetesMasterConfig:
          apiServerArguments:
            feature-gates:
            - LocalStorageCapacityIsolation=true
          controllerArguments:
            feature-gates:
            - LocalStorageCapacityIsolation=true
     2. Node: /etc/origin/node/node-config.yaml
        kubeletArguments:
          feature-gates:
          - LocalStorageCapacityIsolation=true
     3. Launch pods with the following in their deploymentConfig:
        resources:
          requests:
            ephemeral-storage: 500Mi
          limits:
            ephemeral-storage: 1Gi
  27. Node (Tech Preview)
     Feature(s): Descheduler
     Description: Because a scheduler's view of the cluster is a single point in time, the overall cluster balance can become skewed by taints and tolerations, evictions, affinities, and other lifecycle events such as node maintenance or new node additions. As a result, some nodes can become under- or over-utilized.
     How it Works: The descheduler is a job running in a pod in the kube-system project. It finds pods based on its policy and evicts them in order to give them back to the scheduler for replacement on the cluster. It does not target static pods, pods with high QoS, daemonSets, or pods with local storage.
     Available policies:
     ● RemoveDuplicates: looks for pods that are part of the same replicaSet or deployment but happen to have been placed on the same node, and evicts the duplicates in the hope the scheduler will place them on a different node.
     ● LowNodeUtilization: finds nodes that are under the CPU, memory, and pod-count thresholds you have set, and evicts pods from other nodes in the hope the scheduler places them on these underutilized nodes. There is also a setting to only trigger this if you have more than X underutilized nodes.
     ● RemovePodsViolatingInterPodAntiAffinity and RemovePodsViolatingNodeAffinity: re-evaluate pods that might have been forced to break their affinity rules and evict them for another chance to be placed on nodes that comply with their affinity or anti-affinity.
  28. Node (Dev Preview)
     Feature(s): Windows Containers
     Description: Run Windows containers on Windows Server 1709, 1803, and 2019 within an OpenShift cluster.
     How it Works: A joint partnership between Microsoft and Red Hat. Microsoft will distribute and support, through our joint co-located support process, the kubelet, configuration/installation, and networking components that need to be installed on Windows. Red Hat will support the interaction of those components with the OpenShift cluster. Customers and partners can sign up for the developer preview program here. The program will start within the next 7 days; it has been delayed due to technical difficulties.
     Provided in the developer preview:
     1. PowerShell script to satisfy container prerequisites on Windows Server
     2. Installation process that allows you to install on one to many nodes without deploying an overlay network
     3. Ansible playbooks to deploy and configure an experimental OVN network on the OpenShift cluster
     4. Ansible playbooks to deploy and configure an experimental OVN network from CloudBase on Windows Server, and to then connect that Windows node to the OpenShift cluster
     Features in the first drop:
     1. kubelet and prerequisites (docker, networking plugins, etc.)
     2. Join a Windows node to the OpenShift cluster
     3. Allow Windows access to certain projects (nodeSelector or taints & tolerations)
     4. Work with templates in the Service Catalog
     5. Attach static storage to the container
     6. Scale the Windows container up and down
     7. DNS-resolvable URL for service to route object
     8. East/west network connectivity to Linux pods
     9. Delete a Windows container
     Video of it WORKING!!!
  29. Quay Enterprise
     Note: Red Hat Quay releases are not tied to OpenShift releases (standalone product).
     Releases shipped since Apr '18: 2.9.0, 2.9.1, 2.9.2; release notes: https://coreos.com/quay-enterprise/releases/
     Features since Quay 2.9.0:
     ● Support for custom query parameters on OIDC endpoints
     ● Search query improvements (page length and number of pages)
     ● Support for browser notifications
     ● Cleanup of expired app tokens
     ● Collaborator views under organizations
     What's next:
     ● OCI distribution spec v2_2
     ● Quay v3 (rebranded, RHEL base image)
     ● Clair v3
     ● Operator repo support
     ● Open sourcing Quay
     New Red Hat SKUs will be available Sep 1st for both RH and Partners.
  30. Registry
     Feature(s): Expose registry metrics with OpenShift auth
     Description: The registry metrics endpoint is now protected by built-in OpenShift auth.
     How it Works:
     ● The registry provides an endpoint for Prometheus metrics
     ● The route must be enabled
     ● Users with the appropriate role can access metrics using their OpenShift credentials
     ● An admin-defined shared secret can still be used to access the metrics as well
  31. Installation
     Feature(s): Run control plane as static pods
     Description: Migrate the control plane to static pods to leverage self-management of cluster components and minimize direct host management.
     How it Works:
     ● In 3.10 and newer, control plane components (etcd, API server, and controller manager) move to running as static pods
       ○ The goal is to reduce node-level configuration in preparation for automated cluster configuration on immutable infrastructure
       ○ Unified control plane deployment methods across Atomic Host and RHEL; everything runs atop the kubelet
     ● The standard upgrade process will migrate existing clusters automatically
  32. Installation
     Feature(s): Bootstrapped Node Configuration
     Description: Node configuration is now managed via API objects and synchronized to nodes.
     How it Works:
     ● In 3.10 and newer, all members of the [nodes] inventory group must be assigned an openshift_node_group_name (the value is used to select the configmap that configures each node)
     ● By default, five configmaps are created: node-config-master, node-config-infra, node-config-compute, node-config-master-infra, and node-config-all-in-one
       ○ The last two place a node into multiple roles
       ○ Note: configmaps are the authoritative definition of node labels; the old openshift_node_labels value is effectively ignored
     ● If you want to deviate from the default configuration, you must define the entire openshift_node_groups dictionary in your inventory. When using an INI-based inventory it must be translated into a Python dictionary.
     ● The upgrade process will now block until you have the required configmaps in the openshift-node namespace
       ○ Either accept the defaults or define openshift_node_groups to meet your needs, then run playbooks/openshift-master/openshift_node_group.yml to create the configmaps
       ○ Review the configmaps carefully to ensure that all desired configuration items are set, then restart the upgrade
     ● Changes to these configmaps propagate to all nodes within 5 minutes, overwriting /etc/origin/node/node-config.yaml
     Image reference: https://medium.com/@toddrosner/kubernetes-tls-bootstrapping-cf203776abc7
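A sketch of what a custom node group definition might look like in a YAML inventory, assuming the openshift_node_groups structure used by openshift-ansible 3.10 (the group name and label value are illustrative):

```yaml
openshift_node_groups:
  - name: node-config-compute-custom       # illustrative custom group name
    labels:
      - 'node-role.kubernetes.io/compute=true'
    edits: []                              # optional list of node-config.yaml edits for this group
```

Hosts in the [nodes] group would then select it with openshift_node_group_name=node-config-compute-custom.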
  33. Networking
     Feature(s): HA setup for egress pods
     Description: In the first z-stream release of 3.10, egress pods can have HA failover across secondary cluster nodes in the event the primary node goes down.
     How it Works: Namespaces are now allowed to have multiple egress IPs, hosted on different nodes, so that if the primary node fails the namespace switches from its primary egress IP to a secondary egress IP hosted on another node. When the original node eventually comes back, traffic switches back to the original egress IP. It currently takes ≤7 seconds for a node to notice that an egress node has gone down (potentially configurable in a later version).
     [Diagram: Namespace A with pods on Node 1 (Egress IP 1) and Node 2 (Egress IP 2), reaching an external service that whitelists IP1 and IP2]
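A sketch of the corresponding configuration, assuming the NetNamespace/HostSubnet egressIPs fields used by the OpenShift 3.x SDN (the addresses and node name are illustrative). These fragments would be merged into the existing objects, e.g. with `oc patch`:

```yaml
# Fragment merged into the project's NetNamespace: list of egress IPs, primary first.
egressIPs:
  - 192.168.1.100   # primary egress IP (illustrative)
  - 192.168.1.101   # failover egress IP (illustrative)
---
# Fragment merged into each hosting node's HostSubnet: IPs that node may host.
egressIPs:
  - 192.168.1.100
```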
  34. Networking
     Feature(s): Allow DNS names for egress routers
     Description: The egress router can now refer to an external service, with a potentially unstable IP address, by its hostname.
     How it Works: The OpenShift egress router runs a service that redirects egress pod traffic to one or more specified remote servers, using a pre-defined source IP address that can be whitelisted on the remote server. Its EGRESS_DESTINATION can now specify the remote server by FQDN:
     ...
     - name: EGRESS_DESTINATION
       value: |
         80 tcp my.example.com
         8080 tcp 5.6.7.8 80
         8443 tcp your.example.com 443
         13.14.15.16
     ...
     [Diagram: pods send traffic to the egress service (INTERNAL-IP:8080) backed by an egress router pod with source IP1, which is whitelisted by the external service]
  35. Networking
     Feature(s): Document and test a supported way of expanding the serviceNetwork
     Description: Provide a supported way of growing the service network address range in a multi-node environment to a larger address space, for example from serviceNetworkCIDR: 172.30.0.0/24 to 172.30.0.0/16. Note: this DOES NOT cover migration to a different range, JUST the increase of an existing range.
     How it Works:
     1. Update master-config.yaml to change the serviceNetworkCIDR to 172.30.0.0/16
     2. Delete the default clusternetwork object on the master: # oc delete clusternetwork default
     3. Restart the master API service and the controller service
     4. Update the Ansible inventory file to match the change in step 1 and redeploy the cluster
     5. Evacuate the nodes one by one and restart the iptables and atomic-openshift-node services
  36. Networking (Tech Preview)
     Feature(s): Improved OCP+OSP integration with Kuryr
     Description: Provide best-practice out-of-the-box OCP+OSP integration. GA in OSP 13; Tech Preview in OCP 3.10. Enabling technology: Kuryr.
     Benefits provided:
     ● Removes double encapsulation
     ● Direct use of rich shared services provided by the underlying OSP cloud:
       ○ LBaaS, FWaaS, DNSaaS, ...
       ○ Immediate compliance with Neutron plugins
     ● OSP's tenant isolation becomes directly effective on OpenShift as well
     ● Bare metal provisioning and management via Ironic
  37. Portfolio Integration
     Feature(s): Reference Architecture Implementation Guides
     Release: ocpsupplemental-3.10 (4-6 weeks after 3.10 GA)
     Description: Reference Architecture Implementation Guides will now be part of the OpenShift product documentation (https://docs.openshift.com).
     Existing guides cover deploying:
     ● OpenShift 3.9 on Red Hat OpenStack Platform 10 (RHOSP)
     ● OpenShift 3.9 on Amazon Web Services (AWS)
     ● OpenShift 3.9 on Microsoft Azure
     ● OpenShift 3.9 on VMware vSphere
     ● OpenShift 3.9 on Red Hat Virtualization 4 (RHV)
     ● OpenShift 3.9 on Google Cloud Platform (GCP)
     Blog: OpenShift Container Platform Reference Architecture Implementation Guides
  38. Security
     Feature(s): Specify whitelist cipher suites for etcd
     Description: Users can now optionally whitelist cipher suites for use with etcd in order to meet security policies.
     How it Works:
     ● Configure etcd with the --cipher-suites flag set to the desired cipher suites
     ● Restart etcd, the API server, controllers, etc.
     ● The TLS handshake fails when a client hello arrives with invalid cipher suites
     ● If the list is empty, Go auto-populates it
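A sketch of what the configuration might look like, assuming etcd picks up the --cipher-suites flag via the corresponding ETCD_CIPHER_SUITES environment variable in /etc/etcd/etcd.conf (the two suites listed are only examples; use whatever your security policy requires):

```ini
# /etc/etcd/etcd.conf (fragment) -- comma-separated whitelist of TLS cipher suites
ETCD_CIPHER_SUITES="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
```

After editing, restart etcd and the control plane components as described above.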
  39. Security (Tech Preview)
     Feature(s): Control sharing the PID namespace between containers
     Description: Use this feature to configure cooperating containers in a pod, such as a log handler sidecar container, or to troubleshoot container images that don't include debugging utilities like a shell.
     How it Works:
     ● The feature gate PodShareProcessNamespace is set to false by default
     ● Set 'feature-gates=PodShareProcessNamespace=true' on the apiserver, controllers and kubelet
     ● Restart the apiserver, controller and node services
     ● Create a pod with "shareProcessNamespace: true" in its spec: oc create -f <pod spec file>
     Caveats: when the PID namespace is shared between containers,
     ● Sidecar containers are not isolated
     ● Environment variables are visible to all other processes
     ● Any "kill all" semantics used within the process are broken
     ● Exec processes from other containers will show up
     Example: pods/share-process-namespace.yaml
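The steps above can be sketched with a pod spec like this (the pod, container, and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: share-pid-demo                       # illustrative pod name
spec:
  shareProcessNamespace: true                # all containers in this pod share one PID namespace
  containers:
  - name: app
    image: registry.example.com/myapp:latest        # illustrative image
  - name: debug-sidecar
    image: registry.example.com/tools:latest        # illustrative image; can see and signal app's processes
```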
  40. Security
     Feature(s): Router service account no longer needs access to secrets
     Description: The router service account no longer needs permission to read all secrets. This improves security: previously, if the router were compromised, it could read all of the most sensitive data in the cluster.
     How it Works:
     ● When you create an ingress object, a corresponding route object is created
     ● If an ingress object is modified, a changed secret takes effect soon after
     ● If an ingress object is deleted, the route that was created for it is deleted
     [Diagram: external traffic enters via the router pod, which forwards internal traffic to service-backed pods]
  41. Storage (Tech Preview)
     Feature(s): Container Storage Interface (CSI)
     Description: Introduces the CSI sub-system as Tech Preview in 3.10:
     ● External attacher
     ● External provisioner
     ● Driver registrar
     ● CSI drivers shipped: none (use external/upstream drivers)
     How it Works:
     ● Create a new project where the CSI components will run, and a new service account that will run the components
     ● Create the Deployment with the external CSI attacher and provisioner, and a DaemonSet with the CSI driver
     ● Create a StorageClass for the new storage entity
     ● Create a PVC with the new StorageClass
     ● See: https://github.com/openshift/openshift-docs/blob/master/install_config/persistent_storage/persistent_storage_csi.adoc
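A minimal sketch of the last two steps: a StorageClass bound to a CSI driver, plus a PVC that uses it. The driver name csi-driver.example.com and the object names are illustrative placeholders for whatever external driver you deploy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc                         # illustrative name
provisioner: csi-driver.example.com    # must match the name the CSI driver registers
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-claim                      # illustrative name
spec:
  storageClassName: csi-sc
  accessModes: [ ReadWriteOnce ]
  resources:
    requests:
      storage: 1Gi
```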
42. Feature(s): New storage provisioners (Storage, Tech Preview)
Description: New external storage provisioners added as Tech Preview with 3.10:
● CephFS
How it works:
● Use the OpenShift Ansible installer's openshift_provisioners role
● Set the provisioner to be installed and started to true
After the provisioner install and startup is completed:
● Create a StorageClass for the storage entity
● Create a pod with a PVC/claim that uses the StorageClass
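A StorageClass for the external CephFS provisioner might look roughly like this (a sketch based on the upstream external-storage CephFS provisioner; the provisioner name, monitor address, and secret names are assumptions, not taken from the slide):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs            # name used by the upstream external CephFS provisioner
parameters:
  monitors: 192.168.1.10:6789           # illustrative Ceph monitor address
  adminId: admin
  adminSecretName: ceph-secret-admin    # illustrative secret holding the Ceph admin key
  adminSecretNamespace: cephfs-provisioner
```

A pod then references a PVC whose `storageClassName` is `cephfs`, as in the CSI example pattern.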
43. Feature(s): CNS rebranded as RHOCS (Storage)
Red Hat OpenShift Container Storage (RHOCS)
Description: CNS is now OpenShift Container Storage (OCS or RHOCS). As before, there are two deployment modes for RHOCS:
○ Converged mode: on top of OCP (formerly CNS)
○ Independent mode: outside of OCP (formerly CRS)
● Existing SKUs, no change
Why?
● The CNS and CRS terms caused confusion
● "CNS" was too generic and did not convey Red Hat branding
● The new name reflects strong alignment and integration with OpenShift Container Platform
● OpenShift is one complete platform
44. Feature(s): Arbiter volume support in OCS (Storage)
Description: New volume type (2 data replicas + 1 metadata brick):
● An arbiter volume is a 3-way replicated volume where every third brick holds only metadata copies
● Uses less disk space: 3 replicas shrink to roughly 2.1
● Better performance, avoiding the third-write penalty
● Storage optimization when the underlying hardware layer already has redundancy built in
● In a 3-DC/failover deployment, the third DC/domain needs only minimal capacity
● Arbiter nodes can be daisy-chained or dedicated
How it works:
● An arbiter volume can be created using the Heketi CLI or by updating the StorageClass definition
● Heketi CLI: heketi-cli volume create --size=4 --gluster-volumeoptions='user.heketi.arbiter true'
● StorageClass volume options:
○ volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 1024"
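Putting the StorageClass option shown above into a full manifest gives roughly the following (the class name and Heketi `resturl` are illustrative; the `volumeoptions` value is the one from the slide):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-arbiter
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"   # illustrative Heketi endpoint
  volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 1024"
```

Every PVC provisioned through this class is then backed by an arbiter volume rather than a full 3-way replica.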
45. Feature(s): RWO backed by block PVs for OCP apps* (Storage)
Description: OCS RWO backed by block storage will support application workloads in addition to infrastructure workloads:
● Improved stability and characterization of block-backed PVs
● The same OCS cluster can support both file- and block-backed PVs
* Prerequisite: RHEL 7.5.3 (target release date August 14, 2018)
How it works:
● During install, set the options for the appropriate cluster:
○ openshift_storage_glusterfs_block_deploy=true
○ openshift_storage_glusterfs_block_host_vol_create=true
○ openshift_storage_glusterfs_block_host_vol_size=100
○ openshift_storage_glusterfs_block_storageclass=true
● On an existing OCS cluster (with the block provisioner installed):
○ Configure multipath
○ Create a Secret
○ Create a StorageClass with glusterblock as the provisioner for application PV usage
○ Register it with OCP
○ Create PVC claims
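The glusterblock StorageClass step above might look roughly like this (a sketch; the `resturl`, secret name, and namespace are illustrative, and `hacount` is an assumption about the desired block-volume high-availability count):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-block
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://heketi-storage.example.com:8080"   # illustrative Heketi endpoint
  restsecretname: heketi-secret                       # the Secret created in the previous step
  restsecretnamespace: app-storage                    # illustrative namespace
  hacount: "3"                                        # assumption: 3-way HA for the block volume
```

Application PVCs requesting ReadWriteOnce against this class are then backed by gluster-block volumes.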
46. Feature(s): OCS Heketi topology and configuration metrics available from OCP (Storage)
Description: OCS adds metrics for OCS topology, space, and PV consumption, exposed through Prometheus or direct query.
How it works:
● Metrics are available from the PVC and Heketi endpoints
● Users can now see OCS topology, configuration, and status
● Previously, users could only see PV size consumed/allocated
● Use Prometheus or curl to view the added metrics
Metrics added:
● heketi_cluster_count, heketi_device_brick_count
● heketi_device_count, heketi_device_free
● heketi_device_size, heketi_device_used
● heketi_volumes_count, heketi_up, ...
● Plus the 3.9 metrics:
● kubelet_volume_stats_capacity_bytes
● kubelet_volume_stats_inodes
● kubelet_volume_stats_inodes_free
● kubelet_volume_stats_inodes_used
● kubelet_volume_stats_used_bytes, etc.
Example:
heketi_cluster_count 1
# HELP heketi_device_brick_count Number of bricks on device
# TYPE heketi_device_brick_count gauge
heketi_device_brick_count{cluster="d722bd0db2a60a387ba2a41dc426bdd",device="/dev/sdd",hostname="dhcp46-1.lab.eng.blr.redhat.com"} 1
heketi_device_brick_count{cluster="d722bd0db2a60a387ba2a41dc426bdd",device="/dev/sdd",hostname="dhcp46-175.lab.eng
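The output shown above is the standard Prometheus text exposition format, so it can be consumed programmatically as well as via curl. A minimal sketch of parsing it (the sample text and its values are illustrative, modeled on the Heketi metrics listed above):

```python
# Minimal parser for the Prometheus text exposition format, applied to a
# small sample of Heketi-style metrics (values and labels are illustrative).
sample = """\
# HELP heketi_device_brick_count Number of bricks on device
# TYPE heketi_device_brick_count gauge
heketi_cluster_count 1
heketi_device_brick_count{cluster="c1",device="/dev/sdd"} 1
heketi_device_free{cluster="c1",device="/dev/sdd"} 5.36866816e+08
"""

def parse_metrics(text):
    """Return {metric_name: [(labels_str, float_value), ...]}, skipping comment lines."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip HELP/TYPE comments and blanks
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:              # labeled sample: name{labels}
            name, labels = name_part.split("{", 1)
            labels = "{" + labels
        else:                             # unlabeled sample
            name, labels = name_part, ""
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

m = parse_metrics(sample)
print(m["heketi_cluster_count"])          # [('', 1.0)]
```

The same function applied to the real `/metrics` endpoint response would yield the full heketi_* and kubelet_volume_stats_* series listed above.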
47. Metrics: Evolution towards Prometheus
OpenShift Container Platform 3.10
● Prometheus stays in Tech Preview
● Hawkular is still at the core of our metrics/monitoring solution
OpenShift Container Platform 3.11
● The Prometheus Tech Preview evolves into a full cluster monitoring solution (see next slide); this is the GA of Prometheus
● Deprecation of Hawkular announced; Hawkular remains in the product
● Deprecation of the OpenShift provider for CFME announced
OpenShift Container Platform 4.0
● Hawkular completely removed and replaced with a Prometheus-based monitoring solution
48. Metrics, coming next (OCP 3.11)
[Diagram: node-exporter on compute nodes, kube-state-metrics for Kubernetes, and applications all expose /metrics endpoints]
Feature(s):
● Query and plot cluster metrics collected by Prometheus
● Receive notifications from pre-packaged alerts, enabling owners to take corrective action and start troubleshooting problems
● View pre-packaged Grafana dashboards for etcd, cluster state, and many other aspects of cluster health
See what alerting rules and metrics are included, as well as other information about the new monitoring stack: https://pnt.redhat.com/pnt/p-10078455/NDA_OpenShift...oring_FAQ.pdf
49. Trusted container OS: the container host is the container engine
[Diagram: containers running on the OCI container runtime and packaging layer, on top of Atomic Host / Red Hat Enterprise Linux]
50. RHEL 7.5 highlights
OpenShift Container Platform 3.10 is supported on RHEL 7.4, 7.5, and Atomic Host 7.5+
● Atomic Host deprecation notice, as Red Hat CoreOS will be the future immutable host option
○ Atomic supported in 3.10 and 3.11
Storage
● Virtual Data Optimizer (VDO) for dm-level dedupe and compression
● OverlayFS by default for new installs (overlay2)
○ Ensure ftype=1 for 7.3 and earlier
● Devicemapper continues to be supported and available for edge cases around POSIX
● LVM snapshots integrated with the boot loader (boom)
Containers / Atomic
● Docker 1.13
● docker-latest deprecation
● rpm-ostree package overrides
Security
● Unprivileged mount namespaces
● KASLR fully supported and enabled by default
● Ansible remediation for OpenSCAP
● Improved SELinux labeling for cgroups (cgroup_seclabel)
51. Feature(s): CRI-O v1.10
Description: CRI-O is an OCI-compliant implementation of the Kubernetes Container Runtime Interface. By design it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.
CRI-O brings:
● A minimal and secure architecture
● Excellent scale and performance
● The ability to run any OCI/Docker image
● Familiar operational tooling and commands
Improvements include:
● crictl CLI for debugging and troubleshooting
● Podman for image tagging and management
● Installer integration, as a fresh-install-time decision: openshift_use_crio=True
○ Not available for existing cluster upgrades
[Diagram: the kubelet drives CRI-O, which manages storage, images, runC, and CNI networking]
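The install-time decision above is an Ansible inventory variable, so enabling CRI-O on a fresh install looks roughly like this (a sketch; the group name is the standard OpenShift Ansible inventory group, everything else on the host side is illustrative):

```ini
# Fresh-install inventory fragment: use CRI-O as the container runtime.
# Not applicable when upgrading an existing cluster.
[OSEv3:vars]
openshift_use_crio=True
```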
52. Feature: Buildah 1.1 is available in RHEL 7.5 and adds support for multi-stage builds
Description: Buildah is a daemon-less tool for building and modifying OCI/Docker images.
● Preserves existing Dockerfile workflow and instructions
● Allows fine-grained control over image layers, their content, and commits
● Utilities on the container host can optionally be called during the build
● Multi-stage builds via Dockerfiles or native buildah commands
● Shares the underlying image and storage components with CRI-O
[Diagram of the build flow: start from an existing image or from scratch; generate new layers and/or run commands on existing layers; commit storage and generate the image manifest; deliver the image to a local store or a remote OCI/Docker registry]
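A multi-stage Dockerfile of the kind Buildah 1.1 can now build, e.g. with `buildah bud -t myapp .` (the stage layout is the standard multi-stage pattern; the Go toolchain image and file names are illustrative):

```dockerfile
# Build stage: compile inside a full toolchain image
FROM golang:1.10 AS builder
WORKDIR /src
COPY main.go .
RUN go build -o /app main.go

# Final stage: copy only the compiled binary into a minimal image,
# discarding the build-stage layers from the delivered image
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```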
53. Feature: Podman is now available as a Technology Preview
Description: A daemon-less CLI/API for running, managing, and debugging OCI containers and pods.
● Fast and lightweight
● Leverages runC
● Provides a "docker-like" syntax for working with containers
● Remote management API via Varlink
● Provides systemd integration and advanced namespace isolation
54. OpenShift Roadmap
Q2 CY2018: OpenShift Container Platform 3.10 (July)
● Kubernetes 1.10 and CRI-O option
● Istio (Tech Preview)
● oc client for developers
● Golden image tooling and TLS bootstrapping
● Windows Server Containers (Dev Preview)
● Prometheus metrics and alerts (Dev Preview)
● S3 Svc Broker
OpenShift Online & Dedicated
● Dedicated self-service: RBAC, templates, LB, egress
● Dedicated encrypted storage, multi-AZ, Azure beta
Q3 CY2018: OpenShift Container Platform 3.11 (Sept/Oct)
● Kubernetes 1.11 and CRI-O default
● Infra monitoring, alerting with SRE intelligence, Node Problem Detector
● etcd, Prometheus, and Vault Operators (Tech Preview)
● Operator Certification Program and Red Hat Fuse Operator
● Autoscaler for AWS and P-SAP features
● Metering and chargeback (Tech Preview)
● HPA custom metrics
● Tech Preview of OLM
● New web console for developers and cluster admins
● Ansible Galaxy ASB support
● CNV (Tech Preview)
● OVN (Dev Preview for Windows)
● FIPS and other security PAGs
OpenShift Online & Dedicated
● OpenShift Online automated updates for OS
● Chargeback for OpenShift Online Starter
Q4 CY2018: OpenShift Container Platform 4.0 (Dec/Jan)
● Kubernetes 1.12 and CRI-O default
● Converged platform
● Full-stack automated installer
○ AWS, RHEL, OSP (tentative)
● Over-the-air updates
● RHCC integrated experience
● Windows Containers (Tech Preview)
● Easy/trackable evaluations
● Red Hat CoreOS Container Linux with Ignition automations
● Cluster Registry
● HPA metrics from Prometheus
OpenShift Online & Dedicated
● Cluster-Operator-driven installs
● Self-service Dedicated user experience
Q1 CY2019: OpenShift Container Platform 4.1 (March)
● Kubernetes 1.13 and CRI-O default
● Full-stack automation
○ GCP, VMware
● Istio GA
● Mobile 5.x
● Serverless (Tech Preview)
● RHCC for non-container content
● Integrated Quay (Tech Preview)
● Idling controller
● Federated ingress and workload policy
● OVN GA
● Che (Tech Preview)
OpenShift Online & Dedicated
● OpenShift.io on Dedicated (Tech Preview)
55. Questions
