3. IBM Cloud Pak for Integration
Cluster Node Types
[Diagram: an OpenShift cluster with Master and Worker nodes, beside the theoretical minimum IBM Cloud Pak for Integration cluster of one Master node and one combined Worker + Services node]
A Kubernetes Node is a VM or bare-metal machine that is part of a Kubernetes cluster.
OpenShift Nodes
• Master Nodes run the services that control the cluster, including the etcd database that stores the current state of the cluster.
• Worker Nodes run OpenShift Platform Services, Common Services, and Integration Services. If IBM Cloud Pak for Integration is running in a cluster with other workloads, these also run on the worker nodes.
Common Services
The common services run on OpenShift worker nodes. They are grouped into Master, Proxy, and Management services, which can all run on the same node, on separate nodes, or on multiple nodes for HA.
• Master services: IAM, Catalog, Helm/Tiller
• Proxy services: ingress controller for app compatibility with OpenShift
• Management services: Metering, Logging, Monitoring
The minimum theoretical configuration is one Master node and one Worker node. This would not be highly available and is unlikely to have enough CPU to be usable for more than limited demos.
Note: There are other, more obscure node types, such as dedicated etcd nodes and vulnerability advisor nodes, but these are beyond the scope of this presentation.
[Diagram: Logical Architecture, showing OpenShift Platform Services and Common Services layered on the cluster]
4. Logical Architecture
• To make the solution fully highly available, each component must be deployed in an HA topology.
• Master Nodes contain software that uses a quorum paradigm for high availability, so these must be deployed as an odd number of nodes. Typically either 3 or 5 masters are used in an HA cluster, depending on the size of the cluster and the type of load.
• Common services do not require a quorum, so each group of common services needs to be assigned to 2 or more worker nodes for HA.
• Worker Nodes run Integration Services. Depending on the integration services required, 2 or more worker nodes may be needed for HA. (More detail in subsequent slides.)
• A topology that is often used is:
• 3 Master nodes
• 2 Worker nodes for Common Services
• 3(+) Worker nodes for Integration Services
[Diagrams: an IBM Cloud Pak for Integration HA cluster (3 Master nodes, 2 Common Services nodes, 2 Worker nodes) and a minimal HA cluster (3 Master nodes plus 2 combined Services + Worker nodes)]
8. File System Requirements
The following storage providers have been validated across all the components of IBM Cloud Pak for Integration:
• OpenShift Container Storage version 4.x, from version 4.2 or higher
• IBM Cloud Block storage and IBM Cloud File storage
Other storage providers are also recommended for specific components. Refer to the notes for that component in the following table.
9. Integration Component Sizing Reference
For individual integration component sizing, you can refer to the following lab performance benchmarks as guidance, based on your design requirements:
For App Connect Enterprise:
https://www.ibm.com/support/pages/ibm-app-connect-enterprise-v11-performance-reports
For MQ:
https://ibm-messaging.github.io/mqperf/MQ_for_xLinux_V910_Performance.pdf
For Virtual DataPower:
https://www.slideshare.net/ibmdatapower/datapower-api-gateway-performance-benchmarks-135724582
10. Part #2: Installation
• Adding online catalog sources to a cluster
• Mirroring operators to a restricted environment
• IBM Entitled Registry entitlement keys
• Platform Navigator deployment
11. Adding online catalog sources to a cluster
When your cluster is connected to the internet, IBM Cloud Pak for Integration (CP4I) can be installed by adding the IBM Operator Catalog and the IBM Common Services Catalog to your cluster and using the Operator Lifecycle Manager (OLM) to install the operators.
Note: This information only applies to clusters that are connected to the internet.
You must be a cluster administrator to add CatalogSource objects to a cluster.
You can add CatalogSource objects to your cluster using the Red Hat OpenShift web console, or by using the oc command-line tool.
12. Adding online catalog sources to a cluster
To add CatalogSource objects using the OpenShift web console:
1. Add the IBM Common Services operators to the list of installable operators:
Click the plus icon. You see the Import YAML dialog box.
Paste the following resource definition in the dialog box.
Click Create.
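The resource definition itself appeared as an image on the slide. A minimal sketch of the IBM Common Services CatalogSource, assuming the definition documented for this release (the image tag and polling interval are assumptions, so verify them against the current documentation):
# Apply the IBM Common Services catalog source:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: opencloud-operators
  namespace: openshift-marketplace
spec:
  displayName: IBMCS Operators
  publisher: IBM
  sourceType: grpc
  image: docker.io/ibmcom/ibm-common-service-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m
EOF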
13. Adding online catalog sources to a cluster
2. Add the IBM operators to the list of installable operators:
Click the plus icon. You see the Import YAML dialog box.
Paste the following resource definition in the dialog box.
Click Create.
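Again, the definition was shown as an image. A sketch of the IBM Operator Catalog CatalogSource as typically documented (the image tag is an assumption):
# Apply the IBM Operator Catalog source:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  displayName: IBM Operator Catalog
  publisher: IBM
  sourceType: grpc
  image: docker.io/ibmcom/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m
EOF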
15. Adding online catalog sources to a cluster
If you are not using the web console, you can add CatalogSource objects using the CLI:
Copy the resource definitions from above into local files on your computer.
Run oc apply -f <filename> for each resource definition.
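For example (the filenames are illustrative):
# Apply each saved CatalogSource definition:
oc apply -f opencloud-operators-catalogsource.yaml
oc apply -f ibm-operator-catalog-catalogsource.yaml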
16. Mirroring operators to a restricted environment
When your cluster is in a restricted environment that is not connected to the internet, IBM Cloud Pak for Integration (CP4I) can be installed by mirroring the CP4I operators to a registry within the restricted environment. Mirroring is performed using the IBM CASE packages for each operator.
What is a CASE?
There is one CASE package for each component and dependent component of CP4I. The CASE packages contain metadata about each component, including the container images required to deploy the component and information about their dependencies. Each CASE package also contains the required scripts to mirror images to a private registry, and to configure the target cluster to use the private registry as a mirror.
For more information on the CASE packaging format, see https://github.com/IBM/case.
18. Mirroring operators to a restricted environment
After the images are mirrored to the target registry, CatalogSource objects can be added to the cluster for the mirrored operators.
The steps are:
• Prerequisites
• Prepare a Docker registry
• Prepare a bastion host
• Create environment variables for the installer and image inventory
• Download the installer and image inventory
• Log in to OpenShift as a cluster administrator
• Create a Kubernetes namespace
• Mirror the images and configure the cluster
• Create the catalog source
19. Mirroring operators to a restricted environment
Prerequisites
• An OpenShift 4.4 cluster must be installed.
• A Docker registry must be available.
• A bastion server must be configured.
20. Mirroring operators to a restricted environment
Prepare a Docker registry
A local Docker registry is used to store all images in your restricted environment. You must create such a registry and ensure that it meets the following requirements:
• Supports Docker Manifest V2, Schema 2.
• Is accessible from both the bastion server and your OpenShift cluster nodes.
• Has the username and password of a user who can write to the target registry from the bastion host.
• Has the username and password of a user who can read from the target registry from the OpenShift cluster nodes.
• Allows path separators in the image name.
An example of a simple registry is included in Creating a mirror registry for installation in a restricted network in the OpenShift documentation.
Note: The internal Red Hat OpenShift registry is not compliant with Docker Manifest V2, Schema 2, and is therefore not suitable for use as a private registry for restricted environments.
Verify that you:
• Have credentials of a user who can write and create repositories. The bastion host uses these credentials.
• Have credentials of a user who can read all repositories. The OpenShift cluster uses these credentials.
21. Mirroring operators to a restricted environment
Prepare a bastion host
Prepare a bastion host that can access the OpenShift cluster, the local Docker registry, and the internet. The bastion host must be on a Linux x86_64 platform with any operating system that the IBM Cloud Pak CLI and the OpenShift CLI support.
Complete these steps on your bastion node:
• Install OpenSSL version 1.11.1 or higher.
• Install Docker or Podman on the bastion node.
22. Mirroring operators to a restricted environment
To install Docker, run these commands.
To install Podman, see the Podman Installation Instructions.
Example:
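The commands were shown as a screenshot. A sketch for a RHEL/CentOS-based bastion host (the package manager and package name are assumptions; adjust for your distribution):
# Install and start Docker (illustrative):
sudo yum check-update
sudo yum install -y docker
sudo systemctl enable --now docker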
23. Mirroring operators to a restricted environment
1. Install the IBM Cloud Pak CLI. Install the latest version of the binary file for your platform:
a. Download the binary file.
b. Extract the binary file.
c. Run the following commands to modify and move the file.
d. Confirm that cloudctl is installed.
e. Install the oc OpenShift CLI tool, and create a directory that serves as the offline store. The following is an example directory, which is used in the subsequent steps.
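The individual commands were screenshots. A sketch of the equivalent steps (the download URL pattern and the offline directory name are assumptions; the binary is published in the IBM/cloud-pak-cli GitHub releases):
# a/b. Download and extract the Linux x86_64 cloudctl binary:
wget https://github.com/IBM/cloud-pak-cli/releases/latest/download/cloudctl-linux-amd64.tar.gz
tar -xvf cloudctl-linux-amd64.tar.gz
# c. Make the binary executable and move it onto the PATH:
chmod +x cloudctl-linux-amd64
sudo mv cloudctl-linux-amd64 /usr/local/bin/cloudctl
# d. Confirm that cloudctl is installed:
cloudctl version
# e. Create a directory to serve as the offline store (used in later steps):
mkdir -p $HOME/offline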
24. Mirroring operators to a restricted environment
Create environment variables for the installer and image inventory
Create the following environment variables with the installer image name and the image inventory. Using this CASE archive will mirror the whole Cloud Pak for Integration. To mirror part of the Cloud Pak, use the CASE archive and inventory item for an individual component, and repeat the process for each component you want to be available in your restricted environment.
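The variable definitions were shown as an image. A sketch with assumed variable names and a placeholder version (check the CASE repository for the actual archive name and inventory item):
# Environment variables for the top-level CP4I CASE (illustrative):
export OFFLINEDIR=$HOME/offline
export CASE_ARCHIVE=ibm-cp-integration-<version>.tgz
export CASE_INVENTORY_SETUP=cp4iOperatorSetup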
25. Mirroring operators to a restricted environment
CASE files for IBM components can be found in the IBM CASE repository.
The CP4I component CASEs available for mirroring are:
[Table: CASE package names for each CP4I component]
26. Mirroring operators to a restricted environment
Download the installer and image inventory
Download the installer and image inventory to the bastion host. This step downloads the selected CASE file and its dependencies to the local machine. It also produces CSV files listing the images and Helm charts included in each CASE file. The CP4I components do not include any Helm charts.
Note: The CSV files listing the images, combined with your IBM Entitled Registry entitlement key, can be used to download or mirror the images manually for performing security scans before deployment on a cluster.
One CSV file is created for each component and required dependency. After logging your container tool in to the entitled registry using the username cp and your entitlement key, a shell script can be used to process all images from all components:
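The download command was shown as an image. A sketch using cloudctl case save (the repository URL is an assumption based on the public IBM/cloud-pak CASE repository):
# Download the CASE archive and its dependencies to the offline store:
cloudctl case save \
  --case https://github.com/IBM/cloud-pak/raw/master/repo/case/$CASE_ARCHIVE \
  --outputdir $OFFLINEDIR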
27. Mirroring operators to a restricted environment
Log in to the OpenShift cluster as a cluster administrator
The following is an example command to log in to the OpenShift cluster.
Create a Kubernetes namespace
Create an environment variable with a namespace to install into, then create the namespace.
Mirror the images and configure the cluster
Complete these steps to mirror the images and configure your cluster:
Store authentication credentials for the IBM Entitled Registry. See IBM Entitled Registry entitlement keys for how to obtain your entitlement key. After obtaining your entitlement key, run the following command to configure credentials for the IBM Entitled Registry. The command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location.
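The commands themselves were screenshots. A sketch of the equivalents (the cluster URL, namespace name, and the configure-creds-airgap action are assumptions based on the CASE tooling of this era):
# Log in as a cluster administrator (illustrative URL and credentials):
oc login https://api.<cluster-domain>:6443 -u kubeadmin -p <password>
# Create the installation namespace:
export NAMESPACE=cp4i
oc create namespace $NAMESPACE
# Cache IBM Entitled Registry credentials under $HOME/.airgap/secrets:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --namespace $NAMESPACE \
  --args "--registry cp.icr.io --user cp --pass <entitlement-key>"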
28. Mirroring operators to a restricted environment
Create environment variables with the local Docker registry connection information.
Configure a global image pull secret and an ImageContentSourcePolicy.
To enable your disconnected cluster to access images from your private registry, it must be configured to use your private registry as a mirror of the images hosted in the online registries, and to be able to access those images.
This step configures an ImageContentSourcePolicy for the images listed in the component CASEs. See Configuring image registry repository mirroring in the Red Hat OpenShift documentation for more details.
This step also configures the global cluster pull secret to allow the cluster to access the private registry. See Adding the registry to your pull secret in the Red Hat OpenShift documentation for more details.
Note: In OpenShift version 4.4, this step performs a rolling restart of all cluster nodes. The cluster resources might be unavailable until the new ImageContentSourcePolicy and global cluster pull secret are applied.
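A sketch of this step (the variable names and the configure-cluster-airgap action are assumptions based on the CASE tooling):
# Local registry connection details (illustrative values):
export LOCAL_DOCKER_REGISTRY=<registry-host>:<registry-port>
export LOCAL_DOCKER_USER=<username>
export LOCAL_DOCKER_PASSWORD=<password>
# Configure the ImageContentSourcePolicy and the global pull secret:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-cluster-airgap \
  --namespace $NAMESPACE \
  --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_USER --pass $LOCAL_DOCKER_PASSWORD --inputDir $OFFLINEDIR"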
29. Mirroring operators to a restricted environment
Verify that the ImageContentSourcePolicy resource is created.
Verify your cluster node status. After the ImageContentSourcePolicy and global image pull secret are applied, you might see the node status as Ready, Scheduling, or Disabled. Wait until all the nodes show a Ready status.
Configure an authentication secret for the local Docker registry.
Note: This step needs to be done only one time. The command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location.
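A sketch of the verification and credential commands (reusing the configure-creds-airgap action, which is an assumption):
# Verify the ImageContentSourcePolicy and node status:
oc get imageContentSourcePolicy
oc get nodes
# Cache the local registry credentials under $HOME/.airgap/secrets:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action configure-creds-airgap \
  --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_USER --pass $LOCAL_DOCKER_PASSWORD"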
30. Mirroring operators to a restricted environment
Mirror the images to the local registry. This command calls the oc image mirror command to mirror images from the online registry to the private registry.
Note: If you are using an insecure registry, you must also add the local registry to the insecureRegistries list for your cluster.
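A sketch of the mirroring command (the mirror-images action name is an assumption):
# Mirror all images listed in the CASE inventory to the local registry:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action mirror-images \
  --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR"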
31. Mirroring operators to a restricted environment
Create the CatalogSource
CP4I can be installed by adding the CatalogSource for the mirrored operators to your cluster and using OLM to install the operators.
Create a catalog source. This command adds the CatalogSource for the components to your cluster, so the cluster can access them from the private registry.
Verify that the CatalogSource for the common services installer operator is created.
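A sketch of these commands (the install-catalog action name is an assumption):
# Add the CatalogSource objects for the mirrored operators:
cloudctl case launch \
  --case $OFFLINEDIR/$CASE_ARCHIVE \
  --inventory $CASE_INVENTORY_SETUP \
  --action install-catalog \
  --namespace openshift-marketplace \
  --args "--registry $LOCAL_DOCKER_REGISTRY"
# Verify that the common services installer CatalogSource exists:
oc get catalogsource -n openshift-marketplace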
32. IBM Entitled Registry entitlement keys
To run software from the IBM Entitled Registry, you must supply your entitlement key as a Kubernetes pull secret. If you use the secret name ibm-entitlement-key, CP4I operators will automatically use it to pull images from the IBM Entitled Registry.
Obtaining an entitlement key
• Obtain an entitlement key from the IBM Container Library.
• Click Get an entitlement key.
• Copy the entitlement key presented to a safe place for later use.
• (Optional) Verify the validity of the key by logging in to the IBM Entitled Registry using a container tool.
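For example, to verify the key with Docker (cp.icr.io and the username cp are the documented Entitled Registry host and user):
# Log in to the IBM Entitled Registry to confirm the key is valid:
docker login cp.icr.io --username cp --password <entitlement-key>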
33. IBM Entitled Registry entitlement keys
Adding an entitlement key to a namespace
Note: This information applies to clusters using the IBM Entitled Registry only. If you are mirroring the operators to a private registry (for example, in restricted environments), a global pull secret is used for registry access, configured by the mirroring process.
Use standard Kubernetes tools to add a pull secret containing your entitlement key to the installation namespace of your components. You need to create the secret in every namespace in which you want to install CP4I components.
Create a docker-registry secret using the following command. You can also use the kubectl tool instead of the oc tool to create the secret.
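The command was shown as an image. A sketch matching the documented secret name (the namespace is a placeholder):
# Create the pull secret in the target namespace:
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<entitlement-key> \
  --namespace=<namespace>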
34. Platform Navigator deployment
IBM Cloud Pak for Integration can be deployed using the Red Hat OpenShift web console, or the Red Hat OpenShift CLI.
Requirements
You must meet the following dependencies before you deploy the Platform Navigator. A Cluster Administrator should carry out these tasks:
• A project must exist for this instance.
• The IBM Cloud Pak for Integration Platform Navigator operator must be installed, either at cluster scope or in the project you want to deploy the Platform Navigator into. See Installation for more information.
• If you are using the IBM Entitled Registry, a pull secret containing an entitlement key must exist in the namespace. See IBM Entitled Registry entitlement keys.
Before deploying the Platform Navigator, you must install the operator. See Installation.
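A minimal sketch of a PlatformNavigator custom resource for this release (the field values, such as the replica count and storage class, are assumptions; check the release documentation for the required fields):
# Deploy a Platform Navigator instance (illustrative values):
cat <<EOF | oc apply -f -
apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
  name: navigator
  namespace: cp4i
spec:
  license:
    accept: true
  replicas: 3
  version: 2020.2.1
  storage:
    class: <file-storage-class>
EOF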
35. Installing Operator
IBM Cloud Pak for Integration (CP4I) is delivered as operators that are installed and managed using the Operator Lifecycle Manager (OLM) within Red Hat OpenShift. To install CP4I, add the OLM catalog sources for IBM components, install the operators using OLM, and then create a Platform Navigator custom resource.
What is an operator?
An operator extends a Kubernetes cluster by adding and managing additional resource types in the Kubernetes API. This allows for the installation and management of software using standard Kubernetes tools.
Making CP4I operators available to a cluster
To make the CP4I operators available to a cluster, use OLM catalog sources to refer to the location of the CP4I operators.
• The OLM catalog sources for IBM components can be added directly to clusters connected to the internet. Follow the steps in Adding online catalog sources to a cluster.
• For clusters not connected to the internet, the software must be mirrored to a registry within the restricted network. Follow the steps in Mirroring operators to a restricted environment.
36. Installing Operator
Installing the CP4I operators
You can install all of the CP4I operators at once by using the Cloud Pak for Integration operator, or install a subset of operators by selecting and installing only the operators you want to use on your cluster. When installing an operator, OLM automatically installs any required dependencies.
Install CP4I operators using the Red Hat OpenShift OperatorHub, located in the left-hand menu of the OpenShift console under the Operators menu item, or by using the oc command-line tool.
For detailed instructions on how to install an operator, see Adding Operators to a cluster in the Red Hat OpenShift documentation.
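As an example of the CLI route, a sketch of an OLM Subscription for the top-level Cloud Pak for Integration operator (the package name and channel are assumptions; confirm them in the OperatorHub entry):
# Subscribe to the top-level CP4I operator from the IBM Operator Catalog:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-cp-integration
  namespace: openshift-operators
spec:
  channel: v1.0
  name: ibm-cp-integration
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
EOF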
37. Installing Operator
The operators available to install are:
• IBM App Connect: provides application integration capabilities and a means to easily create and export flows that run in an App Connect instance.
• IBM Aspera HSTS: provides high-speed transfer integration capabilities.
• IBM DataPower Gateway: provides gateway capabilities.
• IBM Event Streams: provides IBM Event Streams event-streaming capabilities.
• IBM MQ: provides messaging capabilities.
38. Installing Operator
The operators available to install are:
• Cloud Pak for Integration: the top-level CP4I operator, which installs all other CP4I operators automatically. Use this to install the whole Cloud Pak in one operation.
• IBM Cloud Pak for Integration Platform Navigator: provides a dashboard and central services for other CP4I capabilities. It should be installed for most CP4I installations.
• IBM Cloud Pak for Integration Asset Repository: stores, manages, retrieves and searches for integration assets for use within IBM Cloud Pak for Integration and its capabilities.
• IBM Cloud Pak for Integration Operations Dashboard: provides cross-component transaction tracing to allow troubleshooting and investigation of errors and latency issues across integration capabilities, to ensure applications meet service level agreements.
• IBM API Connect: provides API management capabilities.