Migrating to ACE v12 and modernising to containers was the topic of this TechCon 2021 Virtual Experience session. It discussed migrating existing ACE/IIB/WMB deployments and assets to ACE V12/V11 using the mqsiextractcomponents command, which allows existing BAR files to run unchanged on new integration nodes and independent integration servers alongside existing deployments, enabling staged migration. It also covered modernizing integration by moving to containers and taking advantage of new ACE capabilities such as the improved development experience and serverless deployment options.
223: Modernization and Migrating from the ESB to Containers
1.
TechCon 2021 Virtual Experience
Migrating to ACE v12 and modernising to containers
August 25, 2021
David Coles, App Connect Enterprise Senior Developer
Trevor Dolby, App Connect Enterprise Architect
Application Integration | IBM Automation
2. Please Note
IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
3. Trademark Acknowledgements
• IBM and the IBM logo are trademarks of International Business Machines Corporation, registered in many jurisdictions.
• Other company, product and service names may be trademarks, registered marks or service marks of their respective owners. A current list of IBM trademarks is available on the web at "Copyright and trademark information": ibm.com/legal/copytrade.shtml
4.
• Migrating to ACE v12
  – Introduction + fundamentals
  – Migration concepts
  – Migration specifics
    – V11 -> V12
    – V10 and earlier -> V11 or V12
    – Any version -> V12
  – Extract migration approaches
• Modernising to containers
  – Evolution to agile integration
  – Containers versus containerization
  – ACE Development Experience
  – ACE & Transformation Advisor
  – Serverless ACE
6.
• Customers have existing ACE/IIB/WMB deployments and assets and want to bring these forward to ACE V12/V11
• Existing BAR files should run unchanged
  – Subject to as few exceptions as possible
• New V12/V11 integration nodes and independent integration servers can run alongside existing integration nodes and brokers
  – Supports staged migration
• Tools should be available to help migrate configurations from earlier releases
  – Prior to V11 this was mqsimigratecomponents
• There should be help to migrate to new topologies and platforms
7.
• V12 is a major release, hence…
  – Migration is needed for source code and integration nodes
  – Depending on the source version and target topology, some migration of people and processes is needed too
• V11 was a major re-engineering, so migration from pre-V11 versions represents a big change
  – If coming from V9 and earlier, the admin interfaces, admin security and interaction with MQ have changed significantly
  – For those coming from V10 the view will be more similar
  – For those coming from V11 the view will be almost identical
• From V11 there are new topology options with independent integration servers
8.
[Diagram: IIB v10 versus ACE V11 and V12 runtime architecture. IIB v10 runs on a physical/virtual machine with process supervision, an admin web UI, an internal configuration store, configurable services and a node-wide HTTP listener; BAR files are deployed as flows and nodes. ACE V11 and V12 are "unzip and go": an integration node, or a standalone container, uses a public configuration store with policies, each server has its own HTTP listener and admin web UI, and BAR files deploy flows and policies.]
9.
[Diagram: migration considerations across environments, each marked "V10+". Developer system: Toolkit and an integration node with a default QM (MBX — Message Broker Explorer — is gone; admin is via HTTP, not MQ; connect to remote MQ). Operator system: scripts, Java and web admin tooling (supported versions?). QA or PROD system: integration nodes with default QMs (are default QMs still needed?), forked source code, and connections to other enterprise apps such as SAP, CICS and MQ. Also consider the hardware platform + OS.]
10.
• Ensure you're at the latest V12 fix pack level
  – https://www.ibm.com/support/pages/recommended-fixes-ibm-app-connect-enterprise-ibm-integration-bus-and-websphere-message-broker
• Review the Knowledge Center for known changes in behaviour to be aware of
  – https://www.ibm.com/docs/en/app-connect/12.0?topic=migration-behavioral-changes-in-version-120
• Review the "changes of default behaviour in fix packs" tech doc
  – http://www-01.ibm.com/support/docview.wss?rs=849&uid=swg27049142
• Review the release notes for V12
  – https://www.ibm.com/support/pages/node/6457259
• Check any APARs you have in previous versions to ensure the fixes are in V12
• Consider installing V12 in a test environment to have a play
  – Check out the tutorials
12.
• Create a new node or independent integration server
• Configure it in the appropriate manner
  – Watch out for the "unknown" changes which were made to the previous install
• Deploy BAR files
• Run!
• Staged "safe" approach
  – Old systems remain working until new systems are proven
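The staged side-by-side steps above can be sketched as a command sequence. This is an untested, illustrative sketch: node names, paths and the BAR file name are hypothetical, the commands assume an ACE v12 command environment (mqsiprofile), and the deploy step assumes an integration server called server1 already exists on the new node.

```shell
# Option A: new integration node running alongside the old one
mqsicreatebroker NEWNODE                      # create the V12 integration node
mqsistart NEWNODE                             # start it; the old node keeps running
mqsideploy NEWNODE -e server1 -a MyApp.bar    # deploy the existing BAR unchanged

# Option B: independent integration server
mqsicreateworkdir /opt/ace-work               # initialise a work directory
mqsibar -w /opt/ace-work -a MyApp.bar         # unpack the BAR into the work directory
IntegrationServer --work-dir /opt/ace-work    # run alongside existing deployments
```

Either way the original node is untouched, so traffic can be switched back if the new runtime misbehaves.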
13.
• Stop the node at the previous version
• Run mqsimigratecomponents at the new version
  – Translates all deployed resources and configuration to that required by the new version
• Start the node at the new version
• Big-bang approach
  – What happens if things do not work as expected?
    – mqsimigratecomponents had an "undo" option, but that does not remove the risk of code defects
  – The system is down during the migration
• Sometimes the only approach if the original source files/configuration scripts are not available
  – Used to be the only option when there was a 1 broker : 1 queue manager relationship where the QM name could not be changed
    – Not an issue after V10
• mqsimigratecomponents is not available in V11 or V12
  – Mostly equivalent results can be achieved with extract migration
14.
• New command and approach to migration
  – Extract configuration and resources from an existing node (backup)
  – Create an independent integration server work directory
  – Create an integration node
• Cross platform
  – Allows for migration from retired platforms
• Repeatable
• Can roughly achieve in-place migration by adding a delete, then extracting to the same integration node name
• Works with backups from V7 and later
  – V12 officially supports V10 and V11 backups
  – V11 officially supports V9 and V10 backups
  – Older backups are supported in a limited manner
    – Try it; it will probably work, and let us know if it does not
• Can also be used to migrate to different topologies and platforms, and to clone integration nodes
15.
1. Run mqsibackupbroker against an existing node
2. (optional) Transfer the backup zip to a new machine
3. Run mqsiextractcomponents against the backup, specifying the name of the target integration node to extract to
   • A new integration node is created
     – Can optionally delete an existing node with the same name as part of the extract
   • A matching integration server is created for each server of the original node
   • Slightly different behaviours depending on the source version of the backup
     – V10 and earlier:
       – Deployed resources and containers are written to the run directory of each of the new servers
       – Each server gets a default application based on the name of the server, if required
       – Configurable services are converted to policies and added to the node-level "DefaultPolicies" policy project, and so are available to all servers
       – Registry settings are written to node.conf.yaml
       – Resource manager settings from each server are written to the new server's server.conf.yaml
     – V11 and later:
       – Node and server conf.yaml files and run directory contents are copied across with minimal changes
         – Just checked for removed options and policy types
       – Credentials are copied across
   • Web admin users and setdbparms credentials are copied across
4. Run mqsistart against the newly created integration node
5. Flows continue to work as before
   • V10 and earlier: references to configurable services now resolve to the new policies
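The node-to-node extract migration above can be sketched as commands. This is an untested, illustrative sketch: the node names, paths and long-form flag names are from memory and may differ by version, so check the mqsibackupbroker and mqsiextractcomponents command reference for your release before relying on them.

```shell
# 1. On the old system: back up the existing node
mqsibackupbroker OLDNODE -d /backups -a oldnode-backup.zip

# 2. (optional) Copy /backups/oldnode-backup.zip to the new machine

# 3. On the V12 system: extract the backup into a new integration node
mqsiextractcomponents --backup-file /backups/oldnode-backup.zip \
                      --target-integration-node NEWNODE

# 4. Start the newly created node; flows should run as before
mqsistart NEWNODE
```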
16.
[Diagram: how a pre-V11 node backup maps onto a new integration node. Registry settings become node.conf.yaml; per-server resource manager settings become server.conf.yaml; configurable services become policies (plus ISR policies) in the node-level default policy project; deployed resources and (sub)flows go into each server's run directory; DSN data and web users are carried across.]
17.
[Diagram: a V11+ node backup extracted to a new node. node.conf.yaml, each server's server.conf.yaml, run directory contents, DSN data, web users and credentials are copied across essentially unchanged.]
18.
1. Run mqsibackupbroker against an existing node
2. (optional) Transfer the backup zip to a new machine
3. Run mqsiextractcomponents against the backup, specifying a new work directory for an independent integration server plus the name of the server to extract
   • A new work-directory structure is created
   • Slightly different behaviours depending on the source version of the backup
     – V10 and earlier:
       – Deployed resources and containers are written to the run directory
       – The server gets a default application based on the name of the server, if required
       – Configurable services are converted to policies and deployed
       – Registry and resource manager settings are written to server.conf.yaml
     – V11 and later:
       – Applicable node.conf.yaml properties are added to the server conf.yaml files and copied across
       – Run directory contents are copied across with minimal changes
         – conf.yaml and run directory contents are checked for removed options and policy types
       – The node-level policy project is copied into the run directory
       – Server credentials are copied across
   • setdbparms credentials are copied across
4. Start an independent integration server pointing at the new work directory
5. Flows continue to work as before
   • V10 and earlier: references to configurable services now resolve to the new policies
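The node-to-independent-server extraction above, as an untested, illustrative command sketch (server name, paths and long-form flag names are assumptions; verify them against the mqsiextractcomponents command reference for your release):

```shell
# 1. Back up the existing node
mqsibackupbroker OLDNODE -d /backups -a oldnode-backup.zip

# 3. Extract one server from the backup into a standalone work directory
mqsiextractcomponents --backup-file /backups/oldnode-backup.zip \
                      --source-integration-server server1 \
                      --target-work-directory /opt/ace-work/server1

# 4. Run an independent integration server from that work directory
IntegrationServer --work-dir /opt/ace-work/server1
```

The resulting work directory is also the natural starting point for building a container image, as covered later.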
19.
[Diagram: a pre-V11 node backup extracted to an independent integration server. Registry and resource manager settings become server.conf.yaml entries; configurable services become policies (plus ISR policies) in a policy project in the work directory; deployed resources and (sub)flows go into the run directory; DSN data is carried across. Node configuration and web users are node-level items and are not part of the server extract.]
20.
[Diagram: a V11+ node backup extracted to an independent integration server. The server's server.conf.yaml, run directory, DSN data and credentials are copied across, and the node-level default policy project is merged into the run directory; node.conf.yaml and web users remain node-level concepts.]
22.
• Migrating from V11 to V12 should be no more troublesome than applying a fix pack in terms of configuration and behavioural changes
  – No massive re-engineering or changes to administration practices in V12
  – Flows, deployed artifacts and config should all work
• The major differentiator from a fix pack upgrade is the need to create new integration nodes at V12
  – Possibly using mqsiextractcomponents to aid that creation
  – This is required because V12 commands and tooling only operate against V12 nodes, and V11 against V11
23.
• HTTP(S)Connector policies removed
  – Deprecated in 11.0.0.5
  – Use the YAML resource manager settings instead
• See the later slides on any-version -> V12 gotchas
  – JSON validation
  – Local web user admin password
24.
• It is possible to run a V12 independent server against a V11-created work directory
  – But why would you?
  – Independent servers and their configuration are designed to be short-lived and run from containers
    – Ideally you would swap V11 for V12 in your pipeline, build and configure new work directories using V12, and spin up new containers
• To run against a V11-created work directory you will need to handle the migration steps which mqsiextractcomponents takes care of when migrating an integration node
  – HTTP(S)Connector policies will fail to load, so the configuration will have to be changed manually
  – The web user password hash algorithm property will need to be configured, or users will need to reset their passwords
26.
• V11 was a massive re-engineering release that requires significant reorganization of deployed resources and configuration during migration using mqsiextractcomponents
• Some old nodes and resources have been removed, requiring customer action to rework flows
• Some administration commands have been removed or significantly changed, requiring customer action to rework administration practices
27.
• Removed nodes
  – SCA / DecisionServices (replaced with ODMRules) / PHP / pre-V8 mapping nodes
• Configurable services -> policies
  – User-defined configurable services -> user-defined policies
  – Java lookup code will need to be changed
• Massive Integration API changes
  – Watch out if using the IAPI from JavaCompute nodes
• REST API v1 -> REST API v2
• IBX -> Web UI
• Some commands removed + some changed in behaviour
• Platform coverage
• mqsiextractcomponents can help
  – Post-deploy overrides are not extracted
    – Stats, monitoring, flow start/stop state
  – Deploy info is not extracted
29.
• Before V12 the JSON parser did not support validation, and so it silently ignored requests to parse with validation enabled
• From V12 the parser is validation-aware: it checks whether validation is enabled, and if it is, it expects to be supplied with the name of a valid JSON schema file
  – If not configured appropriately, an exception is thrown
• Users may hit unexpected problems on migration with "valid" flows, and so there are resource manager settings to disable the schema lookup exceptions

JSON:
  #disableSchemaLookupExceptionWhen: ''
  # Comma-separated list of events that will not throw exceptions when validation is
  # requested of the JSON parser and a JSON schema cannot be loaded.
  # When an event in the list occurs, no exception will be thrown and instead the JSON
  # parser will continue to parse but without validation.
  # Valid events are 'notSpecified', 'notFound'. If unset, defaults to '', which means
  # that no exceptions are disabled.
  # 'notSpecified' disables the BIP5736 exception that is thrown if a JSON schema name
  # is not supplied.
  # 'notFound' disables the range of exceptions (BIP1318, BIP1312, BIP5737, BIP5738,
  # BIP5739) that are thrown if a JSON schema name is supplied but cannot be resolved.
  # This option is available to allow behaviour from previous releases to be maintained.
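For a migrated server that relied on the old behaviour, the setting can be uncommented with both events listed; a minimal server.conf.yaml sketch using the values documented above:

```yaml
# server.conf.yaml excerpt: continue parsing without validation instead of
# throwing BIP exceptions when a JSON schema name is missing or unresolvable
JSON:
  disableSchemaLookupExceptionWhen: 'notSpecified,notFound'
```

Removing the setting later, once schemas are supplied, restores the stricter V12 behaviour.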
30.
• The default hashing algorithm for local web user passwords has changed in V12 from SHA-1 to PBKDF2 with SHA-512
• On migration the web users are copied across as-is, and so the passwords remain hashed using their original algorithm
• A special algorithm property is also set in the overrides conf.yaml to indicate that the original hashing algorithm is in use
  – mqsiextractcomponents also issues a warning about this setting being added
• It is OK to run with this setting, but we recommend that it is removed and that users then reset their passwords to take advantage of the new, stronger hashing algorithm

RestAdminListener:
  webUserPasswordHashAlgorithm: SHA-1
32.
• mqsiextractcomponents is not a complete like-for-like replacement for the mqsimigratecomponents in-place approach, but the same result can be achieved, particularly when using a V11 backup
• You can back up and delete the existing node, then use mqsiextractcomponents to create a new node with the same name
• If using a V11 backup, your new V12 integration node should now function and be configured as your V11 one was
• However, if using a V10 or earlier backup, some configuration is deliberately not extracted (statistics, monitoring, flow/app start/stop state) and will need to be configured in V11 style
• For V10 or earlier backups you will also still need to update any flows that use the removed nodes or the Integration API, or that access user-defined configurable services/policies
33.
• Side by side is the favoured approach
• Use mqsiextractcomponents to extract the intent of your configuration and get it into V11/V12 formats
  – Take the generated .policyxml files and copy them into a policy project in your Toolkit, ready for deployment to your new node/server
  – Take the generated .conf.yaml files, check them into source control, and use them as-is to configure your new node/server
    – Or use the properties in them to help configure a new node/server
  – With the extracted .dfmxml and .dictionary files you get some form of "source" for artifacts which you had previously deployed and then lost the original source message flow/set files for
• After extracting, you could delete all of the deployed artifacts from the run directories so you're left with a configured node
  – Then re-deploy everything, just to make sure you can
• Extract a whole node to get the configuration, move all of the directories (integration servers) out of the servers directory, then move each server back in one by one to migrate a server at a time
• To pick up any changes since the original backup, just redo the backup and run the extract again targeting a temporary node, then copy the newly extracted files or settings into your original node/server
34.
• As the V12 mqsiextractcomponents supports extracting from a V12 backup, you can use the command to help start a topology migration
  – Back up your V12 integration node
  – Run mqsiextractcomponents, target an independent integration server work directory, and extract one of the integration servers from the integration node
  – You will then have a configured independent integration server work directory which can be used as the basis for moving the server to a container topology
35.
• As the V12 mqsiextractcomponents supports extracting from a V12 backup, you can use the command to clone an existing integration node
  – Back up your V12 integration node
  – Run mqsiextractcomponents and target a new integration node
    – This could be on the same machine but with a different name
    – Or on a different machine, and maybe even a different architecture
      – For example, a Windows to zLinux move
  – The new integration node should be configured identically to the original
• If cloning to the same machine you will need to take note of the flows deployed, any requirements for things to be isolated, and configuration which may need to be updated to avoid clashes
  – For example, admin or HTTP ports, default queue manager names, aggregation node queue manager settings
36.
• You can also use mqsiextractcomponents to help you recover/extend/duplicate your integration nodes at previous versions
• The sparse .conf.yaml files show you which properties you need to configure to replicate most of the old configuration
38.
People & Process · Architecture · Technology
• Development Agility: how can we improve development agility in order to accelerate innovation?
• Deployment Agility: how can we improve build independence and increase production velocity?
• Operational Agility: how can we improve our ability to deliver dynamic scalability and inherent resilience?
39.
[Diagram: the journey from a centralized ESB to agile integration, between engagement applications and systems of record. Stage 1, fine-grained integration deployment: the centralized ESB is broken into smaller integration runtimes (re-platforming). Stage 2, decentralized integration ownership: application teams own their own integrations (application autonomy). Stage 3, socialized APIs: API management and gateways in front of the integrations (socialization/monetization).]
Webinars: http://ibm.biz/agile-integration-webcasts · eBooklet: http://ibm.biz/agile-integration-ebook · IBM Redbook: http://ibm.biz/agile-integration-redbook
40.
First split by business domains and functional areas to ensure high-level autonomy.
[Diagram: before, all domains' integration artefacts in one integration runtime; after, separate integration runtimes for Domain 1, Domain 2 and Domain 3.]
41.
Next, consider non-functionals, such as a) which need a separate pipeline (for agility), b) which need independent scalability, c) which have unique resilience requirements.
[Diagram: within Domain 1, integration artefacts are grouped into runtimes by shared lifecycle (e.g. a shared data model), stable requirements (e.g. integrations that haven't needed changes in years), specific resilience (e.g. require a high replication factor to achieve availability), inter-related availability (e.g. reliant on the availability of one another), technical dependencies (e.g. all require a local MQ server due to 2PC), and elastic scalability (e.g. occasional high spikes in usage).]
http://ibm.biz/aia-granularity
43.
• ACE does not store persistent data, and is best seen as application layer
  – MQSI (long ago) would have fit on the other side
• Code as well as configuration
  – BAR files are not the same as MQSC files
[Diagram: an application layer (transient) — Java, Node.js, ACE, Python — above a persistence layer — database, MQ — providing state storage and messaging.]
44.
• Many existing installations use integration nodes managed by dedicated administrators
• Until ACE v11 this was the only option, and it might still be appropriate
  – For example, if an organisational strategy for containers is still in progress
[Diagram: a source repo (App1, App2, App3) feeds a BAR build into an asset repo (BAR1, BAR2, BAR3); administrators deploy BARs and manage integration servers (labelled ACE DFE 1 and 2) on an ACE integration node hosting the apps, and separately manage the integration infrastructure.]
45.
• Drivers
  – Use containers as lightweight VMs to eliminate on-prem hardware
  – Keeps the admin model the same for ACE
  – Might save on hardware
• Challenges
  – Not what is usually meant by "containerization"
  – Does not provide any modernization benefits
  – Not recommended
[Diagram: an ACE integration node with its integration servers (ACE DFE 1 and 2) hosting App1–App3, all inside a single container on container infrastructure; teams must manage both the container infrastructure and the integration infrastructure.]
46.
• Drivers
  – Preserving the admin model reduces the cost of transition in the short term
  – Lighter containers than option 1
• Challenges
  – Still managed as mini-nodes, which limits modernization gains
  – Still need patching and a potential maintenance window
  – Containers may need persistent storage, complicating container management
[Diagram: ACE server1 and ACE server2 each in their own container on container infrastructure, hosting App1–App3; teams still manage both the container infrastructure and the integration infrastructure.]
47.
• Drivers
  – Industry-standard tools in the pipeline and image build, simplifying management and operations
  – Container-based operation allows for rolling updates and zero downtime
• Challenges
  – Significant change in admin model, requiring greater container literacy on the part of integration staff
  – Often depends on a company-wide cloud and container strategy (if one exists)
  – Requires placement of the correct combination of apps in servers
[Diagram: a source repo (App1–App3) feeds an image build and deploy pipeline; ACE server1 and server2 run in containers on container infrastructure, and only the container infrastructure needs managing.]
48.
• Drivers
  – Industry-standard tools in the pipeline and image build, simplifying management and operations
  – Lighter containers
  – Each application in a separate container, to scale better and allow independent management
  – Often regarded as the "microservice" approach; might go as far as to put each flow in a container
• Challenges
  – Significant change in admin model, requiring greater container literacy on the part of integration staff
  – Often depends on a company-wide cloud and container strategy (if one exists)
  – Overhead in memory and management/licensing/cost
  – Not always needed; only in very-high-traffic scenarios would this be essential
[Diagram: a source repo (App1–App3) feeds an image build and deploy pipeline; ACE server1, server2 and server3 each run a single app in its own container on container infrastructure.]
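The pipeline-built images in options 3 and 4 are typically produced from a Dockerfile along these lines. This is a hedged sketch only: the base image name, paths and BAR file are illustrative assumptions, not the official build, and the mqsi commands assume the base image provides an ACE command environment.

```dockerfile
# Hypothetical ACE base image; substitute your organisation's actual base image
FROM ace-minimal:12.0.1.0

# Bake the application into a server work directory at image build time
COPY MyApp.bar /tmp/MyApp.bar
RUN mqsicreateworkdir /home/aceuser/ace-server && \
    mqsibar -w /home/aceuser/ace-server -a /tmp/MyApp.bar

# Run a standalone integration server from the baked work directory
CMD ["IntegrationServer", "--work-dir", "/home/aceuser/ace-server"]
```

Because the BAR is baked in at build time, the resulting image is immutable and can be rolled out, rolled back and scaled by the container platform without any BAR deployment step at runtime.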
50.
• Components built in containers or VMs (platform- and technology-dependent)
• The main server build integrates the runtime components and uses large-scale container testing for validation
• Combined with the Toolkit for more container testing, and then feeds into ACEcc and ACEoC as well as software and fix pack builds
• Enables agility in implementing features, fixing defects, adding platform variants, etc.
[Diagram: component builds (common, Java, web UI, message catalogs, …) feed a server build with server tests running on container infrastructure; a combined build, with Toolkit build-and-test and combined tests (Windows as well as Linux), feeds the ACEcc image, ACE on Cloud and ACE software deliverables.]
52.
Assess and analyze applications, integrations and supporting infrastructure and services; provide recommendations and automation for app modernization.
• Discovery: inventory and dependencies
• Assessment: classify simple / moderate / complex based on target compatibility
• Code Assistance: potential issues, severity details, possible solutions and estimated resolution effort
• Migration Automation: generate build and deploy artifacts to help on-ramp to the selected target platform and runtime
• Holistic: selectively modernize all the components of your business application as a unit
TA modernizes WebSphere, WebLogic, MQ and ACE/IIB deployments; it is included with and deployed on IBM Cloud Paks.
http://ibm.biz/cloudta
53.
Complexity classifications (per technology):
• Simple — Java EE: no code changes needed; IBM MQ: DNS reconfiguration required; IIB: admin change is required
• Moderate — Java EE: code changes required; IBM MQ: cluster reconfiguration, changing custom logic (e.g. exits); IIB: development change is required
• Complex — Java EE: incompatible technologies, external dependencies; IBM MQ: client authentication reconfiguration; IIB: difficult development task or alternate technology is required
Severity levels (IIB):
• Info: no immediate action is required, but you may wish to be aware
• Warning: immediate action is probably required or advised before you proceed
• Error: you cannot proceed without taking remedial action
58.
• "Serverless" has been used to describe quite a lot of different approaches and technologies
  – Absence of an application server rather than of a physical machine
• An ACE CI/CD pipeline can deploy as a serverless application as well as deploying to containers and integration nodes
• Goal: avoid running an integration server all the time
  – Licensing cost is lower
  – Resource consumption is lower overall
• Two forms: container-based scaling and Function-as-a-Service
  – Knative, KEDA, etc. in the first category
  – Amazon Lambda and IBM Cloud Functions in the second
[Diagram: a cloud function alongside an ACE integration node running on-prem.]
59.
• Knative acts as a front end for ACE
  – Receives HTTP traffic and starts/stops containers as needed
  – Will scale to zero after an inactivity timeout
  – ace-demo-pipeline has a Knative option
• KEDA handles scaling only
  – ACE message flows get messages directly from the MQ queue manager (or another messaging source such as Kafka)
  – Polling-based approach to scaling, where the MQ queue depth determines how many containers will be started
  – Will scale to zero after an inactivity timeout
• Both require a complete ACE container with applications and credentials already set up
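For the Knative case, the front-ending described above is configured as a Knative Service. This is a hedged sketch: the service name, image and registry are hypothetical, and it assumes the container image already contains the ACE server, applications and credentials as the last bullet requires.

```yaml
# Hypothetical Knative Service wrapping a prebuilt ACE container image.
# Knative starts pods on incoming HTTP traffic and, with min-scale 0,
# scales back to zero after the inactivity window.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ace-app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale-to-zero
    spec:
      containers:
        - image: registry.example.com/ace-app:latest
          ports:
            - containerPort: 7800   # default ACE server HTTP listener port
```

Knative routes application HTTP traffic to the pod it has started, so the flow itself needs no changes to run this way.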
[Diagram: Knative infrastructure receives application HTTP traffic and scales ACE app containers up and down; KEDA infrastructure polls the queue depth on the MQ queue manager and scales ACE app containers, with MQ messages picked up directly by the ACE message flows.]
60.
• Running ACE flows as one-shot containers
  – Zero overhead when not actively working
  – Container infrastructure is abstracted: could be any technology that runs containers
    – Cloud Foundry, Kubernetes, Docker Swarm, etc.
  – More setup/teardown, and connections would have to be made every time
    – Startup latency is a concern in many cases
  – Requires more re-engineering of applications and deployment pipelines in many cases
• FaaS is best suited to short-lived applications that do not block on network interactions
  – Matches the traditional out-and-back ACE message flow architecture in use since MQSI v2
  – Much higher latency than a traditional server
  – Works best with occasionally-used flows that can accept high latency, or in situations where container management needs to be abstracted out
[Diagram: cloud function infrastructure spins up a one-shot ACE container per request, for both HTTP and MQ application traffic; the application can connect to a database. A request/response exchange via a Kafka service is shown as two one-shot containers, one for the ACE request and one for the ACE response, each with database access.]
61.
• Function-as-a-Service has higher latency but no idle time
• Knative and KEDA keep using the containers for longer after they have started
• This example shows a short-lived flow, which might not be ideal for FaaS; longer-running flows would amortize the startup cost over a greater period and be more efficient
[Diagram: timeline over 120 seconds. FaaS: one request triggers container start and startup, a single flow run, then container shutdown. Knative or KEDA: container scaling starts the container once; multiple requests produce flow runs 1–3 with idle periods between them.]
This slide exists due to some ongoing confusion around where ACE fits in: for historical reasons, apparently, there is still the idea that ACE is similar to MQ!
People challenge
IBM Cloud Transformation Advisor helps you assess, analyze and modernize middleware-based apps onto IBM Cloud(s). It categorizes Java EE apps and MQ queue managers as simple, medium or complex based on migration complexity and provides guidance for modernization.
The ACE implementation of the Transformation Advisor framework:
• Available through the ACE console since 11.0.0.7
• Built on https://github.com/IBM/transformation-advisor-sdk since 11.0.0.8
• Detects 35 different potential issues and concerns (rules) when migrating to a Docker-based cloud architecture
• https://www.ibm.com/support/knowledgecenter/SSTTDS_11.0.0/com.ibm.etools.mft.doc/bh23394_.html
• Works on backups and BAR files
• Formally supported on V9+; may also work on backups from V7 and V8