The document discusses the Change and Transport System (CTS) in SAP, which helps organize development projects and transport changes between systems. It explains the data structure and customizing in an R/3 system with multiple clients, and provides an overview of how to set up a transport landscape by configuring the transport domain controller, defining transport routes, and implementing a QA approval procedure.
The document proposes using a Kanban board to help manage the migration workflow between operations and project management teams. The board would use cards to visually represent each migration change moving through stages of ready, review, assessment, approval, build and test, deployment approval, and implementation. Cards would include details of the change and responsible parties. All teams would have visibility into what stage each change is at. If changes aren't progressing, project managers could intervene to resolve delays. The board aims to improve collaboration and provide transparency throughout the migration process.
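The staged workflow described above can be sketched as a minimal data model. The stage names come from the summary; the class, field, and card names are illustrative, not part of the proposal:

```python
from dataclasses import dataclass

# Ordered stages as listed in the proposal above.
STAGES = ["ready", "review", "assessment", "approval",
          "build and test", "deployment approval", "implementation"]

@dataclass
class MigrationCard:
    """One migration change on the Kanban board."""
    change_id: str
    owner: str
    stage_index: int = 0  # every card starts in "ready"

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def advance(self) -> str:
        """Move the card to the next stage; refuse once implemented."""
        if self.stage_index == len(STAGES) - 1:
            raise ValueError(f"{self.change_id} is already implemented")
        self.stage_index += 1
        return self.stage

card = MigrationCard("CHG-001", owner="ops-team")
card.advance()  # card is now in "review"
```

Because every card carries its owner and current stage, "all teams have visibility" reduces to listing the cards grouped by `stage`.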
The document discusses configuring the Transport Management System (TMS) in SAP. Some key points:
1. TMS must be configured before using it to transport changes between SAP systems. This includes defining a transport domain, configuring transport routes, and designating a transport domain controller.
2. The transport domain controller manages the overall configuration and transport of changes. It should typically be a production or quality assurance system.
3. Virtual SAP systems can be configured in TMS to model future systems before they are implemented.
4. TMS generates and manages the transport profile for the transport control program tp, which contains database connection details for systems in the transport domain.
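For illustration, the transport profile that TMS generates for tp is a plain-text file of key/value entries, with per-system database parameters keyed by system ID. The fragment below is a sketch only; the system IDs, hostnames, and path are invented, and exact parameter names should be checked against the SAP documentation for your release:

```
# TP_DOMAIN_DEV.PFL -- illustrative fragment, not a real configuration
TRANSDIR     = /usr/sap/trans
DEV/dbhost   = devhost01
DEV/dbname   = DEV
QAS/dbhost   = qashost01
QAS/dbname   = QAS
```

Because TMS regenerates this profile when the domain configuration changes, it is maintained through TMS rather than edited by hand.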
The document provides an overview of iElect5, a benefits administration platform. It describes the platform's architecture including modules for HR administration, census data intake management, document management, and reporting. It also outlines iElect5's process for loading census data, configuring user and client settings, and determining feature releases through customer and market feedback on both minor monthly and major semi-annual releases. Finally, it shares the product roadmap for the next year.
SAP's Change and Transport System (CTS) manages changes and transports for all R/3 systems. It enables administrators to manage change requests and streamline change management across development, test, and production systems. The document discusses transport layers, routes, and strategies, and tools like the Transport Organizer and change requests that allow organizing development projects and transporting changes between systems.
The document announces a seminar hosted by IBM and Peanuts on Tivoli Storage Manager 6.4. The agenda includes introductions, presentations on TSM 6.3 and 6.4 features like deduplication and replication over LAN/WAN, new TSM reporting, TSM and virtualization, and a demonstration.
The document outlines the process for planning and load building, which includes receiving order information, identifying shipment size and modal criteria, selecting transport mode, consolidating outbound shipments, and notifying customers of built loads. It then describes routing shipments by consolidating carrier information, identifying available carriers by mode, evaluating carriers and routes, matching routes to available carriers, and choosing a bid. Finally, it covers tendering the load, validating and signing paperwork, transporting the load, and delivering the shipment.
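The routing step above (evaluate carriers, match routes to available carriers, choose a bid) can be sketched as a simple cost-based selection. The carrier names, rates, and dict layout are invented for illustration:

```python
def choose_bid(bids, mode):
    """Pick the lowest-rate bid among carriers serving the required mode.

    bids: list of dicts like {"carrier": str, "mode": str, "rate": float}
    Returns the winning bid, or None if no carrier serves the mode.
    """
    candidates = [b for b in bids if b["mode"] == mode]
    return min(candidates, key=lambda b: b["rate"]) if candidates else None

bids = [
    {"carrier": "AAA Freight", "mode": "truck", "rate": 1200.0},
    {"carrier": "BBB Rail",    "mode": "rail",  "rate": 900.0},
    {"carrier": "CCC Haulage", "mode": "truck", "rate": 1100.0},
]
winner = choose_bid(bids, "truck")  # CCC Haulage at 1100.0
```

A real implementation would score transit time, capacity, and carrier performance alongside rate, but the filter-then-select shape stays the same.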
This document discusses high availability and resiliency strategies for Microsoft Lync Server 2010. It covers resiliency architectures for branch offices and data centers. For branch offices, it describes the Survivable Branch Appliance which provides basic voice functionality when the WAN is down. For data centers, it explains how Lync pools can fail over to a backup data center and how "paired Standard Edition pools" provide resiliency. The document aims to outline resiliency architectures and capabilities for branch offices and data centers.
Lync Server 2010: High Availability [I3004], by Fabrizio Volpe
Assistant: stores information about the Lync Assistant application
Archiving: stores information about the Lync Archiving application
Monitoring: stores information about the Lync Monitoring application
Compliance: stores information about the Lync Compliance application
Conferencing: stores information about the Lync Conferencing application
Edge: stores information about the Lync Edge application
Exchange: stores information about the Lync Exchange integration application
External: stores information about the Lync External application
Federation: stores information about the Lync Federation application
IM: stores information about the Lync IM application
Provisioning: stores information about the Lync Provisioning application
Voice: stores information about the Lync Voice application
This document outlines a project to optimize an existing service monitoring console (SMC) within a service-oriented architecture framework. The objectives are to investigate data loss issues, compare SMC to an alternative console (TMC), and design an optimized monitoring solution. Key activities include tuning data storage scripts, comparing consoles quantitatively, mapping SMC features to TMC, and improving performance. The timeline outlines tasks like analyzing existing code, creating sample services, and developing enhancements over 16 weeks.
The document describes the settings for clients in a one-system SAP landscape. Each client has a specific role and customizing/development restrictions. The production client is PRD, with other clients like CUST for customizing, SAND for sandbox testing, and QTST/TRNG for integration/training. Changes are transported between clients using client copy or transport requests, following the described processes.
WSO2 Customer Webinar: WEST Interactive’s Deployment Approach and DevOps Practices, by WSO2
To view the recording, use the URL below:
http://wso2.com/library/webinars/2016/06/west-interactives-deployment-approach-and-devops-practices/
For nearly 30 years West Interactive Services has been creating communication solutions that empower enterprises worldwide to strengthen customer engagement. As a customer of WSO2 since 2012, WEST has built solutions using WSO2 API Manager, WSO2 Business Activity Monitor (WSO2 BAM), WSO2 Enterprise Service Bus (WSO2 ESB), WSO2 Data Services Server (WSO2 DSS), WSO2 Application Server and WSO2 Identity Server which facilitate nearly 300 million unique customer interactions each month.
The most recent deployment with WSO2 allows WEST Interactive to expose client connections, data sources, and application logic through a common protocol and messaging architecture. This is achieved using a combination of WSO2 API Manager, WSO2 ESB, WSO2 DSS, WSO2 Application Server, and WSO2 Message Broker. This webinar will discuss the DevOps-related theories and practices that WEST followed while designing, building, and maintaining this part of the solution. These will address the following areas:
Design process of the solution
Deployment and production hardening practices
Runtime artifacts and lifecycle management
DevOps, virtualization and automation
Troubleshooting and debugging practices
Donnie Prakoso, Technology Evangelist, ASEAN, AWS.
Container technology provides unparalleled improvements in the efficiency and agility of packaging and deploying applications. Containers offer VM-like isolation with process-like efficiency and hence are becoming the de facto method for deploying microservices. However, using containers to run services at scale has required that operations teams handle complex, dynamically changing infrastructure requirements, or run the risk of under- or over-provisioning infrastructure. Sounds like going back to the days before Cloud? In this session, learn how AWS services for containers take the pain out of managing infrastructure, and best practices for developing new services rapidly while running them at scale.
The Enterprise IT Checklist for Docker Operations, by Nicola Kabar
Enterprises often have hundreds of legacy applications developed by development teams across multiple business units. This presents a series of challenges to IT teams as they architect and support a complex and diverse IT environment. Add to that Docker, containers, and cloud - going beyond the pilot environment to production requires both the technology and best practices. In this session, we will go through a checklist of considerations and best practices providing a framework for smooth Docker production operations.
Learn how SQL Server on AWS gives you complete control over every setting, without the maintenance, backup and patching requirements of traditional on-site solutions. Discover how to provision and monitor your SQL Server databases in both Amazon RDS and Amazon EC2, and how to optimise scalability, performance, availability, security and disaster recovery.
Tackle Containerization Advisor (TCA) for Legacy Applications, by Konveyor Community
Recording of presentation: https://youtu.be/VapEooROERw
With the adoption of cloud services and the reliability and resiliency they offer, enterprises are eager to understand how many of their legacy applications can be containerized.
We propose Tackle Containerization Advisor (TCA), a framework that provides a containerization advisory for legacy applications.
Given an application description in terms of its technical components, TCA proposes a multi-step process that standardizes the raw inputs, curates the technology stack into various components, detects missing components, and finally recommends the best possible containerization approach.
Presenter: Anup Kalia, Research Staff Member @ IBM Research
GitHub: https://github.com/konveyor/tackle-container-advisor
Modernizing Testing as Apps Re-Architect, by DevOps.com
Applications are moving to cloud and containers to boost reliability and speed delivery to production. However, if we use the same old approaches to testing, we'll fail to achieve the benefits of cloud. But what do we really need to change? We know we need to automate tests, but how do we keep our automation assets from becoming obsolete? Automatically provisioning test environments seems close, but some parts of our applications are hard to move to cloud.
The document discusses BloomReach's efforts to scale their data infrastructure to support hundreds of millions of documents. They implemented an elastic infrastructure called BC2 that dynamically provisions and scales Solr and Cassandra clusters in the cloud on demand. This allows each pipeline or job to have isolated resources, improves performance and stability over sharing clusters, and provides cost savings through only provisioning necessary resources.
Client/server computing is an architecture where thin client machines make requests to centralized servers for applications and data. A basic definition is that a client makes a request for data from a server, which then returns the results. The major focus in client/server systems is on software, with most application processing done on the client side and services like databases accessed from the server side. Common types of servers include file servers, data servers, compute servers, database servers, and communication servers.
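The request/response cycle described above can be sketched with a minimal TCP server and thin client. The port number, lookup table, and function names are arbitrary; the point is the shape of the exchange, with the "database" behind the server and only the request/reply crossing the wire:

```python
import socket
import threading
import time

def serve_once(port):
    """A toy 'data server': answer exactly one request from a lookup table."""
    data = {b"name": b"example-record"}  # stand-in for a server-side database
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            key = conn.recv(1024)              # the client's request
            conn.sendall(data.get(key, b"?"))  # the server's result

def request(port, key):
    """The thin client: send a request and return the server's reply."""
    for _ in range(50):  # retry briefly until the server is listening
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect(("127.0.0.1", port))
                cli.sendall(key)
                return cli.recv(1024)
        except ConnectionRefusedError:
            time.sleep(0.05)
    raise RuntimeError("server never came up")

server = threading.Thread(target=serve_once, args=(50007,))
server.start()
reply = request(50007, b"name")  # the server returns b"example-record"
server.join()
```

A file, compute, or database server follows the same pattern; only what the server does between `recv` and `sendall` changes.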
Azure Container Apps provides a serverless platform for building and deploying containerized microservices applications that automatically scale based on events, with the ability to use any programming language or framework. It integrates with open source tools like KEDA for event-driven autoscaling and Dapr for service invocation and state management to simplify building distributed microservices architectures in the cloud. The document demonstrates how to build a serverless retail application using Azure Container Apps, Cosmos DB, and Service Bus with .NET microservices that scale independently based on events.
Simplify and Scale Enterprise Spring Apps in the Cloud | March 23, 2023, by VMware Tanzu
- Azure Spring Apps is a fully managed service for deploying and managing Spring Boot apps in the cloud without having to learn or manage Kubernetes. It provides auto-scaling, security, high availability, and auto-patching capabilities.
- Managing software updates and security patches across multiple components like apps, dependencies, JDKs, OSes, Kubernetes, etc. is challenging due to the large volume of updates and need for testing and approvals. Azure Spring Apps reduces this burden through auto-patching which applies critical security updates automatically during scheduled maintenance windows.
- Auto-patching helps customers stay ahead of security threats and vulnerabilities by proactively applying patches for exposed issues like the Log4j and OpenSSL vulnerabilities.
Just over a year ago (before becoming the full time chair and advocate of QCon London, San Francisco, and New York), my main role was with HPE as the principal architect for a client in the US public sector.
The systems we supported were responsible for personnel information, scholarships decisions, and record management. Like so many others, we were also faced with legacy applications, COTS product integrations, polyglot code bases, and often brittle deployments. In an effort to decouple code bases and address some of these issues, we started advocating for a Microservice architecture and trying to distinguish it from the SOA practices of the past.
Now, it’s a year later. I have had the incredible opportunity to have access to architects, engineers, and leaders from some of the world’s more respected software companies. These are companies like Uber, Microsoft, Netflix, Apple, Google, Slack, Pinterest, and Etsy. I’ve had the chance to have one-on-one discussions with Chief Architects, developers, and engineers building the apps I most admire and use every day (some leveraging Microservices, some embracing Monoliths, and others falling somewhere in between).
Patterns & Practices of Microservices covers some of the things I wish I had known before beginning a push towards Microservices just over a year ago. It’s the practices of companies leveraging Microservices, the technology tradeoffs when deciding between Monoliths and Microservices, and the advice I’ve heard while interviewing, podcasting, and iterating on presentations from software giants like Adrian Cockcroft, Matt Ranney, Josh Evans, Martin Thompson, and literally hundreds of other engineers who drop knowledge at QCons around the world.
This is an information-packed presentation on data migration by BWIR, a global solutions and services partner for SolidWorks Enterprise PDM. It was showcased at SolidWorks World 2011 and covers data migration from other PDM/PLM systems to SolidWorks EPDM.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with K..., by Confluent
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value. Kafka is providing developers a critically important component as they build and modernize applications to cloud-native architecture. This talk will explore:
• Why cloud-native platforms and why run Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Running Kafka as a Streaming Platform on Container Orchestration
Patterns and Pains of Migrating Legacy Applications to Kubernetes, by QAware GmbH
Open Source Summit 2018, Vancouver (Canada): Talk by Josef Adersberger (@adersberger, CTO at QAware), Michael Frank (Software Architect at QAware) and Robert Bichler (IT Project Manager at Allianz Germany)
Abstract:
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud-native apps. But what to do if you’ve no shiny new cloud-native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
Patterns and Pains of Migrating Legacy Applications to Kubernetes, by Josef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
The technology company announced a new smartphone with an advanced camera, long-lasting battery, and fast processor to compete in the market. The device will cost less than the main competitors and will be available in several colors. The company expects the new handset to help increase its share of the smartphone market.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness and well-being.
Patterns and Pains of Migrating Legacy Applications to KubernetesJosef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
A empresa de tecnologia anunciou um novo smartphone com câmera avançada, bateria de longa duração e processador rápido para competir no mercado. O dispositivo custará menos do que os principais concorrentes e estará disponível em várias cores. A empresa espera que o novo aparelho ajude a aumentar sua participação no mercado de smartphones.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help boost feelings of calmness and well-being.
This document discusses an SAP support pack. SAP provides support packs to help customers install patches, fixes, and enhancements for their SAP systems. Support packs bundle these software fixes and improvements into packages to simplify and streamline the implementation process for customers.
The document describes how to monitor an SAP system using the Computing Center Management System (CCMS), which allows monitoring of components like the R/3 application servers, database, and operating system. It provides details on the monitoring architecture and tools for monitoring specific aspects of the system like users, workloads, buffers, and the database. Critical tasks for monitoring the system are also listed, such as checking backups, application server status, alerts, logs, jobs, locks, and resolving any issues.
This document outlines 10 different client copy functions: SCC1-SCC9 and SCCL. These functions include special client copy selections, client transport, client copy logs, client administration, client deletion, client import, client import post processing, client export, remote client copy, and local client copy.
Background processing reduces the load on dialog work processes by scheduling regular activities to run in the background. A background job consists of one or more steps, which can be an ABAP program, external command, or external program. Jobs are assigned priorities and can be triggered by time or event. The Job Wizard provides an easy way to define a job with general information and start conditions. Job monitoring displays job status and logs.
The document discusses the print and spool system in SAP. It describes the main tasks of the spool system as processing and administering print requests as well as managing output devices. It then explains the information flow from creating a document to printing. Finally, it outlines different access methods for local, remote, and frontend printing and how spool and output requests can be monitored.
The document appears to be a diagram showing night and day processing flows with various steps numbered 1 through 12 across two parallel tracks for dialog and BTC. The night processing occurs first with 11 steps while the day processing has fewer steps and refers back to the night processing.
The document discusses user authorization in SAP systems, explaining that user master records must be set up and assigned roles before users can access the system. A user's menu and authorizations are linked to their user master record via roles, and the user master record stores all user data required for system access across eight categories. Central user administration allows creation and maintenance of all user master data to be performed in a single SAP system.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
The document outlines the steps for installing and setting up an SAP R/3 system, including installing hardware and networking components, the operating system, database, and SAP software. It describes requirements for hardware, software, and networking and recommends following SAP's installation checklist. The document also provides details on directory structures for Oracle databases and SAP instances. Post-installation steps involve configuring transport management, importing profiles, and obtaining a license key from SAP.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise boosts blood flow, releases endorphins, and promotes changes in the brain which help enhance one's emotional well-being and mental clarity.
The document provides an overview of SAP Basis functions and the SAP R/3 system architecture. It describes how when a user sends a request to SAP: 1) It is assigned to a work process by the dispatcher. 2) The work process executes the transaction steps and communicates with the database server. 3) The response is returned to the user via the presentation layer, completing the transaction processing. It also differentiates the various work processes like dialog, update, batch, and spool processes and their roles in transaction handling.
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive functioning. Exercise causes chemical changes in the brain that may help protect against mental illness and improve symptoms.
1. CTS & Transport System
The Change and Transport System (CTS) is a tool that helps you to organize development projects in the ABAP Workbench and in Customizing, and then transport the changes between the SAP Systems in your system landscape.
2. Data Structure of an R/3 System
[Diagram: each client (000, <nnn>, ...) holds its own application data, user data, and client-specific Customizing; cross-client Customizing and the R/3 Repository are shared by all clients.]
3. Types of Adaptation
[Diagram: two types of adaptation — Customizing (client-specific and cross-client) and changes to the R/3 Repository through development, modifications, and customer enhancements.]
4. Consequences: Software Logistics in R/3
[Diagram: different clients for execution, testing, and productive usage of Customizing; separate R/3 Systems (DEV, QAS, PRD) for customer in-house development and for changes made to the R/3 Repository.]
5. TMS: Administering Your R/3 Systems
[Diagram: the domain controller communicates with the other R/3 Systems via RFC; systems sharing a common transport directory form a transport group; all administered systems together form the transport domain. Systems can be deleted from or approved into the domain.]
6. TMS: Configuring Transport Routes
[Diagram: graphical editor showing consolidation routes (with the standard transport layer) and delivery routes between the systems.]
- To insert new systems into the configuration, use drag and drop.
- To define transport routes, insert arrows and choose the type of transport route.
- Distribute and activate the new configuration.
7. Summary: Setting up an R/3 Transport Landscape
1. Make the transport directory available.
2. Configure the transport domain controller and define the domain.
3. Configuration of the transport control program (tp) is performed automatically and does not need to be done at the OS level.
4. In the TMS:
   - Include all remaining systems in the domain.
   - Define the transport routes.
   - Define the QA approval procedure.
5. Set the system change options according to the role of the R/3 System.
6. Create clients and set the client change options for the production system, development system, and so on.
8. Customizing Procedure
[Flowchart: perform Customizing → settings are assigned to a Customizing request (automatic assignment to a task) → when Customizing is finished, release the task → release the change request → export to the transport directory.]
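The release discipline behind this flow — every task in a request must be released before the request itself can be released and exported — can be sketched in a few lines of Python. This is an illustrative model, not an SAP API; the class and method names are invented.

```python
# Illustrative model of the CTS release flow: a change request can only
# be released (and exported) once all of its tasks -- one per user,
# assigned automatically by user name -- have been released.

class ChangeRequest:
    def __init__(self, request_id):
        self.request_id = request_id
        self.tasks = {}          # user name -> released flag

    def record_change(self, user):
        # Changes are assigned to the user's task automatically,
        # creating the task on first use.
        self.tasks.setdefault(user, False)

    def release_task(self, user):
        self.tasks[user] = True

    def release(self):
        # The request is only releasable once every task is released.
        if not all(self.tasks.values()):
            raise RuntimeError("unreleased tasks remain")
        return f"{self.request_id} exported to transport directory"

cr = ChangeRequest("DEVK900016")
cr.record_change("ALICE")
cr.record_change("BOB")
cr.release_task("ALICE")
# cr.release() would fail here: BOB's task is still open.
cr.release_task("BOB")
print(cr.release())
```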
9. Transport Process: Import into Quality Assurance
[Diagram: Development (DEV) / Quality Assurance (QAS) / Production (PRD) — change request CR1 is exported from DEV as data files in the transport directory and imported into QAS; after the import, CR1 fills the import buffer of PRD, where it is initially inactive.]
10. [Diagram: Development / Quality Assurance / Production — after CR1 and CR2 have been verified in QAS (OK), their entries in the PRD import buffer are set active and both requests are imported into PRD in sequence.]
11. [Screenshot: an import queue listing requests 1-7 (DEVK900016, DEVK900018, DEVK900020, DEVK900023, DEVK900002, DEVK900033, DEVK900035), some assigned to project DEV_P00001, with an "Import all requests" function, options for date/deadline and execution, and a separate icon to import a single request.]
12. [Diagram: Development / Quality Assurance / Production — with the TMS QA approval procedure, requests are only forwarded from Quality Assurance to Production after they have been approved (OK).]
Editor's Notes
The R/3 System consists of various data types. Certain types of data are only accessible from a particular client. Such data types include business application data (documents, material master records, and so on) as well as most Customizing settings. These settings:
- Define the customer's organizational structures (distribution channels, company codes, and so on)
- Adjust the parameters of R/3 transactions to fit customer-specific business operations
Client-specific data types are closely interdependent. Thus, when application data is entered, the system checks whether the data matches the client's Customizing settings. If there are inconsistencies, the application data is rejected. Therefore, application data usually only makes business sense in its specific Customizing environment.
In addition to client-specific data, R/3 can have other settings that, once defined, are valid for all clients. This data includes:
- Cross-client Customizing, such as printer settings
- The R/3 Repository, which contains all objects in the R/3 Dictionary (tables, data elements, and domains), as well as all ABAP programs, menus, CUAs, and so on
In the case of cross-client settings, an ABAP report that was originally developed in a certain client may be immediately usable in another client.
Corresponding to the various data types in the R/3 System, there are various types of changes and adjustments to data. The R/3 System is delivered in standard form and must be adjusted to the customer's requirements during the implementation phase. This procedure is called Customizing. As shown in the graphic, Customizing includes both client-specific and cross-client Customizing data. An R/3 upgrade may require a limited amount of additional Customizing. Unlike Customizing, enhancements or adjustments to the R/3 Repository are not required to operate an R/3 System. To adapt the R/3 Repository to a customer's requirements, the customer can develop in-house software. In addition, customer enhancements can be added to the R/3 Repository. In this case, customer-defined objects are used to complement the SAP delivery standard. The precise locations where enhancements can be inserted are specified by SAP. Finally, R/3 objects such as reports and table definitions can be modified directly. In this case, the R/3 Repository delivered by SAP is not merely enhanced; it is changed. During the next R/3 upgrade, these modifications may therefore need to be adjusted before being incorporated into the new Repository. The adjustment can be a time-consuming process.
Due to the R/3 System features described above, the type and number of clients and R/3 Systems are subject to the following requirements. You should not perform Customizing in the production client. For this reason, every implementation of R/3 requires several clients. For larger R/3 installations, different parts of a Customizing project may need to be tested jointly in a separate client. Production operation ultimately requires yet another, final client. At the technical level, the distribution of these clients (as well as any other clients) across the R/3 System depends on whether you make changes to the R/3 Repository. If you make changes, the development and production environments must be subdivided and distributed across several different R/3 Systems. Otherwise, ABAP programs that were created in the development client, but still need to be tested, would immediately become available in the production client. This would cause serious security and performance problems. Therefore, if you plan to make any changes to the R/3 Repository, we recommend that you install at least two, and preferably three, R/3 Systems. You can use the additional R/3 System for mass testing and for quality assurance. In summary: Customizing settings must be transported between clients; changes to the R/3 Repository must be transported between R/3 Systems.
To create a transport domain, call the TMS from client 000: choose Tools → Administration → Transports → Transport Management System. The current system is then automatically defined as the transport domain controller. As soon as the domain has been created, additional systems can apply for acceptance by the domain. For security reasons, these systems are not accepted until they have been authorized by the transport domain controller. The TMS System Overview displays the various system statuses:
- Waiting for acceptance by the domain
- Active
- System locked for the TMS
- System not accepted
- System deleted
Technically, TMS can connect systems with different R/3 release statuses. However, SAP does not support any transports between such systems. Because of its central importance, the transport domain controller should run on an R/3 System with high availability.
To configure the transport routes between the systems in the domain, use the hierarchical list editor or the graphical editor provided by the TMS. Define these settings on the transport domain controller. Transport routes can be either consolidation or delivery routes. Consolidation routes use a transport layer, for example to define a route between the development and the quality assurance system. Delivery routes connect systems, for example the quality assurance and the production system; they do not use transport layers. Create transport routes in the graphical editor using drag and drop. After the transport routes have been configured on the transport domain controller, they can be distributed to all systems in the domain. These settings must then be activated in all the systems in the domain; this can also be done centrally by the transport domain controller. To enable previous configurations to be reused, you can create versions in the TMS.
The steps for setting up a transport landscape are summarized below. To set up an R/3 transport landscape:
1. Make a transport directory available to every R/3 System that will be transporting. The TMS allows a local transport directory for every R/3 System.
2. To configure the TMS, define the transport domain controller.
3. In the TMS, include all remaining systems in the domain and define the transport routes.
4. Set the system change options according to the role of the R/3 System.
5. Create clients in every R/3 System and set the client change options (production system, development system, and so on).
When a Customizing transaction is executed and the settings are saved, the settings are recorded by the Customizing Organizer. These changes are assigned to a change request: either the request already exists (it must not yet have been released) or it is created by the user. Within this change request, the changes are saved in the user's task; this assignment occurs automatically using the user name. As soon as the required settings have been made, you can release the task. When a task is released, documentation can be created to describe the type of change and the reasons for it. After all tasks belonging to a request have been released, the change request can be released. Normally, with this release, the objects are exported to the transport directory, in whichever form they exist in the database at that specific time. Both during the export and during the concluding import into the target system (using the TMS), you should check the transport. The Transport System reports errors using return codes:
- 0 signifies an error-free transport step.
- 4 is a warning.
- 8 or greater signifies an error that requires postprocessing.
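The return-code convention above can be captured in a small helper. This is a sketch for illustration only; tp itself simply exits with these codes, and the helper is not part of any SAP tool.

```python
# Interpret tp return codes as described in the notes:
# 0 = error-free step, 4 = warning, 8 or greater = error
# that requires postprocessing.

def interpret_tp_return_code(rc: int) -> str:
    if rc == 0:
        return "success"
    if rc <= 4:
        return "warning"
    return "error: postprocessing required"

assert interpret_tp_return_code(0) == "success"
assert interpret_tp_return_code(4) == "warning"
assert interpret_tp_return_code(8) == "error: postprocessing required"
assert interpret_tp_return_code(12) == "error: postprocessing required"
```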
Using TMS from within R/3, the second step in the transport process is importing all requests listed in the import queue of the quality assurance system (QAS). TMS starts the transport control program tp at the operating system level. After the successful import into the quality assurance system, the requests are placed in the import buffer and import queue of the production system (PRD), where they are inactive at first.
All thoroughly tested and verified requests that have been imported into the quality assurance system are ready for import into the production system (PRD). Using TMS, you can import all requests (or just the first set of verified requests) listed in the production system import queue in the correct sequence. To ensure that production activities in PRD are not disturbed, ensure that errors and their corrections are imported in the correct order.
To prevent change requests from being imported unintentionally, SAP recommends closing the import queue (setting the end mark) before performing the import. To import all requests in the present queue, choose Queue → Start import. A dialog box is displayed: enter the target client and choose Continue. Imports can be started from any R/3 System in the transport domain. If you start the import from another R/3 System in the transport domain, a logon window for the target system is displayed. After you provide valid logon information, TMS starts the transport control program tp in the target system. The Execution parameter determines whether TMS starts tp synchronously or asynchronously. In the latter case, tp continues working in the background so that your session is not blocked for the duration of the import. As long as the import is running, this is indicated in the import overview. After the import, the queue is opened again automatically by removal of the end mark.
After change requests have been imported, they are marked for import into subsequent systems. If a QA approval procedure is configured, all requests are set inactive. The transport route configuration specifies which change requests are automatically forwarded to which target systems. If an import is started while some requests are inactive, the TMS detects this and rejects the import. A complete import (Import all) ensures that objects in earlier change requests that were corrected in subsequent change requests are replaced by the corrected objects during import; the incorrect objects therefore do not affect your production environment. For each system, you can deactivate complete import with the tp parameter NO_IMPORT_ALL. To import a single request, use tp import <transport request>.
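Why Import all matters can be seen in a short simulation: importing the whole queue in sequence means that an object corrected in a later request ends up with the corrected version in the target system. This is illustrative Python, not tp itself; the request IDs and object name are examples.

```python
# Simulate an import queue: each change request carries versions of
# objects it transports. Importing the full queue in order means
# later (corrected) versions overwrite earlier (buggy) ones.

queue = [
    ("CR1", {"Z_REPORT": "buggy version"}),
    ("CR2", {"Z_REPORT": "corrected version"}),
]

def import_all(queue):
    target = {}
    for request_id, objects in queue:
        target.update(objects)   # later requests win
    return target

target_system = import_all(queue)
assert target_system["Z_REPORT"] == "corrected version"
```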
The TMS QA approval procedure increases the quality and the availability of the production systems by letting you check requests in the quality assurance system before they are delivered to subsequent systems. The system for which the QA approval procedure is activated is called the QA system. When the QA approval procedure is activated, transport requests are only forwarded to the delivery systems if all the QA approval steps have been processed for each request in the QA system, and each request has been approved. (When you configure the QA system, you determine how many QA approval steps have to be processed for each request.) A request is only approved if all the approval step checks are successful. You can only import completely approved requests into the delivery systems. Rejected requests are not imported into the delivery systems of the QA system.
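The gating rule above — a request is forwarded to the delivery systems only when every configured approval step has been approved, and a single rejection blocks it — can be sketched as follows. The step names are invented for illustration; the actual steps depend on how the QA system is configured.

```python
# QA approval gating: a request reaches the delivery systems only if
# every configured approval step has been approved. Unprocessed steps
# and rejections both block forwarding.

APPROVAL_STEPS = ("request owner", "department", "system administration")

def may_forward(approvals: dict) -> bool:
    # approvals maps a step name to True (approved) or False (rejected);
    # steps missing from the dict have not been processed yet.
    return all(approvals.get(step) is True for step in APPROVAL_STEPS)

assert may_forward({s: True for s in APPROVAL_STEPS})
assert not may_forward({"request owner": True})            # steps unprocessed
assert not may_forward({**{s: True for s in APPROVAL_STEPS},
                        "department": False})              # one rejection
```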