The document describes end-to-end root cause analysis capabilities in SAP Solution Manager. It provides an overview of tools for workload analysis, change analysis, exception analysis, and trace analysis that can isolate problems across systems and technologies. These tools aggregate and correlate performance data, changes, exceptions, and traces from different systems to help identify the root cause of issues. The tools have a common navigation paradigm and are designed to simplify problem resolution and reduce support costs.
OnTune is a proprietary software solution from TeemStone, based on precision monitoring and quick analysis. It provides quick determination of the root cause of any performance issue or failure that a system administrator may face. It enables historical performance analysis of the operating system through real-time collection of performance data.
What is Artwork?
Artwork is a simplified collaboration platform for managing artworks in the most effective manner, reducing turnaround time and minimizing the possibility of mistakes.
Why Artwork?
KLD preparation to print proofing
Reminders & escalations
Approval management
Version management
Artwork for industries
1. Pharmaceutical Industries
2. Manufacturing Industries
3. Engineering Industries
4. Healthcare Industries
5. FMCG
Key Features :
1. Approval workflows - Customized workflows and approval matrices for all artworks to gain instant visibility of all user tasks, keeping them on track.
2. Collaboration & Proofing tools - Compare digital versions of artwork to eliminate errors in printed packaging and avoid product recalls.
3. Integrations - Integrate it with legacy systems without any hassle.
4. Repository - The central repository for all the artworks including past versions. Search for anything instantly.
5. Reports and Analytics - Get visibility across the platform with progress reports and get artwork created in less time.
Benefits
1. Mobile & web-based approval & process information
2. Collaboration of all stakeholders on a single platform
3. Live tracking of all artwork processes
4. Elimination of human errors
5. Access to the latest, approved artworks
6. Automated reminders & escalations
“Performance testing is the process by which software is tested to determine the current system performance. This process aims to gather information about current performance, but places no value judgments on the findings.”
SharePoint 2013 DR solution:
An overview of a workable solution
for mid-size Enterprises
An example of implementation and
DR Documentation content
Outline:
- Business Requirements
- Recovery Time Objective (RTO) and Recovery Point Objective (RPO)
- Prerequisites
- Activation Scenarios
- Schedule of events (workflows)
- Logical System overview
- Escalation matrix
- DR procedures
- Health checks
- DR validation exercise
- Event Summary and logs
Discusses the tools for expanding the Chromeleon Chromatography Data System from a standalone workstation to a networked installation across your entire enterprise.
Learn more: http://www.thermoscientific.com/en/about-us/general-landing-page/chromeleon-resource-center.html?ca=chromeleon
Product Information - Fuse Management Central 1.0.0 (antonio.carvalho)
Fuse Management Central is an administration platform for OpenText Content Suite/Extended ECM, enabling centralized management of the system while monitoring its components.
Due to its architecture, it separates system administration from business administration, introducing a new layer of security into OpenText Content Suite administration.
Conventional Software Management: the waterfall model, conventional software management performance. Evolution of Software Economics: software economics, pragmatic software cost estimation. Improving Software Economics: reducing software product size, improving software processes, improving team effectiveness, improving automation, achieving required quality, peer inspections.
The Old Way and the New: the principles of conventional software engineering, principles of modern software management, transitioning to an iterative process. Life Cycle Phases: engineering and production stages, inception, elaboration, construction, and transition phases. Artifacts of the Process: the artifact sets, management artifacts, engineering artifacts, programmatic artifacts. Model-Based Software Architectures: a management perspective and a technical perspective.
Workflows of the Process: software process workflows, iteration workflows. Checkpoints of the Process: major milestones, minor milestones, periodic status assessments. Iterative Process Planning: work breakdown structures, planning guidelines, cost and schedule estimating, the iteration planning process, pragmatic planning.
Project Organizations and Responsibilities: line-of-business organizations, project organizations, evolution of organizations. Process Automation: automation building blocks, the project environment.
Project Control and Process Instrumentation: the seven core metrics, management indicators, quality indicators, life-cycle expectations, pragmatic software metrics, metrics automation. Tailoring the Process: process discriminants.
Future Software Project Management: modern project profiles, next-generation software economics, modern process transitions.
International Logistics & Warehouse Management Thomas Tanel
This presentation takes a quick but discerning look at international logistics and warehouse management, both in terms of today's global supply chain and the demand flow management process, so you can learn how to make the most of them strategically. You've probably heard something about these topics. You may even be somewhat familiar with them. But how much do you really know about their strategic importance?
In an international logistics and warehouse management system, cost-to-cost "trade-offs" available through systems analysis are easy to identify. One example is using premium transportation for small, time-phased purchased lots to reduce inventory investment and lower safety stock. Another might be using a distribution center for freight consolidation or Crossdocking to improve customer service levels and avoid material handling inefficiencies. Yet another might be the use of a blanket agreement (with a rolling forecast) with your supplier. By aligning supplier capacity to your customer schedules and your inventory goals, you gain pipeline visibility through automated order tracking and alerts in addition to lowering costs and raising customer service levels. The overall goal, to achieve a fully integrated logistics approach, is to realize maximum trade-offs among basic functional activities such as warehousing.
Traditional Logistics and Warehousing channels are indeed changing. As organizations move from mass production and mass distribution to lean manufacturing, postponement, and mass customization, creative approaches are needed in the management of logistics and warehousing. The challenge is always present, because different customers may demand different levels of service. Demand often cannot be forecasted, especially if one must deliver customized products or services exactly where the customer needs them on a global scale at multiple locations.
Businesses today must understand that they are competing on the basis of time more than on any other factor. The rigors of international logistics require that you take action to meet your customers’ demand for faster, more frequent, and more reliable deliveries. Your suppliers need to meet increasingly precise inbound schedules. Tomorrow’s customers are more likely to be in another country or on another continent than across town, in another state, or in another province. In addition, different countries use different formats for weights and other units of measure, and many countries and localities have different licensing requirements and charge different duties, value-added taxes (VAT), and fees, all of which amounts to a major content-management challenge for your Global Trade and Logistics IT systems.
Time Management PowerPoint Slides include topics such as: time-wasting culprits and how to eliminate them, strategizing for time management, techniques of organization, prioritizing, to-do lists, scheduling tips and guidelines, 9 ways to handle drop-in visitors, how to say no responsibly, 5 tips to stop procrastination, managing crises, 10 ways to clear your desk, controlling paper, 9 techniques to control telephone interruptions, how-tos, and much more.
Hovitaga OpenSQL Editor is a powerful tool for SAP consultants, ABAP developers, and Basis administrators that helps them work with the database of an SAP system.
This paper gives an overview of the product.
There are many ways to ruin a performance testing project, but only a handful of ways to do it right. This publication analyses the most widespread performance testing blunders. It is impossible in one article to expose every variety of testing wrongdoing; as such, this publication is definitely an open-ended one.
It is designed to automate and streamline IT service, help desk, and customer support processes. It also provides an integrated knowledge base with a built-in customizable search feature, along with instant business intelligence features such as alerts, reports, and emails.
Exploiting Web Technologies to connect business process management and engine... (Stefano Costanzo)
The Business Process Model and Notation (BPMN) standard can be used to represent low-level simulation and automation workflows for scientific, engineering, and manufacturing processes. This poster presents a prototype focused on removing the main obstacles to adoption of the standard and the related technology caused by insufficient collaboration and data management.
Simplify Data Center Monitoring With a Single-Pane View (Hitachi Vantara)
Keeping IT systems up and well tuned requires constant attention, but the task is too often complicated by separate monitoring tools required to watch applications, servers, networks and storage. This white paper discusses how system administrators can consolidate oversight of these components, particularly where DataCore SANsymphony V storage hypervisor virtualizes the storage resources. Such visibility is made possible through the integration of SANsymphony-V with Hitachi IT Operations Analyzer.
Sanjeevi's SDLC Guest Lecture in Anna University campus at AU-PERS Centre (Ye... (Sanjeevi Prasad)
This presentation was used to train students of Embedded Systems at the Anna University campus (AU-PERS Centre), where I (Sanjeevi Prasad) was a guest lecturer in 2003. The centre gave me a letter of appreciation because they saw distinct improvements in their students' performance in project evaluations by the IT professionals of a well-known IT MNC.
In the iterative model, the process starts with a simple implementation of a small subset of the software requirements and iteratively enhances the evolving versions until the complete system is implemented and ready to be deployed.
Library Management System using Oracle database (Saikot Roy)
A library management system built on an Oracle database using PL/SQL.
It provides all the information an Oracle DBA needs; it is simple and easy, with no overhead.
No Java or any other technology is required.
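The abstract describes the system only at a high level. As a rough illustration of the kind of check-then-update logic such a system implements, here is a minimal Python sketch using sqlite3 as a self-contained stand-in; the actual project targets an Oracle database with PL/SQL, and the table names and `issue_book` procedure below are invented for illustration:

```python
import sqlite3

# Hypothetical minimal schema: a books table and a loans table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE books (book_id INTEGER PRIMARY KEY, title TEXT,
                    available INTEGER DEFAULT 1);
CREATE TABLE loans (loan_id INTEGER PRIMARY KEY AUTOINCREMENT,
                    book_id INTEGER REFERENCES books(book_id),
                    member  TEXT, returned INTEGER DEFAULT 0);
""")

def issue_book(book_id: int, member: str) -> bool:
    """Issue a book if it is available; in PL/SQL this would be a stored procedure."""
    row = conn.execute("SELECT available FROM books WHERE book_id = ?",
                       (book_id,)).fetchone()
    if row is None or row[0] == 0:
        return False  # unknown book or already on loan
    conn.execute("UPDATE books SET available = 0 WHERE book_id = ?", (book_id,))
    conn.execute("INSERT INTO loans (book_id, member) VALUES (?, ?)",
                 (book_id, member))
    conn.commit()
    return True

conn.execute("INSERT INTO books (book_id, title) VALUES (1, 'SQL Basics')")
print(issue_book(1, "alice"))  # True: the book was available
print(issue_book(1, "bob"))    # False: already on loan
```

A real PL/SQL implementation would express `issue_book` as a stored procedure with the same check-then-update logic inside a single transaction.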
Association Rule Mining Scheme for Software Failure Analysis (Editor IJMTER)
The software execution process is tracked with event logs, which maintain the execution flow in a textual log file. The log file also records error values and their source classes; these error values are used to analyze software failures. Data mining methods are used to evaluate quality and analyze the software failure rate. The text logs are parsed, data values are extracted from them, and the extracted values are mined with machine learning methods for failure analysis.
Service errors, service complaints, interaction errors, and crash errors are maintained in the log files, along with events and their reactions. Software terminations and execution failures are identified from the log details. A log file parsing process extracts data from the logs, and association rule mining methods analyze the log files to detect failures. The system uses a Weighted Association Rule Mining (WARM) scheme to estimate the failure rate in the software execution flow, and it improves failure rate detection accuracy under the WARM model.
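As a rough sketch of the idea behind weighted association rule mining over log events, the snippet below scores event itemsets by combining their average weight with their support across log sessions. The event names, weights, and scoring formula are invented for illustration and do not reproduce the paper's actual WARM scheme:

```python
from itertools import combinations

# Each session is the set of event types seen in one log segment (illustrative).
sessions = [
    {"service_error", "interaction_error", "crash"},
    {"service_error", "crash"},
    {"service_complaint", "interaction_error"},
    {"service_error", "interaction_error", "crash"},
]
# Higher weight = event considered more indicative of failure (assumed values).
weights = {"crash": 1.0, "service_error": 0.8,
           "interaction_error": 0.5, "service_complaint": 0.3}

def weighted_support(itemset, sessions, weights):
    """Average itemset weight times the fraction of sessions containing it."""
    hits = sum(1 for s in sessions if itemset <= s)
    avg_w = sum(weights[e] for e in itemset) / len(itemset)
    return avg_w * hits / len(sessions)

# Rank 2-event itemsets by weighted support to surface failure patterns.
pairs = [frozenset(p) for p in combinations(sorted(weights), 2)]
ranked = sorted(pairs, key=lambda p: weighted_support(p, sessions, weights),
                reverse=True)
for p in ranked[:3]:
    print(sorted(p), round(weighted_support(p, sessions, weights), 3))
```

Itemsets that combine heavily weighted failure events and occur in many sessions rank highest, which is the intuition behind weighting support by event importance rather than using plain frequency.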
Describes a model to analyze software systems and determine areas of risk. Discusses limitations of typical test design methods and provides an example of how to use the model to create a high-volume automated testing framework.
Enhancing adoption of Open Source Libraries: a case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover test automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities in a test automation solution.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
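The core idea of dropping uninteresting seed bytes can be sketched as a greedy loop: remove a byte, re-run the target, and keep the removal if the observed behavior is unchanged. In the toy sketch below, the `behavior` function is a stand-in for real coverage feedback from a fuzzer such as AFL, and the greedy loop does not reproduce DIAR's actual byte-selection technique:

```python
def behavior(data: bytes) -> frozenset:
    """Stand-in for coverage: which structural features the input exercises."""
    feats = set()
    if data.startswith(b"<"):
        feats.add("open_tag")
    if b">" in data:
        feats.add("close_tag")
    if b"=" in data:
        feats.add("attribute")
    return frozenset(feats)

def trim_seed(seed: bytes) -> bytes:
    """Greedily remove bytes whose removal leaves observed behavior unchanged."""
    base = behavior(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + 1:]
        if behavior(candidate) == base:
            seed = candidate   # byte was uninteresting: drop it
        else:
            i += 1             # byte matters: keep it, move on
    return seed

seed = b"<a x=1>padding padding</a>"
trimmed = trim_seed(seed)
print(len(seed), "->", len(trimmed))  # a much shorter seed, same behavior
```

In a real campaign the behavior check would be an execution of the instrumented target, so each removal costs one run; the payoff is that every later mutation lands on a byte that can actually change the program's behavior.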
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Monitoring Java Application Security with JDK Tools and JFR Events
End-to-End Root Cause Analysis
Minimize the time to incident resolution
End-to-End Root Cause Analysis in SAP Solution Manager offers capabilities for cross-system and cross-technology root cause analysis. Especially in heterogeneous landscapes it is important to isolate the problem-causing component as fast as possible and to involve the right experts for problem resolution. The toolset provided by Root Cause Analysis makes this possible with the same tools regardless of the technology an application is based on, and it allows a first in-depth analysis by a generalist, avoiding the ping-pong between different expert groups during an analysis.
Contents
Introduction
End-to-End Tools in Root Cause Analysis
End-to-End Workload Analysis
End-to-End Change Analysis
End-to-End Exception Analysis
End-to-End Trace Analysis
Availability
Where to find more information
INTRODUCTION
Customers' heterogeneous IT landscapes running mission-critical applications have become increasingly complex during the last decade. Finding the root cause of an incident in such environments can be challenging. This creates the need for a systematic top-down approach to isolate the component causing the problem, supported by tools that help customers do this as efficiently as possible. End-to-End Root Cause Analysis provides tools that support customers and SAP in performing a root cause analysis across different support levels and different technologies. The basic idea behind Root Cause Analysis is to determine where and why a problem occurred.
A typical example (see Figure 1) is an end user who experiences a problem while maintaining his bank account data in the corporate portal. The cause may be on the client PC (e.g. the browser), in the network, or somewhere in the server environment, which itself might comprise different instances of varying technologies. In this example the client request in question first hits an SAP NetWeaver Portal (based on SAP AS Java), then reaches an SAP ERP system (based on SAP AS ABAP) via an RFC call, and finally results in an SQL statement which retrieves information from the ERP database.
Figure 1: A typical scenario for Root Cause Analysis
The performance problem or functional defect might have occurred in any of those systems. SAP's root cause analysis tools help to identify the specific part of the landscape that caused the error.
When such an issue occurs in the customer's productive solution, the central goal of the customer's IT team is to provide an immediate corrective action (workaround) which restores service operations as quickly as possible and affects end users minimally, followed by a complete solution to the issue at hand, achieved by isolating the area of concern.
Additionally, with respect to operations, SAP's root cause analysis tools are designed to reduce the number of resources needed in each step of the resolution process. An IT generalist with core competence in root cause analysis, who involves a component expert where necessary, is usually sufficient to investigate an issue and nail it down.
Therefore Root Cause Analysis offers tools for each task in cross-component (end-to-end) and component-specific analysis. By definition, a cross-component analysis involves several systems or technology stacks, whereas a component-specific analysis deals with one system or technology stack.
Overall, Root Cause Analysis works towards simplifying the problem resolution process within an IT environment and reducing the total cost of ownership. The benefits of SAP's tools for Root Cause Analysis are:
Ensured continuous business availability –
Root Cause Analysis helps to accelerate the
problem resolution process.
Reduced costs for support experts - The
targeted top-down approach of RCA supports
a one step dispatching of issues from an IT
Generalist to a Component Expert.
Reduced license costs - Supporting RCA
Tools offered by SAP are part of the standard
maintenance contract and come at no
additional fee.
One safe access channel to all systems – Root Cause Analysis provides one safe and central access channel to the customer's landscape. If required, an investigation is continued on the system in question using a predefined support user (SAPSUPPORT), who is assigned read-only rights only.
Collected data is displayed in unified views, thereby abstracting the data from the underlying technology stack. This supports the structured top-down analysis, as generalists and experts start investigating at one common point.
Empowers the customer to solve problems
himself - Nobody knows the customer’s
landscape as well as the customer himself.
E2E Root Cause Analysis provides expert
tools which enable a customer to quickly solve
problems, thereby reducing overall resolution
time.
All tools and applications are available in the Root
Cause Analysis Workcenter (see Figure 2) in SAP
Solution Manager.
Figure 2: Root Cause Analysis Workcenter
The tools are grouped into different categories according to their usage in the analysis process, but all tools share a common navigation paradigm. Before you start one of the available tools, a detailed selection allows you to define queries which group systems. Each user can define queries containing the systems, hosts or databases relevant to his work focus (e.g. one query for all production systems, one query containing all CRM systems, and so on). Based on these queries, you select one or multiple entries that you want to analyze and then start a tool in this context.
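The query-based grouping can be pictured with a small sketch. This is illustrative Python, not SAP Solution Manager code; the attribute names and system IDs are invented for the example:

```python
# Sketch: user-defined queries that group systems by attributes,
# mirroring the query-based selection in the RCA Work Center.

def define_query(**criteria):
    """Return a filter that matches systems on the given attributes."""
    def matches(system):
        return all(system.get(key) == value for key, value in criteria.items())
    return matches

def run_query(systems, query):
    """Return the system IDs of all systems matching the query."""
    return [s["sid"] for s in systems if query(s)]

# A hypothetical landscape of managed systems
landscape = [
    {"sid": "PR2", "type": "ABAP", "role": "production"},
    {"sid": "EP1", "type": "Java", "role": "production"},
    {"sid": "QA1", "type": "ABAP", "role": "quality"},
]

all_production = define_query(role="production")
print(run_query(landscape, all_production))  # ['PR2', 'EP1']
```

Each user would keep a set of such queries (all production systems, all CRM systems, and so on) and start a tool in the context of one selection.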
End-to-End Workload Analysis: If a customer faces a performance problem, E2E Workload Analysis might be the tool to start with. Solution Manager regularly collects performance information from each system and makes this data centrally available in the Solution Manager RCA Work Center. The Workload Analysis allows you to identify general performance bottlenecks such as sizing problems, or problems which affect all users of a particular system.
End-to-End Trace Analysis: End-to-End Trace is a tool for isolating a single user interaction through a complete landscape, providing trace information on each of the involved components for that one interaction only, starting with the user interaction in the browser and ending with data being committed to the database. Identifying long-running user requests within a complex system landscape is the most common use case of E2E Trace Analysis, but you can also identify which functional errors occurred during the execution of one request. These exceptions (like a dump) are then attached to the trace, so the trace can also be used for functional testing, ensuring that an activity executed in one system does not lead to functional errors in connected systems during the request execution.
End-to-End Change Analysis: If a system behaves differently after a certain date or change, E2E Change Analysis is the first tool to use. It displays changes (e.g. transports, support package updates, profile parameter changes, …) that have been applied to a system within a certain timeframe. The Change Analysis application is based on a central Configuration & Change Database (CCDB), which provides the foundation for change analysis and change reporting based on daily collection of system-, host- and database-related configuration data. A change analysis can then be performed on all information stored in the CCDB. Furthermore, it is possible to compare different systems and generate a report which contains the results. This approach identifies the problem by comparison rather than by drilling down, which is faster and easier in most cases.
End-to-End Exception Analysis: E2E Exception Analysis provides unified access to exceptions reflected in high-severity log entries and dumps. It provides the basis for statistical analysis of exceptions in the landscape, but also allows you to access component-specific log and dump viewers directly with a jump-in to the appropriate tool on the managed system.
System Analysis: In the case of Java-based systems, the Wily transaction trace can be used to identify which part of a request in a Java environment caused the problem. Wily Introscope is shipped by SAP with preconfigured dashboards, offering dedicated views for the SAP Application Server Java. A deep analysis of Java problems is possible in the investigator mode, which displays detailed performance metrics. Several other system analysis tools such as Thread Dump Analysis, Change Reporting and the Central LogViewer are provided.
Host Analysis: With the Host Analysis section of the RCA Work Center it is possible to analyze the most important OS metrics like CPU, memory, paging, network and disk/file system. In addition, the filesystem browser allows central read-only access to predefined directories on the managed system without having to log on to its console. In the same manner it is also possible, with the OS Command Console, to execute predefined, non-destructive commands (like ping, netstat, iostat and so on). Both tools ensure that even while access to the managed system is granted, no change can be made, no business data can be accessed and no harmful commands can be executed.
Database Analysis: The DB Analysis summarizes DB Performance Warehouse specific metrics across all supported DB types. This also includes standalone DBs that are connected to the DBA Cockpit within Solution Manager.
Client-side Analysis: If the outcome of an analysis is that the problem is located on the client side, SAP offers the tool BMC AppSight to analyze the problem further. At least one resource should exist in the customer's IT department who has received special training for this tool. BMC AppSight is free of charge if used only in combination with SAP systems and with recording profiles offered by SAP.
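The safety guarantee behind the OS Command Console described above is essentially an allowlist: only predefined, non-destructive commands are ever passed to the host. A minimal sketch of that idea (illustrative Python; the command set is an example, not SAP's actual list):

```python
# Sketch: allowlist check in the spirit of the OS Command Console.
# Only the first token (the command itself) is checked here; a real
# implementation would also validate arguments.
ALLOWED_COMMANDS = {"ping", "netstat", "iostat", "vmstat"}

def is_allowed(command_line):
    """Return True only if the command is on the predefined allowlist."""
    tokens = command_line.split()
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS

print(is_allowed("netstat -an"))  # True
print(is_allowed("rm -rf /tmp"))  # False
```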
END-TO-END TOOLS IN ROOT CAUSE
ANALYSIS
All the End-To-End tools available in SAP Solution
Manager are built on the same infrastructure and
follow a common navigation approach.
First, the infrastructure: Data is collected from the managed systems through multiple channels. For ABAP-based systems a direct communication channel from the SAP Solution Manager system to the managed system exists via RFC connections. For all other system types, data is collected via CA Wily Introscope. Introscope is an essential part of the infrastructure and is available free of charge under the so-called Right To View (RTV) license, which allows customers to use the SAP-delivered instrumentation and dashboards without having to buy a separate license. In addition, the Diagnostics Agent is used to collect additional data, like files from the file system, for all system types.
Second, the navigation: All the tools share a common principle. You start with an overview which shows you the most important information in a condensed format; which data gets displayed is defined by a timeframe selection. More detailed information is always available via drill-down. Most of the data presented in the tools is stored in the BW system which is integrated in SAP Solution Manager. The whole data extraction, aggregation and deletion process is managed by the so-called Extractor Framework (EFWK), and no manual interaction is needed. The EFWK has an integrated resource control to ensure that neither the Solution Manager nor the managed systems are overloaded by the data extraction. Housekeeping is also managed by the EFWK, so that data growth remains at a manageable level.
Now to the tools in detail…
End-to-End Workload Analysis
The End-to-End Workload Analysis helps you to get workload information for your complete system landscape in order to analyze overall performance bottlenecks in your solution. Different monitors and analysis tools provide you with key performance indicators for the different components. Most commonly, an initial check of the overall workload is done first. Therefore the workload overview screen summarizes the most important performance KPIs independent of the technology the system is based on (see Figure 3).
Figure 3: Workload Analysis Overview
In addition to the overview, there are different
view types available which display the data in
different ways.
For example the Scatter view (see Figure 4)
displays the data with the Average Response
Time on the x-axis and the Accumulated
Response Time on the y-axis. Similar to the Time
Profile, data points will be displayed as hourly
average values for the chosen timeframe.
When interpreting the diagram, one can structure the displayed area roughly into four quadrants, distinguishing between low and high Average Response Times on the x-axis and low and high Accumulated Response Times on the y-axis. The top-right quadrant, with both high Average Response Time and high Accumulated Response Time, can be regarded as the critical quadrant: values in it have the strongest impact on the system they were measured on.
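The quadrant logic is easy to state in code. A hedged sketch in Python; the thresholds and quadrant labels are illustrative, not SAP defaults:

```python
# Sketch: classifying an hourly workload data point into one of the
# four quadrants of the scatter view.

def quadrant(avg_resp_time, accum_resp_time, avg_threshold, accum_threshold):
    """Place a data point in a scatter-view quadrant."""
    high_avg = avg_resp_time >= avg_threshold
    high_accum = accum_resp_time >= accum_threshold
    if high_avg and high_accum:
        return "critical"            # top right: strongest system impact
    if high_avg:
        return "slow but rare"       # bottom right
    if high_accum:
        return "fast but frequent"   # top left
    return "uncritical"              # bottom left

# One hour of data: 2.5 s average response time, 900 s accumulated
print(quadrant(2.5, 900, avg_threshold=1.0, accum_threshold=600))  # critical
```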
Figure 4: Scatter Chart
From here you can drill down to product-specific KPIs. In the following diagram, you see an example of the structuring for an ABAP-based system:
Figure 5: ABAP Workload Summary
By utilizing this drill-down you can find what is causing the general performance bottleneck and start follow-up activities. It is also possible, with standard BW functionality, to drill down further in the detailed metric views by right-clicking on a column and choosing from the list of available drill-down options (as in Figure 6).
Figure 6: Drill-down options
End-to-End Change Analysis
Changes are a common starting point for problems in your landscape. Imagine that something worked yesterday but is not working anymore today. The first question would be: what changed in between? E2E Change Analysis is the tool to analyze problems like this. Again, the tool starts with an overview of the changes which have been applied to the landscape (see Figure 7). You can use the timeframe mechanism to limit the timeframe to the point when the problem first occurred.
Figure 7: Change Analysis Overview
From here, the Change Analysis application allows you to drill down to the type of changes and from there to the actual changes themselves. The available changes are categorized (again depending on the system type) and then displayed as in Figure 8 (an example for an ABAP system). Here you can easily identify that on a certain date a lot of notes were implemented in a system, or that on another day parameter changes were applied to the system.
Figure 8: Change Analysis Categories
With a further drill-down it is then possible to identify the changes in detail and also get a history of a single change in the Change Reporting application.
Figure 9: Change Reporting with history
In addition to the time-based analysis, it is possible to compare the configuration of two systems against each other. This way you can determine, for example, if you have a problem in your production system which cannot be reproduced in the quality assurance system, what the configuration differences between the two systems are.
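At its core such a comparison is a diff over two sets of configuration parameters. A minimal sketch, assuming configuration is available as flat key/value pairs (the parameter names below are real ABAP profile parameters, but the values are invented):

```python
# Sketch: configuration comparison between two systems, similar in
# spirit to what Change Analysis does on CCDB data.

def compare_config(system_a, system_b):
    """Return parameters whose values differ (including one-sided entries)."""
    diffs = {}
    for key in sorted(set(system_a) | set(system_b)):
        a, b = system_a.get(key), system_b.get(key)
        if a != b:
            diffs[key] = (a, b)
    return diffs

prd = {"rdisp/wp_no_dia": "20", "abap/heap_area_total": "4000000000"}
qas = {"rdisp/wp_no_dia": "10", "abap/heap_area_total": "4000000000"}
print(compare_config(prd, qas))  # {'rdisp/wp_no_dia': ('20', '10')}
```

Such a diff immediately narrows the search to parameters that differ between the system that shows the problem and the one that does not.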
End-to-End Exception Analysis
The E2E Exception Analysis follows the same approach as the two previously described tools, but displays all kinds of exceptions in managed systems. This starts with dumps in an ABAP system but also includes, for example, out-of-memory situations in a J2EE engine. As the information about the exceptions is persisted in the Solution Manager, the exceptions cannot be lost to a round-robin log mechanism or cleanup procedures in the managed system. Of course, not all information is persisted on the Solution Manager system (only the relevant header data needed for error statistics, such as the type of the exception, description, severity, time and date, and the user that caused the exception).
Based on the available statistical information on exceptions, the main use case is an exception trend analysis, to find out whether more or fewer exceptions occur after implementing a specific patch or update. Another use case is to verify that a specific patch or SAP Note fixes a problem and the exception does not occur anymore.
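The trend analysis boils down to counting exceptions before and after a reference date. A generic sketch (illustrative Python; dates and exception types are invented):

```python
# Sketch: exception trend analysis around a patch date, to see whether
# the fix reduced the number of exceptions.
from collections import Counter
from datetime import date

def exceptions_per_day(exceptions):
    """Count exceptions per calendar day."""
    return Counter(e["date"] for e in exceptions)

def trend(exceptions, patch_date):
    """Return (count before patch, count on/after patch)."""
    before = sum(1 for e in exceptions if e["date"] < patch_date)
    after = sum(1 for e in exceptions if e["date"] >= patch_date)
    return before, after

log = [
    {"date": date(2012, 5, 1), "type": "ABAP dump"},
    {"date": date(2012, 5, 1), "type": "ABAP dump"},
    {"date": date(2012, 5, 3), "type": "ABAP dump"},
]
print(exceptions_per_day(log))
print(trend(log, date(2012, 5, 2)))  # (2, 1)
```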
Figure 10: Overview of exceptions
In the overview (see Figure 10) you can identify in the time profile at which time of day which exceptions occur, or use the history diagram to see the distribution of exceptions over the selected timeframe (see Figure 11).
Figure 11: History of exceptions
The drill-down allows you to identify the exact exceptions which are only summarized in this diagram. Depending on the exception type, more data is stored in BW to allow an efficient first analysis, but the complete exception data is always available in the managed system itself and can be reached with a jump-in from the Solution Manager system. In the following example you can see that 90 dumps were triggered in the selected timeframe by a syntax error in a test program. Using the jump-in, you could now navigate to the managed system and analyze the dump in transaction ST22.
Figure 12: List of dumps
Similar possibilities exist for all kinds of supported exceptions, like system and application errors in a J2EE engine with a jump-in to the NetWeaver Administrator.
End-to-End Trace Analysis
The main purpose of E2E Trace Analysis is investigating problems related to a single user activity, caused by one or more user clicks in the front end. A single user click in a Web UI or portal front end can trigger one or more requests to the web server. While the web application running on the web server processes a request, it can trigger several requests to other components (systems) in the solution landscape. These components in turn can trigger additional requests to further components, and so on.
The analysis of performance and functional problems in such a complex landscape is time-consuming and requires expert knowledge. The end-to-end cross-component tracing aims to decrease the time needed to gather the relevant trace data related to a single user activity, and simplifies the analysis by guiding the user through the analysis steps and identifying the component which consumes the biggest share of the total execution time. In addition, it is possible to find exceptions which were triggered by the request in one of the connected components.
All this is achieved with the help of the SAP Passport. The passport is a simple extension of the communication protocols used in an SAP landscape (such as HTTP or RFC) and consists of two parts: a GUID which identifies the request uniquely, and trace flags which indicate which traces should be turned on in the involved components.
The passport is injected on the client side into the communication and is then passed on from system to system. Each system that receives a request containing a passport dynamically turns on its trace for the processing of this request (based on the trace flags). When the trace is written to disk or to the database, the GUID of the passport is included so that the request can be identified later. The passport is also injected into follow-up requests when another call to an external system is made, which then turns on its trace dynamically, writes it to disk, and so on.
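The propagation mechanism can be sketched generically. This is not the real SAP Passport wire format; the header name, flag encoding, and handler shape below are all invented for illustration, but the pattern (a GUID plus trace flags travelling with every request and every follow-up call) is the one described above:

```python
# Sketch: passport-style request correlation. A GUID and trace flags
# travel with the request; every component that honors the flags writes
# a trace record tagged with the same GUID.
import uuid

def create_passport(trace_flags):
    """Client side: generate the correlating GUID and trace flags."""
    return {"guid": uuid.uuid4().hex, "flags": trace_flags}

def handle_request(headers, trace_store, downstream=None):
    """One component: trace if flagged, then propagate to the next hop."""
    passport = headers.get("X-Passport")  # hypothetical header name
    if passport and passport["flags"].get("sql_trace"):
        # The trace record carries the GUID for later correlation.
        trace_store.append(passport["guid"])
    if downstream:
        downstream({"X-Passport": passport}, trace_store)  # pass it on

traces = []
pp = create_passport({"sql_trace": True})
# Portal receives the click, then calls a backend with the same passport.
handle_request({"X-Passport": pp}, traces,
               downstream=lambda h, t: handle_request(h, t))
print(traces)  # the same GUID recorded in both components
```

Because every component tags its trace with the same GUID, all trace fragments for one click can later be collected centrally.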
This means that all traces throughout a landscape can be collected for a single click of an end user, because they are identified by the GUID which was generated on the client side. Based on the collected trace information it is very easy to identify how much time each component took to process the request (e.g. processing in Java in an Enterprise Portal, processing in a connected ABAP system, processing on the DB, in the network, on the client side, …) and to isolate the problem-causing components.
But back to the start: the E2E trace is triggered with the HTTP client plugin for web-based scenarios, or directly from the SAP GUI with the transaction /SDF/E2E_TRACE (starting with SAP Basis 7.20 SP6 it is also possible to start tracing during the execution of a dialog transaction using the function code /$gui_e2e_trace, as in Figure 13). Then the end user performs the activities which are causing the problem, and the whole tracing process is started in the connected backend systems.
Figure 13: SAPgui trace enabling
After the trace has been created, SAP Solution Manager collects all the trace data from all systems and displays the results broken down per step in an overview screen (see Figure 14). The trace overview allows you to identify in which part of the request execution the most time was spent (e.g. client, network, server, …).
Figure 14: Trace Overview
In the given example it can easily be seen that most of the time is spent on the server side, so the next natural step is to look deeper into the server execution and see which server instances were involved. As you can see in Figure 15, the execution of a single request can span multiple systems. Nevertheless, it is also obvious that in this scenario most of the time is spent in the ABAP system PR2 (nearly 30 seconds).
Figure 15: Server summary
Following the drill-down, we can see that of these 30 seconds, 26 are spent on the database, which hints at a problem with a badly performing database, a poorly tuned SQL statement, or the sheer amount of data that is transferred.
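The time-distribution view is essentially a percentage breakdown of the total server time per component. A small sketch using the figures from the example in the text (30 seconds on PR2, 26 of them on the database); the component names are invented:

```python
# Sketch: breaking a request's server time down per component,
# as in the trace time distribution view.

def time_distribution(components):
    """Return each component's share of the total time, in percent."""
    total = sum(components.values())
    return {name: round(100.0 * t / total, 1) for name, t in components.items()}

pr2 = {"database": 26.0, "abap_processing": 3.0, "rfc_overhead": 1.0}
print(time_distribution(pr2))  # database dominates with about 86.7 %
```

A breakdown like this makes the dominant component (here: the database) obvious at a glance and tells the analyst where to drill down next.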
Figure 16: Trace time distribution
Now it is possible to analyze, on a request level, which traces were triggered by the request and thereby identify the function module or the SQL statement which causes the problem. E2E Trace is a very powerful tool which is integrated in all SAP infrastructures and also reused in monitoring & alerting by End User Experience Monitoring. This deep insight into an SAP system becomes more and more useful in today's highly integrated and heterogeneous landscapes.
AVAILABILITY
End-to-End Root Cause Analysis has been available since SAP Solution Manager 7.0 SP12, up to the latest release, SAP Solution Manager 7.1. The main differences between the releases are:
Releases prior to SAP Solution Manager 7.0 EhP1 (SPS18)
Own application in SAP Solution Manager
Java WebDynpro
Based on solutions to perform analysis
Focused on ABAP & Java analysis
SAP Solution Manager 7.0 EhP1 (SPS18) and
later
Integrated in SAP Solution Manager
Workcenters
Based on technical systems (no solution
required anymore)
Flexible query based approach (each user
can define his own queries for grouping
systems)
Focuses on all technologies used by SAP &
Partners
SAP Solution Manager 7.1
Focused on SAP & non-SAP
Integration of further system & technology
types
Personalization options for each user
New applications for Host and Database
analysis added
In addition to these major feature deliveries, new supported products are added in each SP. This starts with the delivery of new instrumentation and dashboards for CA Wily Introscope, but also includes functional enhancements in the E2E applications to support further SAP and non-SAP products. For a detailed description of the added products, please refer to the SAP Notes mentioned below.
WHERE TO FIND MORE INFORMATION
If you want to know more, please consider the following resources:
Root Cause Analysis Home in SDN:
http://wiki.sdn.sap.com/wiki/display/TechOps/RCA_Home
Technical Operations in SDN
http://wiki.sdn.sap.com/wiki/display/TechOps
Application Lifecycle Management in general
http://service.sap.com/alm
Relevant SAP Notes:
1010428 – Supported Products with SAP Solution Manager 7.0 (and EhP1)
1293438 – Supported Partner Products
1478974 – Supported Products with SAP Solution Manager 7.1