The document summarizes an IBM conference session about the IBM Performance Optimization Toolkit (IPOT) for identifying performance problems. IPOT integrates with IBM Rational tools to monitor applications during development, testing and production. It collects resource and transaction data to help correlate problems and determine their root cause. The session agenda included an IPOT overview, examples of using it to analyze issues, and a demo.
About Basics of IBM Rational Performance Tester Tool.
It describes what is RPT? how to do a simple script in RPT.
And how to execute it?.
Its a brief idea about RPT
Rational Performance Tester (RPT) is a tool for performance testing web applications. It can simulate thousands of virtual users to test an application's performance and scalability. RPT works with many web technologies and protocols. It allows recording and playback of tests, monitoring of system resources, and real-time reporting of performance metrics. The presentation provided an overview of RPT's features and capabilities. It also included tips and best practices for creating tests, configuring agents and drivers, and optimizing performance.
By default, IBM® Rational® Performance Tester provides essential performance metrics, such as throughput, response times, concurrency, and success rate. However, it also includes several advanced features for detailed analysis, many of which are not commonly used. Proper use of these options provides deeper insight when analyzing test results. This article gives five tips for using some of these advanced features, all of which have helped tremendously in real-world performance testing projects with large companies.
This document provides an overview of performance testing and the Rational Performance Tester tool. It discusses why performance testing is important, different types of performance testing, performance engineering methodology, performance objectives and metrics. It also provides an overview of the Rational Performance Tester tool, describing its test creation, editing, workload scheduling, execution and results evaluation capabilities.
This document describes testing a website called Plants by WebSphere using IBM Rational Performance Tester. It provides details on recording a test, creating datapools, data correlation, generating different reports on test results, organizing the project using folders, and creating performance test schedules. Testing goals include measuring server response times and identifying potential bottlenecks when the site is subjected to high transaction volumes.
Rational Performance Tester is a tool that identifies system performance bottlenecks. It simplifies test creation, load generation, and data collection to help ensure applications can accommodate required user loads. Scripting involves recording user actions and inserting transaction points. Tests are then executed according to schedules that can run scripts across multiple remote machines in parallel to simulate different user loads.
What is UFT? HP's unified functional testing.Confiz
Unified Functional Testing (UFT) is HP's main automated functional testing tool that allows users to integrate QuickTest Pro with Service Test. UFT enables automated functional testing of applications. It provides various panes like the Solution Explorer, Toolbox, Canvas, and Output panes to design and run tests. Users can create simple tests with activities like Replace String and Concatenate String, connect the test steps, and map data from multiple sources to test functionality.
About Basics of IBM Rational Performance Tester Tool.
It describes what is RPT? how to do a simple script in RPT.
And how to execute it?.
Its a brief idea about RPT
Rational Performance Tester (RPT) is a tool for performance testing web applications. It can simulate thousands of virtual users to test an application's performance and scalability. RPT works with many web technologies and protocols. It allows recording and playback of tests, monitoring of system resources, and real-time reporting of performance metrics. The presentation provided an overview of RPT's features and capabilities. It also included tips and best practices for creating tests, configuring agents and drivers, and optimizing performance.
By default, IBM® Rational® Performance Tester provides essential performance metrics, such as throughput, response times, concurrency, and success rate. However, it also includes several advanced features for detailed analysis, many of which are not commonly used. Proper use of these options provides deeper insight when analyzing test results. This article gives five tips for using some of these advanced features, all of which have helped tremendously in real-world performance testing projects with large companies.
This document provides an overview of performance testing and the Rational Performance Tester tool. It discusses why performance testing is important, different types of performance testing, performance engineering methodology, performance objectives and metrics. It also provides an overview of the Rational Performance Tester tool, describing its test creation, editing, workload scheduling, execution and results evaluation capabilities.
This document describes testing a website called Plants by WebSphere using IBM Rational Performance Tester. It provides details on recording a test, creating datapools, data correlation, generating different reports on test results, organizing the project using folders, and creating performance test schedules. Testing goals include measuring server response times and identifying potential bottlenecks when the site is subjected to high transaction volumes.
Rational Performance Tester is a tool that identifies system performance bottlenecks. It simplifies test creation, load generation, and data collection to help ensure applications can accommodate required user loads. Scripting involves recording user actions and inserting transaction points. Tests are then executed according to schedules that can run scripts across multiple remote machines in parallel to simulate different user loads.
What is UFT? HP's unified functional testing.Confiz
Unified Functional Testing (UFT) is HP's main automated functional testing tool that allows users to integrate QuickTest Pro with Service Test. UFT enables automated functional testing of applications. It provides various panes like the Solution Explorer, Toolbox, Canvas, and Output panes to design and run tests. Users can create simple tests with activities like Replace String and Concatenate String, connect the test steps, and map data from multiple sources to test functionality.
The document provides an overview of Quick Test Professional (QTP), a test automation tool. It discusses key aspects of QTP including recording and running tests, using object repositories, checkpoints, parameters, actions, recovery scenarios, and programmatic descriptions.
The document provides an introduction to Oracle Application Testing Suite. It discusses the FMStocks sample application that will be used for testing purposes. It covers various testing concepts such as test planning, requirements, cases, strategies and approaches like functional testing.
The document describes an automation testing framework based on Business Process Testing. Subject matter experts define business processes, components, and tests, while automation engineers define resources, libraries, and recovery scenarios. Together they build, run, and document business process tests without requiring programming knowledge from subject matter experts. The framework uses HP Functional Test (UFT/QTP) and supports Windows XP/Vista/7 and Internet Explorer 7-11. It includes diagrams of the framework and folder structure, and approaches test automation through requirement gathering, test case identification, script development, and reporting.
The challenge for every product is to ship bug-free code as often as possible. Whether you are an early stage startup with a pilot application or a large corporation with myriad services, you’re dealing with this problem every day.
We usually end up with either too little or too much testing and it’s hard to find the sweet spot. Too little testing and you have bugs and application instability, leading to time spent fixing bugs and manually regression testing your apps. You’re asking yourself, “isn’t there an easier way to do this?” Too much testing and you have slow release times and high automation maintenance costs. In this scenario, you’re asking yourself, “are the bugs I’m catching worth the time I’m spending maintaining this code?”
In this webinar, software engineer Kate Green will go over a framework for evaluating your testing situation in order to find your organization’s sweet spot.
Key Takeaways
- Understanding where you are today
- Identifying weak, brittle, or buggy parts of your application
- Figuring out where to test first, and with what types of tests
- How to pare down an excessively large automation suite
Measuring test effectiveness
RFT Tutorial - 9 How To Create A Properties Verification Point In Rft For Tes...Yogindernath Gupta
This tutorial provides steps to create a properties verification point in RFT to test the properties of an object. The verification point creates a baseline of an object's properties during recording and then compares the properties during playback to identify any changes. The steps include starting the recording, selecting the object, choosing to add a properties verification point, setting verification point options like including children properties, adding a name, selecting standard or custom properties, setting retry parameters, and finishing the recording. Optional steps allow editing selected properties to test and using a datapool variable reference instead of a literal value.
This document provides an introduction to using e-Tester's table tests, dialog manager, and authentication manager features. Key points include:
- Table tests allow validating content in HTML tables by checking specific table cells for expected values
- The dialog manager identifies and handles dialog boxes during playback using defined actions
- Actions for dialog boxes can be databanked by enclosing variables in tags
- The authentication manager automates login for sites requiring authentication using configured usernames and passwords
- Multiple records in the dialog manager with the same caption/text will use the first one
- The fatal checkbox in the dialog manager determines if a script should fail if that dialog appears
- The authentication manager settings would need to be updated if the authenticated URL
This document provides an overview of automation fundamentals and an introduction to QuickTest Professional (QTP) 9.2. It discusses test automation concepts, benefits of automation, the automation life cycle, and factors to consider in automation planning. It also covers supported technologies and browsers in QTP, the QTP user interface, recording and running tests, object recognition, synchronization, checkpoints, parameters, and the object repository. The key points covered in 3 sentences are:
Test automation involves automating manual test cases using a tool to shorten testing time and avoid errors; QTP supports testing various application types and stores objects in its repository to recognize and identify them during testing; Parameters, checkpoints, synchronization, and the object repository are important
TestComplete is an automated testing tool for testing Windows, web, and mobile applications. It provides features like test management, test execution for manual and automated tests, reporting, web and load testing, and supports various programming languages. TestComplete compares favorably to HP/Mercury QuickTest Professional with a lower cost, more programming options, and support for additional browsers and platforms, though QTP may be easier for beginners. TestComplete also supports data-driven and keyword-driven testing to parameterize and maintain test cases.
Script Driven Testing using TestCompletesrivinayak
This document discusses the steps to set up a new automated testing project in TestComplete, including selecting a project suite and test project, choosing a scripting language, creating a tested application to record against, recording a new script that performs actions on the application, and parameterizing the script to run with different input values in a data loop for execution. The document then concludes by thanking the reader.
Keyword Driven Testing using TestCompletesrivinayak
This document discusses how to create a keyword-driven testing script in TestComplete. It describes how to add a new keyword type, name the script, record keywords by selecting the tested application, and make a data loop to parameterize the script. The data loop involves selecting an Excel worksheet, specifying the Excel file, selecting the worksheet again, and specifying loop conditions. Finally, it shows how to parameterize the script by linking keywords to cells in the Excel file.
QuickTest Professional 9.2 is an automated testing tool from HP that allows users to record, edit, parameterize, debug and run functional tests on applications. Key features include ease of use, support for various programming languages and technologies, object recognition capabilities, checkpoints for verification, data-driven testing using parameters and recovery scenario management.
Organisations turn to Agile and DevOps to improve customer experience by maximising the speed of delivery without sacrificing quality. As the champions of quality, testers achieve this goal through continuous testing. Test Automation plays a major role in continuous testing; it is the backbone of the continuous test process. To achieve continuous testing, automation must be applied at every stage of the development process. Developing a smart automation strategy and using the right tools is critical in achieving continuous testing since test scripts must be scalable and easy to maintain.
The document compares three testing frameworks: data-driven, keyword-driven, and hybrid. In a data-driven framework, test scripts read data from an Excel file and write results to another Excel file. A keyword-driven framework uses a test steps Excel file to read keywords and write test cases. A hybrid framework combines elements of both data-driven and keyword-driven frameworks by allowing test scripts to read both keywords and data.
The document discusses a test automation framework (TAF) that helps perform automated testing effectively. It has several key features including being keyword-driven, product-independent, tool-independent, and compatible with continuous integration frameworks. The TAF workflow involves initialization, development, usage, and maintenance phases. The TAF architecture consists of test scenarios that run via the TAF core and output results to various formats.
Creating Digital Confidence with Test AutomationSauce Labs
Engineering teams understand the importance of a comprehensive continuous testing strategy to build digital confidence, improve user experience, and accelerate release velocity. However, when beginning on the journey to continuous testing, the task of building and executing a strategy that provides clear value can be challenging. Whether it’s designing automation from scratch, understanding where you can scale the value of your tests throughout the pipeline and across teams, or demonstrating the value that quality brings to larger business objectives, building a test automation strategy sometimes creates a web that is seemingly too complex to untangle.
In this webinar, Yoosuf Maktoum, Senior Manager of Quality Engineering at Sysco Corporation, will share a test automation design and implementation strategy that helped his own team achieve success. Yoosuf will discuss a unique test automation design pattern that both supports and accelerates automation efforts. This framework, leveraging open source technologies, enables reusability, smarter and faster automation, and is suitable in both a DevOps and Agile workflow. He will then demonstrate how this framework can be applied to a single test automation script for functional and nonfunctional tests, test data management, and test environment management, as well as how it can support testing for both legacy and modern applications. Through these methods, his team is able to provide quality as a service across the organization.
Attendees of this session will learn:
- Basic design principles to consider when building out automation, and how open source can augment your strategy
- How to apply a single automation framework across various types of testing (web, mobile, API, and standalone automation)
- Strategies for aligning testing with business objectives to demonstrate value to leadership
SVR Technologies providing the course content of QTP. It was given by our experts to improve the knowledge of the readers which helps you in interview. For more details about other IT courses please visit http://www.svrtechnologies.com/
The PeopleSoft Test Framework (PTF) allows for automating PeopleSoft test automation. PTF scripts can be used to test new implementations, upgrades, and all kinds of PeopleSoft projects. When a PeopleSoft Update Manager (PUM) image update occurs, it can introduce changes that impact existing PTF test scripts. To help maintain PTF scripts during image updates, PTF provides test maintenance and test coverage reports. These reports identify which test scripts and metadata may be impacted by changes in the new image update. Using these reports helps ensure PTF test scripts are updated and maintained to have complete test coverage across image updates.
Katalon Studio integrated with modeling tools like Microsoft Visio, Sparx Sys...TransWare AG
https://youtu.be/78hN6BF0k0U
Integrate Katalon Studio with modeling tools such as Microsoft Visio, Sparx Systems Enterprise Architect or others like ARIS. This approach links BDD with model-based testing to generate test scenarios and test scenarios collections.
This video demos a showcase:
- Visio flowchart diagram of a business process with business activities
- Mock-up web application supporting the business process
- Katalon Studio test case recording and execution on application forms level
- Visio flowchart with added test case information per business activity
- BPM-X to generate Katalon Studio test suites and collections
- Test execution for all end-to-end test cases of the business process
The presented solution is based on the BPM-X enterprise model integration bus.
BPM-X
…is agnostic of tools and modeling languages
…connects existing modeling and testing tools
…automates the generation of test cases and data
…provides orchestration for test automation tools
IBM Performance Optimizaiton Toolkit for Rational Application DeveloperAshish Patel
This document summarizes an IBM conference session about the IBM Performance Optimization Toolkit (IPOT) for optimizing application performance. IPOT allows developers, testers and support teams to monitor applications in real-time, integrate performance data with development tools, and help determine the root cause of performance issues. The session agenda included an IPOT overview, examples of profiling applications and monitoring resources and logs, and a demo.
Innovating the Software Development Process at Cadence Design SystemsRahul Razdan
Cadence Design Systems faced increasing software development challenges due to mergers, complexity, and globalization. They implemented a holistic solution using Rational tools to improve productivity, quality, and predictability across their distributed teams. This involved establishing processes, metrics, and infrastructure. The results after 6 years included increased testing capacity, more projects and sites, and maintaining high customer satisfaction despite changes. Next steps involve expanding the solution to drive product development for Cadence's customers.
The document provides an overview of Quick Test Professional (QTP), a test automation tool. It discusses key aspects of QTP including recording and running tests, using object repositories, checkpoints, parameters, actions, recovery scenarios, and programmatic descriptions.
The document provides an introduction to Oracle Application Testing Suite. It discusses the FMStocks sample application that will be used for testing purposes. It covers various testing concepts such as test planning, requirements, cases, strategies and approaches like functional testing.
The document describes an automation testing framework based on Business Process Testing. Subject matter experts define business processes, components, and tests, while automation engineers define resources, libraries, and recovery scenarios. Together they build, run, and document business process tests without requiring programming knowledge from subject matter experts. The framework uses HP Functional Test (UFT/QTP) and supports Windows XP/Vista/7 and Internet Explorer 7-11. It includes diagrams of the framework and folder structure, and approaches test automation through requirement gathering, test case identification, script development, and reporting.
The challenge for every product is to ship bug-free code as often as possible. Whether you are an early stage startup with a pilot application or a large corporation with myriad services, you’re dealing with this problem every day.
We usually end up with either too little or too much testing and it’s hard to find the sweet spot. Too little testing and you have bugs and application instability, leading to time spent fixing bugs and manually regression testing your apps. You’re asking yourself, “isn’t there an easier way to do this?” Too much testing and you have slow release times and high automation maintenance costs. In this scenario, you’re asking yourself, “are the bugs I’m catching worth the time I’m spending maintaining this code?”
In this webinar, software engineer Kate Green will go over a framework for evaluating your testing situation in order to find your organization’s sweet spot.
Key Takeaways
- Understanding where you are today
- Identifying weak, brittle, or buggy parts of your application
- Figuring out where to test first, and with what types of tests
- How to pare down an excessively large automation suite
Measuring test effectiveness
RFT Tutorial - 9 How To Create A Properties Verification Point In Rft For Tes...Yogindernath Gupta
This tutorial provides steps to create a properties verification point in RFT to test the properties of an object. The verification point creates a baseline of an object's properties during recording and then compares the properties during playback to identify any changes. The steps include starting the recording, selecting the object, choosing to add a properties verification point, setting verification point options like including children properties, adding a name, selecting standard or custom properties, setting retry parameters, and finishing the recording. Optional steps allow editing selected properties to test and using a datapool variable reference instead of a literal value.
This document provides an introduction to using e-Tester's table tests, dialog manager, and authentication manager features. Key points include:
- Table tests allow validating content in HTML tables by checking specific table cells for expected values
- The dialog manager identifies and handles dialog boxes during playback using defined actions
- Actions for dialog boxes can be databanked by enclosing variables in tags
- The authentication manager automates login for sites requiring authentication using configured usernames and passwords
- Multiple records in the dialog manager with the same caption/text will use the first one
- The fatal checkbox in the dialog manager determines if a script should fail if that dialog appears
- The authentication manager settings would need to be updated if the authenticated URL
This document provides an overview of automation fundamentals and an introduction to QuickTest Professional (QTP) 9.2. It discusses test automation concepts, benefits of automation, the automation life cycle, and factors to consider in automation planning. It also covers supported technologies and browsers in QTP, the QTP user interface, recording and running tests, object recognition, synchronization, checkpoints, parameters, and the object repository. The key points covered in 3 sentences are:
Test automation involves automating manual test cases using a tool to shorten testing time and avoid errors; QTP supports testing various application types and stores objects in its repository to recognize and identify them during testing; Parameters, checkpoints, synchronization, and the object repository are important
TestComplete is an automated testing tool for testing Windows, web, and mobile applications. It provides features like test management, test execution for manual and automated tests, reporting, web and load testing, and supports various programming languages. TestComplete compares favorably to HP/Mercury QuickTest Professional with a lower cost, more programming options, and support for additional browsers and platforms, though QTP may be easier for beginners. TestComplete also supports data-driven and keyword-driven testing to parameterize and maintain test cases.
Script Driven Testing using TestCompletesrivinayak
This document discusses the steps to set up a new automated testing project in TestComplete, including selecting a project suite and test project, choosing a scripting language, creating a tested application to record against, recording a new script that performs actions on the application, and parameterizing the script to run with different input values in a data loop for execution. The document then concludes by thanking the reader.
Keyword Driven Testing using TestCompletesrivinayak
This document discusses how to create a keyword-driven testing script in TestComplete. It describes how to add a new keyword type, name the script, record keywords by selecting the tested application, and make a data loop to parameterize the script. The data loop involves selecting an Excel worksheet, specifying the Excel file, selecting the worksheet again, and specifying loop conditions. Finally, it shows how to parameterize the script by linking keywords to cells in the Excel file.
QuickTest Professional 9.2 is an automated testing tool from HP that allows users to record, edit, parameterize, debug and run functional tests on applications. Key features include ease of use, support for various programming languages and technologies, object recognition capabilities, checkpoints for verification, data-driven testing using parameters and recovery scenario management.
Organisations turn to Agile and DevOps to improve customer experience by maximising the speed of delivery without sacrificing quality. As the champions of quality, testers achieve this goal through continuous testing. Test Automation plays a major role in continuous testing; it is the backbone of the continuous test process. To achieve continuous testing, automation must be applied at every stage of the development process. Developing a smart automation strategy and using the right tools is critical in achieving continuous testing since test scripts must be scalable and easy to maintain.
The document compares three testing frameworks: data-driven, keyword-driven, and hybrid. In a data-driven framework, test scripts read data from an Excel file and write results to another Excel file. A keyword-driven framework uses a test steps Excel file to read keywords and write test cases. A hybrid framework combines elements of both data-driven and keyword-driven frameworks by allowing test scripts to read both keywords and data.
The document discusses a test automation framework (TAF) that helps perform automated testing effectively. It has several key features including being keyword-driven, product-independent, tool-independent, and compatible with continuous integration frameworks. The TAF workflow involves initialization, development, usage, and maintenance phases. The TAF architecture consists of test scenarios that run via the TAF core and output results to various formats.
Creating Digital Confidence with Test AutomationSauce Labs
Engineering teams understand the importance of a comprehensive continuous testing strategy to build digital confidence, improve user experience, and accelerate release velocity. However, when beginning on the journey to continuous testing, the task of building and executing a strategy that provides clear value can be challenging. Whether it’s designing automation from scratch, understanding where you can scale the value of your tests throughout the pipeline and across teams, or demonstrating the value that quality brings to larger business objectives, building a test automation strategy sometimes creates a web that is seemingly too complex to untangle.
In this webinar, Yoosuf Maktoum, Senior Manager of Quality Engineering at Sysco Corporation, will share a test automation design and implementation strategy that helped his own team achieve success. Yoosuf will discuss a unique test automation design pattern that both supports and accelerates automation efforts. This framework, leveraging open source technologies, enables reusability, smarter and faster automation, and is suitable in both a DevOps and Agile workflow. He will then demonstrate how this framework can be applied to a single test automation script for functional and nonfunctional tests, test data management, and test environment management, as well as how it can support testing for both legacy and modern applications. Through these methods, his team is able to provide quality as a service across the organization.
Attendees of this session will learn:
- Basic design principles to consider when building out automation, and how open source can augment your strategy
- How to apply a single automation framework across various types of testing (web, mobile, API, and standalone automation)
- Strategies for aligning testing with business objectives to demonstrate value to leadership
SVR Technologies providing the course content of QTP. It was given by our experts to improve the knowledge of the readers which helps you in interview. For more details about other IT courses please visit http://www.svrtechnologies.com/
The PeopleSoft Test Framework (PTF) allows for automating PeopleSoft test automation. PTF scripts can be used to test new implementations, upgrades, and all kinds of PeopleSoft projects. When a PeopleSoft Update Manager (PUM) image update occurs, it can introduce changes that impact existing PTF test scripts. To help maintain PTF scripts during image updates, PTF provides test maintenance and test coverage reports. These reports identify which test scripts and metadata may be impacted by changes in the new image update. Using these reports helps ensure PTF test scripts are updated and maintained to have complete test coverage across image updates.
Katalon Studio integrated with modeling tools like Microsoft Visio, Sparx Sys...TransWare AG
https://youtu.be/78hN6BF0k0U
Integrate Katalon Studio with modeling tools such as Microsoft Visio, Sparx Systems Enterprise Architect or others like ARIS. This approach links BDD with model-based testing to generate test scenarios and test scenarios collections.
This video demos a showcase:
- Visio flowchart diagram of a business process with business activities
- Mock-up web application supporting the business process
- Katalon Studio test case recording and execution on application forms level
- Visio flowchart with added test case information per business activity
- BPM-X to generate Katalon Studio test suites and collections
- Test execution for all end-to-end test cases of the business process
The presented solution is based on the BPM-X enterprise model integration bus.
BPM-X
…is agnostic of tools and modeling languages
…connects existing modeling and testing tools
…automates the generation of test cases and data
…provides orchestration for test automation tools
IBM Performance Optimizaiton Toolkit for Rational Application DeveloperAshish Patel
This document summarizes an IBM conference session about the IBM Performance Optimization Toolkit (IPOT) for optimizing application performance. IPOT allows developers, testers and support teams to monitor applications in real-time, integrate performance data with development tools, and help determine the root cause of performance issues. The session agenda included an IPOT overview, examples of profiling applications and monitoring resources and logs, and a demo.
Innovating the Software Development Process at Cadence Design SystemsRahul Razdan
Cadence Design Systems faced increasing software development challenges due to mergers, complexity, and globalization. They implemented a holistic solution using Rational tools to improve productivity, quality, and predictability across their distributed teams. This involved establishing processes, metrics, and infrastructure. The results after 6 years included increased testing capacity, more projects and sites, and maintaining high customer satisfaction despite changes. Next steps involve expanding the solution to drive product development for Cadence's customers.
RTCp enables collaborative application development on System i. Combine multiple version control systems into one wether it be RPG, COBOL, Java, .NET, or C++. Execute build and promotion from a centralized interface, move to iterative development planning,and keep track of tasks and defects with work item tracking. View the whole project scope from a central dashboard.
IBM's Problem Determination Tools have evolved since their introduction in 2000 to become more robust and functionally superior through ongoing releases. Customers are migrating to the tools due to issues with older products, demands for more sophisticated development and testing tools, and rising maintenance fees for other solutions. The Problem Determination Tools suite features capabilities for supporting SOA/composite applications, optimizing performance, debugging applications, managing and testing data, and conducting various types of testing.
OSA03 Pourquoi choisir IBM pour vos projets BPM ?Nicolas Desachy
Les solutions et les services BPM proposés par IBM contribuent à optimiser les performances métier grâce à des fonctionnalités destinées à identifier, documenter, automatiser et améliorer en continu les processus. Découvrez dans cette session pourquoi des clients avec des applications Oracle ont préféré le middleware IBM pour leurs projets.
RESTful Work Items: Opening up Collaborative ALM (Rational Software Conferen...Steve Speicher
This document summarizes a presentation about RESTful work items and opening up collaborative application lifecycle management (ALM). It discusses the problem of integrating many different ALM tools, proposes using open standards like OSLC to define REST APIs, and demos integrating Tasktop and ClearQuest using OSLC. The presentation outlines the current state of the OSLC Change Management specification, previews upcoming version 2.0, and concludes by discussing next steps for OSLC adoption.
The document discusses IBM Cognos software and its integration and interoperability with SAP applications and Business Warehouse. Key capabilities of Cognos include optimized access to SAP BW through indexing and caching, as well as real-time reporting and planning using TM1. Cognos provides a unified platform for business intelligence, performance management and planning across various data sources.
MuleSoft Surat Virtual Meetup#4 - Anypoint Monitoring and MuleSoft dataloader.ioJitendra Bafna
The document summarizes an agenda for a MuleSoft meetup discussing Anypoint Monitoring, Anypoint Alerts, MuleSoft dataloader.io, and Runtime Manager insights. It provides information on monitoring application and API performance, setting alerts for errors or thresholds, using dataloader.io to import and export data, and gaining visibility into transactions with Runtime Manager insights. It also demonstrates Anypoint Monitoring dashboards and alert configurations.
Broadcast Music Inc - Release Automation Rockstars!ghodgkinson
The document describes Broadcast Music Inc.'s automation of their software release process using IBM Rational tools. It discusses:
1. BMI's goals for automated release management including assembly, deployment, rollback, and redeployment.
2. How different IBM Rational tools like Team Concert, Quality Manager, and Build Forge are used to automate builds, testing, and releases of various BMI systems like WebSphere, Portal, and DataPower.
3. The technical details of setting up automated builds and deployments using Ant scripts for various components, promoting changes between environments, and storing assembled artifacts.
Rational Insight is an enterprise reporting solution from IBM that addresses challenges in reporting across departments and disparate data sources. It provides automated, reliable reporting and dashboards across projects, teams and tools through integration with IBM Collaboration Lifecycle Management tools. Rational Insight leverages the Cognos BI platform and uses an extract, transform, load process to integrate data into a data warehouse for real-time and historical reporting.
Windows 7 – Application Compatibility Toolkit 5.5 OverviewVijay Raj
This slidedeck was used at the BITPro november monthly UG meet. This session gave a detailed explanation of How the ACT 5.5 tool can be used to mitigate the AppCompat issues. Further, an overview of Windows 7 Core OS changes were also discussed.
UiPath Integration with SAP Solution Manager 7.2Diana Gray, MBA
The UiPath integration with SAP Solution Manager enables customers to obtain maximum value from their investments in SAP software by serving as a hub for all test management activities and as the central information resource for all automation and testing processes.
It offers seamless creation of automated test cases across the whole enterprise landscape including SAP and non-SAP applications and provides SAP customers with the ability to execute test cases, exchange the complex test data, and ensure the functionality of Business Process Change Analyzer for automation projects.
Join to see the integration live, ask the questions and benefit from new knowledge how to speed up your testing, increase the test automation rate and shorten your release cycles.
Embedded software validation best practices with NI and RQMPaul Urban
Embedded control software is growing exponentially in mechanical systems, which forces test methods to evolve even faster. This presentation was part of the Rational Quality Manager enlightenment series describing how National Instruments and IBM provide end-to-end traceability and test component reuse for superior system quality and validation by enabling consistent testing, results analysis, and traceability throughout the development process.
The document provides an overview of the Eclipse BIRT (Business Intelligence and Reporting Tools) project. It discusses the goals and timeline of BIRT, new features in release 2.2 including dynamic crosstabs, new chart types, improved data access, and easier application integration. It previews upcoming features in release 2.3 such as a JavaScript debugger and visual SQL editor. The presentation demonstrates BIRT's report designer and capabilities.
The document discusses TIBCO Spotfire, an analytics platform. It shows how Spotfire connects various clients to data sources via servers. It provides visualizations, analytic engines, and automation services. Spotfire Application Data Services connects Spotfire to enterprise systems like SAP, Siebel, and Oracle by introspecting their data models and delivering the data using SQL. The rest of the document focuses on how Spotfire connects specifically to SAP Business Warehouse (BW) data, discussing the challenges of differing data structures and query languages between Spotfire and BW, and how Spotfire's adapter generates optimized queries and allows unified access to BW data in Spotfire.
Recover 30% of your day with IBM Development Tools (Smarter Mainframe Develop...Susan Yoskin
If you need to attract new developers, and want to keep your company’s name out of the headlines, then this session is for you. When your business depends on your mainframe apps working and performing well—all the time—you need to be alerted to issues as they occur and have the tools to help you find and fix the problems and test your solutions before disaster strikes (we’ve all been in those late night and weekend drills). You also need to continue supporting these applications for years to come, and that will require new talent.
This session will introduce you to the development environments that college grads are already comfortable with, and help your applications become more resilient at the same time. We’ll walk you through the tools to help you accomplish all of this and demo some scenarios to show you how efficiently our tools can perform the tasks that slow you down.
The document discusses IBM's Jazz platform and Rational Team Concert for enabling software delivery in the style of Web 2.0. Rational Team Concert provides capabilities like collaboration, process automation, visibility into project status, and traceability across the development lifecycle. It leverages technologies like Eclipse, supports agile practices, and provides a rich web client for external stakeholders.
This webinar covered the highlights of camunda BPM 7.0, including the new camunda cockpit for process monitoring and operations. Key highlights of 7.0 included improved runtime container integration, clustering support, and a rewrite of the process engine history using an event-based audit log. The webinar also discussed camunda's productization and support roadmap, including the differences between the community and enterprise editions and various support service level agreements. Planned features for the 7.1 release include improvements to the camunda modeler, additional application server integrations, and enhancements to the cockpit and tasklist.
Delivering New Visibility and Analytics for IT OperationsGabrielle Knowles
The document discusses how Splunk provides visibility and analytics for IT operations. It outlines Splunk's ability to ingest data from various sources like applications, databases, networks and more. This gives organizations a universal platform to gain operational visibility, enable proactive monitoring, and obtain business insights from their machine data in real-time. Splunk differentiators include analyzing all data, scaling for large environments, and reducing MTTR, costs and improving user experiences.
Similar to IBM Performance Optimizaiton Toolkit for Rational Performance Tester (20)
In this session we will explore how Cloud Native technologies require us to re-think the way businesses create and scale modern digital solutions. We will explore the trends that are driving the adoption of these technologies and the key use cases for their application. Most importantly, we will uncover the business problems that these technologies are most effective at solving. While many tools exist for Containers, Microservices Architecture, DevOps, and Continuous Delivery processes involved in Cloud Native development, we aim to provide best practices and guidance on how to approach these business problems when solutioning using the Microsoft Azure platform.
American Marketing Association, Legendary Leadership Series: Think like a sof...Ashish Patel
Software has been eating the world for more than a decade.
And it has been transforming new business models through platforms and ecosystems that leverage data
It’s important to think deeply about what your company does today, what is its mission?
• Are you a car maker? Are you a service provider of financial services?
• Now I challenge you to re-think that.
o If you are a car maker today; what are the possibilities if you thought of yourself as a software company that happens to make cars?
o Or a software company that happens to offer financial services?
We will explore how to Think like a Software Company on October 16th, see you there!
Join Tony Chapman and I as we host the legendary leadership series
Digital Transformation: Embracing a Growth MindsetAshish Patel
Transformation is driving innovation in mindset, business processes and models along with the associated technology to support initiatives. The typical "we've always done it this way“ approach simply no longer cuts it in the increasingly competitive digital age. The best run companies are aware of this and leverage old and new technology to create innovative products and services, gain competitive advantage and enhance customer interaction, all while ultimately improving the bottom line. However, with a reported 80% of IT budgets being spent to maintain existing legacy systems - leaving little to no money for new technologies - it leaves IT and Line of Business executives with a conundrum of introducing new systems without disrupting existing, trusted legacies... so how do we make a digital transformation successful? Come learn from one in flight.
Can your business survive the next disaster?Ashish Patel
Did you know that 40% of businesses do not re-open after a disaster? Or that it could cost an organization up to $600,000 per hour during a disaster scenario? In today’s “always on” world, businesses must continue to operate no matter what, which means that critical IT infrastructure must be available 24/7/365. In this session we will learn more about a holistic approach towards business continuity & IT resiliency and how organizations can achieve high levels of availability. We will also go over each stage of the business continuity lifecycle and talk about the importance of managed services, key processes and technologies that must be considered for a comprehensive Business Continuity & Resiliency plan.
Where in the world is your Corporate data?Ashish Patel
Your employees – and your company data – are on the go every day. As a result, your employees are relying on the use of 3rd party online services without IT approval – that is Shadow IT in your own organization. That’s some risky business. Where in the world is your Corporate Data?
With TeraGo Cloud Drive we are giving you back control of your most valuable asset, your data.
In this webinar you will learn about:
How Shadow IT is picking up velocity due to the accessibility and ease of cloud applications
Consequences of weak corporate security mechanisms
How to give your IT department control of your data and its’ security
This document discusses DevOps and its challenges in the enterprise. It identifies 5 common pitfalls that enterprises face when adopting DevOps: 1) lack of understanding of DevOps terminology, 2) balancing development and operations interests and accountability, 3) establishing the correct culture, 4) finding champions for buy-in, and 5) justifying DevOps to the business. It then provides recommendations for addressing these challenges, such as focusing on customer experience, using cloud services to improve processes, and establishing metrics to measure DevOps success.
IBM Cloud OpenStack Services provides a managed private cloud built on OpenStack that offers flexibility, scalability, and security. Key benefits include predictable pricing with monthly subscriptions to scale resources up or down, as well as dedicated infrastructure to avoid noisy neighbors. IBM manages the OpenStack management systems, network gateways, compute, and storage hardware to deliver a turnkey private cloud solution.
IBM Corporate Services Corps - Experience in MalaysiaAshish Patel
The document summarizes IBM's Corporate Service Corps program, which sends IBM employees to work on projects in developing countries similar to the Peace Corps. It describes a team of IBMers who worked in Malaysia on projects with two organizations: the Spastic Children's Association of Johor and the Handicapped and Mentally Disabled Children's Association Johor. The team helped develop strategies for improving computer education and marketing/fundraising capabilities at the respective organizations over the course of 4 weeks.
This document discusses security challenges and solutions related to cloud computing. It begins by outlining common business and IT challenges, then defines cloud computing and reviews security concerns such as data privacy, reliability, and loss of control. The document proposes that identity and access management, data security, and regulatory compliance are top security risks for cloud computing. It presents IBM solutions for privileged user access control, identity federation, and application isolation that aim to address these risks.
Application Response Measurement (ARM) based Monitoring for EclipseAshish Patel
This document discusses ARM-based performance monitoring for the Eclipse platform. It provides an overview of Eclipse and the Test and Performance Tools Project (TPTP). It describes how Application Response Measurement (ARM) is used to measure transaction response times across distributed systems. The architecture inserts ARM instrumentation into applications using bytecode instrumentation or aspects. A demonstration is provided and future enhancements are discussed, such as supporting more application types and platforms. Instructions for getting started with the ARM monitoring capabilities in Eclipse are also included.
Using and Extending the Eclipse Test and Performance Tools Platform (TPTP) fo...Ashish Patel
The document discusses using the Eclipse Test and Performance Tools Platform (TPTP) for data collection in self-healing systems. TPTP provides a framework and tools for collecting log and trace data from different systems through common interfaces. It defines common data models and agents that can collect log, trace, and statistical data. The collected data is normalized and can then be analyzed to help identify problems and enable self-healing capabilities through correlation of events.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
2. IBM Rational Software Development Conference 2006
Session: SQ11
Agenda
IBM Performance Optimization Toolkit (IPOT) overview
IT Lifecycle Management
Problem Determination by Example
Simple Performance Test
Resource Monitoring during Performance Testing
Root Cause Analysis using Application Trace data
Demo (30 min)
Online References
Q&As (10 min)
3. IBM Rational Software Development Conference 2006
Session: SQ11
IBM Performance Optimization Toolkit Overview
Who are the IPOT users?
Any developer who wants to identify root cause of
performance problems and accelerate problem determination
occurring in development environment.
Any tester / performance tester who wants to identify root
cause of performance problems and accelerate problem
determination occurring in test environment.
Any support developer who wants to identify root cause of
performance problems and accelerate problem determination
occurring in production environment.
4. IBM Rational Software Development Conference 2006
Session: SQ11
IBM Performance Optimization Toolkit Overview
Why would they use IPOT?
IPOT provides transaction decomposition for application
optimization
Allows the developer/tester to monitor a distributed application in
real-time
Provides a Data Collection Infrastructure (DCI) for real-time
monitoring
Integrates with Rational Software Development Platform
IPOT accelerates problem determination by correlating the
different data collection views (logging, performance and
resource monitoring data) by time.
IPOT allows to import data from production environment and
visualize it for analysis and correlation in the development/test
environments.
5. IBM Rational Software Development Conference 2006
Session: SQ11
IT Lifecycle Management
ITBusiness
Application
Development
Problem
Determination
Development
Team
Deployment2
1
3
Operations
Team
6. IBM Rational Software Development Conference 2006
Session: SQ11
Problem Determination by Example
An example: Plants By WebSphere
Application ServerWeb Server Database Server
Application Server
Data Collection
Infrastructure
IPOT Agent
ITM Agent ITM Agent
Steps to collect resource data
Steps to collect application performance data
ITM Agent
10. IBM Rational Software Development Conference 2006
Session: SQ11
Overlay resource counters with the test results
Resource counters are statistical data that change over time
Data is collected from statistical resource data collecting agents,
such as ITM agents, Windows Performance Monitor, Rstatd
Correlate the resource data with the results to identify the
location where performance degradation might have occurred
Problem determination in real-time
Steps to Problem Determination
18. IBM Rational Software Development Conference 2006
Session: SQ11
Architecture: Statistical Data
Target System
IBM Tivoli Monitoring
Data Warehouse
Management Server
SOAPServer
Presentation System
IPOT
Web Service Client
RPT
Eclipse Platform
ITM Agent
(Linux OS)
ITM Agent
(Windows OS)
ITM Agent
(Unix OS)
ITM Agent
(DB2)
…
Internet
(HTTP/HTTPs)
19. IBM Rational Software Development Conference 2006
Session: SQ11
Root cause analysis using application trace data
Performance metrics measured during tracing helps to drill
down to possible causes of performance degradation
Real-time data collection using the Data Collection
Infrastructure (DCI) for ARM-instrumented applications
Application Response Measurement (ARM) is an open
standard.
Steps to Problem Determination
27. IBM Rational Software Development Conference 2006
Session: SQ11
Architecture: Performance Data
Target SystemPresentation System
IPOT
RPT
Eclipse Platform
Performance Schedule
Performance Test
AgentController
AgentController
HTTP Request
(HTTP Header includes
ARM_CORRELATOR)
Data Collection
Infrastructure
Application Server
Application
IPOT Agent
28. IBM Rational Software Development Conference 2006
Session: SQ11
Analyze applications deployed to a pre-production or
production environment using Tivoli products
Provides a developer a realistic view of the events in a
production environment for root cause analysis
Import performance data from IBM Tivoli Composite
Application Manager (ITCAM) products after the application
has executed
Import statistical data from IBM Tivoli Monitoring (ITM) after the
application has executed
Steps to Problem Determination
29. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
30. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
31. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
32. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
33. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
34. IBM Rational Software Development Conference 2006
Session: SQ11
Import Performance Data
36. IBM Rational Software Development Conference 2006
Session: SQ11
Architecture: Import Performance Data
Target System
IBM Tivoli Composite Application
Manager
Data Warehouse
Management Server
WebServices
Presentation System
IPOT
Web Service Client
RPT
Eclipse Platform
Management
Agent
Management
Agent
Management
Agent
Management
Agent
…
Internet
(HTTP/HTTPs)
37. IBM Rational Software Development Conference 2006
Session: SQ11
Import Statistical Data
38. IBM Rational Software Development Conference 2006
Session: SQ11
Import Statistical Data
39. IBM Rational Software Development Conference 2006
Session: SQ11
Import Statistical Data
40. IBM Rational Software Development Conference 2006
Session: SQ11
Import Statistical Data
41. IBM Rational Software Development Conference 2006
Session: SQ11
Import Statistical Data
47. IBM Rational Software Development Conference 2006
Session: SQ11
Eric Labadie
Ashish Patel
http://www-128.ibm.com/developerworks/rational/library/05/523_perf/
Thank You
Editor's Notes
Please note that if you simply apply this template to your existing presentation, you risk not updating the notes and handouts masters of your presentation. Please follow the steps below to copy your existing slides into the new template.
1. Download the new template to your hardrive
2. Open existing presentation that needs to be updated with the new template
3. Go to slide sorter view from the "View" menu
4. Press <Control> "A" to select all slides in this view
5. Press <Control> "C" to copy all slides in this view
6. Select "Open" from the File menu and open the new template
7. Go to slide sorter view from the "View" menu
8. Press <Control> "V" to paste the slides
9. Select slides 1 and 2 which are no longer needed in your presentation and press <Control> "X" to cut
10. Select “Save As” from the File menu and rename the file
Reformatting Issues:
Replace Fonts (especially Times New Roman)
Due to a PowerPoint limitation, it’s advisable to run the “replace fonts” option after applying the template
Select Format/Replace Fonts and select desired font to be changed (Times New Roman and Arial are commonly seen here to be replaced with Arial Narrow)
Correct Colors - graphic objects and text created not using the auto layout features may not automatically convert. Some reformatting may be necessary.
If slide background colors are incorrect, reset slide background color to autocolor
If graphics are colored incorrectly, use the colors that are built-in to the template already as these are pre-approved colors
Slide Layouts
It may be necessary to reapply slide layouts to problem slides. To do this, from the View menu, make sure “Task Pane” is selected. Select “Slide Layout” from the Task Pane and select the desired layout to reapply. Sometimes this action needs to be applied twice in order for layout to readjust.
i/t has to be ondemand just as much as the business has to be ondemand w/ their clients => IBM has to respond to that
- Optimize application before deploying to prevent problems before it happens in the first place
Reduce business downtimewhile accelerating business value throughput…
Quickly discover and understand application-level errors even after deployment
Speed Tivoli-aware application fix and (re)build
Optimize and accelerate (re)deployment
…by bridging developmentand operations teams
Example: Trying to run a test to identify if performance problems exist in a multi-user environment.
Diagram: Can be distributed on separate machines.
Workbench
collect performance and resource monitoring data
import data from historical systems
Data Collection Infrastructure (DCI)
Data Collection Agent
IBM Tivoli Data Collection
IBM Remote Agent Controller
IPOT Agent is reported the ARM Events. ARM Events are collected and organized into a transactional hierarchy, from the root transaction to all of its sub-transactions. This hierarchy is then converted into TPTP Trace events and sent to the Presentation System.
Need DCI installed on the Presentation System if trying to Profile J2EE Performance Metrics while executing a Performance Schedule or Test.
Resource Monitoring
We provide data collection (via the RAC) from Windows/Linux machine, JBoss/JOnas
IPOT adds value by having the ITM infrastructure in place because they support a wide array of ITM agents
Here is what you can do today with RPT….
Today there is a need to look into a specific page and how it’s page elements compare. Viewing performance and statistical data for a particular page helps to isolate the exact location of the problem.
Steps to determine the cause of the problem:
Overlay resource counters with the test results.
- Resource counters are statistical data that change over time
- Data is collected from statistical resource data collecting agents, such as ITM agents
Correlate the resource data with the results to identify the location where performance degradation might have occurred.
- Analysis method can be used by overlaying the resource counters selected for real-time data collection during the execution of the test/schedule
Now, let’s try to monitor resource counters across multiple systems to understand the SUT behavior.
We can import and monitor resource monitoring data from multiple sources…
In this example, we will be monitoring counters which are monitored by ITM (IBM Tivoli Monitoring).
By clicking the Run button we start monitoring the selected counters…
As the test got executed, we now see the RPT performance report and the resource monitoring counters.
Part of the RPT reports we have Response vs Time reports
The user can drag and drop counters on top of the existing reports to customize them.
Other useful counters to analyze, other than processor time, are memory usage, disk activity, network activity.
The combination of these counters will help to better understand if there is a lack of resources or if there actually is an application-related problem.
This is the oversall architecture for pulling resource monitoring data from ITM
Steps to determine the cause of the problem:
using performance data
Collect performance data using the DCI
Performance data allows the tester and developer to peer into the behaviour of the application or service at a programmatic level (ie. The method level)
This approach provides solid evidence of where the potential problem is located and increases efficiency in problem determination between the tester and developer, thereby, effectively reducing the time to identify and diagnose performance problems.
Real-time data collection using the Data Collection Infrastructure (DCI) for ARM-instrumented applications
Application Response Measurement (ARM) is an open standard from OpenGroup
Now that we know we have a performance problem with the test case, let’s enable ARM monitoring for the transaction decomposition performance…
…and enable it as well in the performance schedule.
And now launch the test schedule in transaction monitoring mode.
Now, the user gets the UML sequence diagram which shows the transaction decomposition associated with the test cases inside the test schedule.
End to end transaction UML view with a performance problem for the problematic transaction. On the scale on the right, the user can see the different shades of red beside each method calls. The darker red on the scale, the more time is spent in this location in the problematic transaction. The user can then jump to the source code from this view.
The user can the switch to the Method details views to view as well the time spent in the descendant methods called from this transaction. The user can then jump to the source code from this view.
As well, we provide method statistics views showing to the user how many times a methods was called and what is the average time spent in this method. The user can then jump to the source code from this view.
Finally, the user can go to source code from the previous views to identify performance problem
Workbench
collect performance and resource monitoring data
import data from historical systems
Data Collection Infrastructure (DCI)
Data Collection Agent
IBM Tivoli Data Collection
IBM Remote Agent Controller
Workflow:
The RPT client is the edge of the transaction (where ARM transactions are first generated). Therefore, all page are transacted from the presentation system, which behaves similar to a browser, by placing HTTP requests for all page elements (belonging to their respective pages).
For all HTTP Requests, RPT adds the ARM_CORRELATOR header attribute to the request.
Multiple RPT clients can generate the same load for the same transactions and they will be collected by the ARM engine independently.
Anything downstream from the RPT client (for example a webserver or J2EE appserver) must be instrumented with J2EE Monitoring Component (or Tivoli Data Collection, which consists of probes or hooks) that knows how to detect the ARM_CORRELATOR header attribute and then make the appropriate ARM calls to the ARM engine. Once the probe makes the ARM call, the transactions are all treated the same by the ARM engine.
In order to see “into” the application (at the method level) when the RPT test/schedule is executed, the Execution Environments involved muse be instrumented, just the same as one would do with TMTP or the IPOT DCI, so that the RPT HTTP Requests can be correlated with the AppServer’s behavious.
Caspian: Architecture will change so that IPOT is not dependent on RPT, however, RPT becomes dependent on IPOT. Allowing other products to leverage the toolkit.
IPOT Agent is reported the ARM Events. ARM Events are collected and organized into a transactional hierarchy, from the root transaction to all of its sub-transactions. This hierarchy is then converted into TPTP Trace events and sent to the Presentation System.
Need DCI installed on the Presentation System if trying to Profile J2EE Performance Metrics while executing a Performance Schedule or Test.
Steps to determine the cause of the problem:
using performance data
Collect performance data using the DCI
Performance data allows the tester and developer to peer into the behaviour of the application or service at a programmatic level (ie. The method level)
This approach provides solid evidence of where the potential problem is located and increases efficiency in problem determination between the tester and developer, thereby, effectively reducing the time to identify and diagnose performance problems.
Real-time data collection using the Data Collection Infrastructure (DCI) for ARM-instrumented applications
Application Response Measurement (ARM) is an open standard from OpenGroup
In a large scale environment, we use TMTP, TCAMfWebsphere and TCAMfRTT to monitor the transactions. Now, let’s import data from these Tivoli products and analyze it in the RPT workbench.
The user needs to specify the user id and password to pull the data from the Tivoli Management Server.
And then specify when the failure occurred on the system.
Then the user picks up the TCAM/TMTP policy which triggered the error.
Then, the user select the server where the transaction was initiated from.
Then the user picks up the transaction instance that violated the policy.
The user can now investigate the transaction performance problem inside the RPT tooling.
End to end transaction UML view with a performance problem for the problematic transaction. On the scale on the right, the user can see the different shades of red beside each method calls. The darker red on the scale, the more time is spent in this location in the problematic transaction. The user can then jump to the source code from this view.
This is the architecture overview for pulling the data from the TCAM products.
The user may also import historical resource monitoring data from ITM.
The user specifies the userid and password to access ITM server.
The user then specifies the time range for the data to import.
The user then picks up the machines…
and resource counters to import data for.
Then the user can look at the imported data inside the RPT workbench.
Here is the list of IBM products that IPOT can pull data from…