This document discusses performance assurance for packaged applications such as Oracle Enterprise Performance Management. It outlines the key steps of performance assurance: defining requirements, designing to best practices, verifying performance during development, testing, and monitoring production. Performance testing is recommended to mitigate risk, though it requires realistic loads and careful scripting. A top-down approach is advocated for performance troubleshooting: examine hardware, configuration, design, and logs before suspecting product issues. Examples of common performance problems and their solutions are also provided.
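The top-down troubleshooting order described above can be sketched as a simple ordered checklist: rule out each layer before suspecting a product defect. This is a minimal illustrative sketch; the layer names follow the abstract, but the check questions are assumptions, not taken from the document.

```python
# Top-down troubleshooting sketch: examine hardware, configuration,
# design, and logs (in that order) before suspecting a product issue.
# The check questions are illustrative placeholders.
TOP_DOWN_CHECKS = [
    ("hardware", "CPU, memory, disk and network utilization within capacity?"),
    ("configuration", "Heap sizes, pool sizes and timeouts set per vendor guidance?"),
    ("design", "Workload, data volumes and customizations within best practices?"),
    ("logs", "Errors, retries or long-running operations in application logs?"),
]

def next_suspect(results):
    """Given {layer: True if that layer checked out healthy}, return the
    first layer still to investigate, or 'product' if every layer is clean."""
    for layer, _question in TOP_DOWN_CHECKS:
        if not results.get(layer, False):
            return layer
    return "product"
```

For example, if hardware checks out but configuration has not been verified, the next suspect is the configuration layer, not the product itself.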
Load Testing Best Practices: Application complexity is increasing, and the requirements for web performance are growing ever more stringent. Learn about the three major types of load testing, determine which you need, and learn how to conduct them.
The most important aspect of releasing any product or application to the market is delivering a satisfying user experience, and that can only be achieved when the application performs impeccably. To help you understand how to ensure this, this presentation sheds light on the essential elements of performance testing. To learn more about software performance testing, app performance testing, web performance testing, website load testing, and performance tuning, go through this presentation and gear up for the upcoming ones.
Load and Performance Testing for J2EE - Testing, monitoring and reporting usi... (Alexandru Ersenie)
A presentation of how load and performance testing can be done in the J2EE world using open source tools
You will find topics such as performance basics (scope, metrics, factors affecting performance, generating load, performance reports), monitoring (monitoring types, active and reactive monitoring, CPU, garbage collection, heap, and other monitoring), and tools (open-source tools for monitoring, reporting, and analysis).
Holiday Readiness: Best Practices for Successful Holiday Readiness Testing (Apica)
Best Practices for Successful Holiday Readiness Testing: Are you already thinking of, and planning for Black Friday? Learn which load tests to use and why to load test early and often so that you are prepared for the holidays.
Performance Testing And Its Type | Benefits Of Performance Testing (KostCare)
Performance testing is, in general, a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
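A minimal sketch of measuring responsiveness under a workload follows. It is illustrative only: `do_request` is a hypothetical stand-in for a real HTTP call, and the percentile calculation is a simple sorted-index approximation rather than any specific tool's method.

```python
# Measure responsiveness (latency percentiles) under a concurrent workload.
# do_request() is a placeholder; a real test would issue an HTTP request.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def do_request():
    """Stand-in for a real request; simulates ~10 ms of server work."""
    time.sleep(0.01)

def timed_request(_):
    start = time.perf_counter()
    do_request()
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

def run_workload(concurrent_users=10, requests_per_user=20):
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = sorted(pool.map(timed_request, range(total)))
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1],
        "max_ms": latencies[-1],
    }
```

Responsiveness is then read off as the median and 95th-percentile latency; stability would be assessed by running the same workload for an extended period and watching for drift in these numbers.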
Q-Track is an enterprise task-management solution that helps you collaborate effectively within organizations and execute tasks on time and proactively. It helps managers track delegated tasks and helps their subordinates close them by exception. The solution integrates seamlessly with MS Outlook and MS Project.
Smart manufacturing is being implemented on a global scale within Sonoco Products' industrial group. Stone Technologies and Sonoco Products will explain how FactoryTalk® Metrics is being used to achieve world-class performance and internal corporate production goals. The solution has been rolled out to over 30 plants and is expected to be deployed globally. Stone Technologies and Sonoco Products will also discuss their experience integrating production data into their corporate-level Oracle solution. The automated production data is pushed to a central data warehouse, where corporate IT personnel have created a powerful analytics layer providing valuable production data across the enterprise.
GLOC 2018: Automation or How We Eliminated Manual EBS R12.2 Upgrades and Beca... (ennVee TechnoGroup Inc)
ennVee's presentation from the 2018 Great Lakes Oracle Conference in Cleveland, Ohio. The session was hosted by Joe Bong (Vice President) and Veera Venugopal (Head of Delivery). Topics include automation best practices for upgrading to Oracle E-Business Suite (EBS) R12.2 and the "Voice of the Customer": a collection of hundreds of survey responses from IT leaders that have upgraded, or plan to upgrade, to R12.2, covering top challenges, objectives, timelines, and more.
In today's world, it's critical to have visibility into every task delegated to another employee, and even more important to collaborate effectively to achieve an organization's common goals. Q-Track task-management software helps you do exactly that.
SDLC
PDLC
Software Development Life Cycle
Program Development Life Cycle
Iterative model
Advantages of Iterative model
Disadvantages of Iterative model
When to use iterative model
Spiral Model
Advantages of Spiral model
Disadvantages of Spiral model
When to use Spiral model
Role of Management in Software Development
Today’s complex products, especially IoT-enabled devices, increasingly require the integration of hardware and software development processes. The parallel development of hardware, software and service components demands the convergence of Application and Product Lifecycle Management software platforms.
Watch this webinar recording to learn more about integrating ALM and PLM in the development of complex products. This video also showcases codeBeamer’s approach to this challenge: integrating data and processes via Business Process Management. Watch the recording to learn how codeBeamer enables you to manage the entire product (hardware and software) development lifecycle from a single platform.
Isha Training Solutions Presents "Performance Engineering" course.
For course content and other information, please follow the link below:
http://ishatrainingsolutions.org/performance-engineering/
Live project support is provided for any performance testing tool and any protocol, all under one roof. Call or WhatsApp me at +91-8019952427.
-----------------------------------------------------------------------------------------------------------------------------------
Other Courses Offered by ISHA
1) Performance Engineering Course
http://ishatrainingsolutions.org/performance-engineering/
2) Cloud Performance Engineering in DevOps - Core to Master Level http://ishatrainingsolutions.org/cloud-performance-engineering-devops-the-complete-course/
3) AppDynamics
http://ishatrainingsolutions.org/app-dynamics/
4) Dynatrace
http://ishatrainingsolutions.org/dynatrace-training/
5) Jmeter Core to Master Level
http://ishatrainingsolutions.org/jmeter-core-to-master-level-course/
6) Performance Testing using LoadRunner
http://ishatrainingsolutions.org/microfocus-loadrunner/
7) Advanced LoadRunner
http://ishatrainingsolutions.org/advanced-scripting/
8) Web Services Performance Testing using LoadRunner http://ishatrainingsolutions.org/performance-testing-of-webservices-using-loadrunner-recorded-videos/
9) SAPGUI protocol - Performance Testing for SAP applications Using LoadRunner http://ishatrainingsolutions.org/loadrunner-sap-web-protocol/
10) TruClient Protocol Using LoadRunner
http://ishatrainingsolutions.org/true-client-protocol/
11) Mobile Performance Testing using LoadRunner and JMeter http://ishatrainingsolutions.org/mobile-performance-testing-using-loadrunner/
12) Performance Testing using NeoLoad
http://ishatrainingsolutions.org/performance-testing-using-neoload/
13) Splunk
http://ishatrainingsolutions.org/splunk-training/
14) Selenium
http://ishatrainingsolutions.org/2792-2/
*********************************
For further details, please contact me.
Contact: Kumar Gupta
Call: +91-8019952427
WhatsApp: +91-8019952427
kgupta.testingtraining@gmail.com
10 Ways to Better Application-Centric Service Management (Linh Nguyen)
Many IT organizations suffer from the nagging problems of Availability and Performance Management. In this presentation we will detail 10 Ways to Better Application-Centric Service Management, particularly with SAP environments.
July webinar | How to Handle the Holiday Retail Rush with Agile Performance T... (Apica)
In this Q&A-style webinar, you'll learn:
1. How and why to load test at least three months prior to the holidays
2. How to integrate CI/CD into your holiday load testing
3. How to determine and evaluate load curves
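A load curve, as mentioned in point 3, describes how many virtual users are active at each moment of a test. A stepped ramp is one common shape; this is a hedged sketch with illustrative names, not the definition used by any particular tool.

```python
# Build a stepped load curve: hold a user count for a fixed number of
# minutes, then step it up, repeating for a given number of steps.
def step_load_curve(start_users, step_users, step_minutes, steps):
    """Return a list of (minute, active_users) pairs for a stepped ramp."""
    curve = []
    users = start_users
    minute = 0
    for _ in range(steps):
        for _ in range(step_minutes):
            curve.append((minute, users))
            minute += 1
        users += step_users  # step the load up for the next interval
    return curve
```

Evaluating the curve then means plotting response time and error rate against the active-user count at each minute, and looking for the step at which they begin to degrade.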
How to implement an enterprise system: tips from my experience, touching on project management, preparation, building the customer-specific implementation, preparing the roll-out, and deployment and support.
In this presentation which was delivered to testers in Manchester, I help would-be performance testers to get started in performance testing. Drawing on my experiences as a performance tester and test manager, I explain the principles of performance testing and highlight some of the pitfalls.
Integration strategies best practices - MuleSoft meetup April 2018 (Rohan Rasane)
Abstract for the MuleSoft meetup in April 2018.
If your organization is in one of the following phases of integration:
1. Looking to integrate or connect applications through a platform dedicated to integrations
2. Already has an integration platform, but the integrations are point-to-point or highly unorganized and hard to control
Then this session will help you identify and explore ways to build highly scalable integrations. It will also cover the best practices to follow while maintaining the platform, with a sneak peek at the resiliency patterns I love - circuit breaker and bulkheads - an inspiration from Netflix OSS.
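The circuit-breaker pattern mentioned above can be illustrated with a minimal sketch: after repeated failures the breaker "opens" and fails fast instead of hammering a struggling downstream service, then probes it again after a timeout. This is a teaching sketch, not the session's own code; production systems would typically use a hardened library.

```python
# Minimal circuit breaker: CLOSED (normal) -> OPEN (fail fast) after
# repeated failures -> HALF_OPEN (single probe) after a reset timeout.
import time

class CircuitBreaker:
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = self.CLOSED
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.state == self.OPEN:
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = self.HALF_OPEN  # allow one probe request
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold or self.state == self.HALF_OPEN:
                self.state = self.OPEN
                self.opened_at = time.monotonic()
            raise
        # Success: reset the breaker.
        self.failures = 0
        self.state = self.CLOSED
        return result
```

Bulkheads are the complementary pattern: partitioning resources (for example, separate thread pools per downstream dependency) so one failing integration cannot exhaust capacity needed by the others.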
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Load testing is an important part of the performance engineering process. However, the industry is changing, and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback, and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
The recent revolution in software development - including agile/iterative development, cloud computing, continuous integration, and much more - has opened new opportunities for performance testing and affected its role in performance engineering. For example, early and continuous performance testing is becoming the new norm. However, performance testing in general, and specific performance testing techniques in particular, should be considered in full context: environments, products, teams, issues, goals, budgets, timeframes, risks, etc. The question is not which technique is better, but which technique (or combination of techniques) to use in a particular case - or, in more traditional wording, what the performance testing strategy should be.
Drastic changes in the industry in recent years have significantly expanded the performance testing horizon - agile development and cloud computing probably most of all. Instead of a single way of doing performance testing (with all others considered rather exotic), we now have a full spectrum of different tests that can be run at different moments, so deciding what and when to test has become a very non-trivial task that depends heavily on the context. We need to create and run different tests mitigating different performance risks.
So the art of performance engineering is to find the best strategy for combining different performance tests and other approaches to mitigate performance risks, optimizing the risk-mitigation-to-cost ratio for, of course, the specific context.
This session will provide insight into the various types of performance risk, which test techniques and practices to use in a specific context to measure and evaluate system performance, and how to interpret the data derived from these tests in order to drive performance engineering excellence.
Load testing with Visual Studio and Azure (Andrew Siemer)
In this presentation we will look at what web performance testing is and the various types of testing that can be performed. We will then dig into Visual Studio 2013 Ultimate to see that the Visual Studio platform is now a real contender in performance testing automation. And we will see how the Visual Studio integration with Visual Studio Online and Azure can take your web performance tests and spin up impressive load tests in a truly useful way.
Alexander Podelko - Context-Driven Performance Testing (Neotys_Partner)
Since its beginning, the Performance Advisory Council has aimed to promote engagement among experts from around the world and to create relevant, value-added content shared between members - and, for Neotys, to strengthen its position as a thought leader in load and performance testing. During this event, 12 participants convened in Chamonix (France) to explore several topics on the minds of today's performance testers, such as DevOps, shift left/right, test automation, blockchain, and artificial intelligence.
Grails has great performance characteristics but as with all full stack frameworks, attention must be paid to optimize performance. In this talk Lari will discuss common missteps that can easily be avoided and share tips and tricks which help profile and tune Grails applications.
Similar to Performance Assurance for Packaged Applications (20)
Multiple Dimensions of Load Testing, CMG 2015 paper (Alexander Podelko)
Continuous Performance Testing: Myths and Realities (Alexander Podelko)
While the development process is moving toward all things continuous, performance testing remains rather a gray area. Some continue to do it in the traditional pre-release fashion; some claim 100% automation and full integration into their continuous process. We have a full spectrum of opinions on what should be done, when, and how with regard to performance. The issue is that context is usually not clearly specified, while context is the main factor here. Depending on context, the approach may (and probably should) be completely different. Full success in a simple (from the performance-testing point of view) environment doesn't mean you can easily replicate it in a difficult one. The speaker discusses the issues of making performance testing continuous in detail, illustrating them with personal experience where possible.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14 (Alexander Podelko)
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some considerations about load testing and selecting load-testing tools - as much as could be squeezed into 5 minutes / 20 slides.
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. Still, it is important to see a bigger picture beyond stereotypical last-moment load testing. There are different ways to create load; a single approach may not work in all situations. Many tools allow different ways of recording/playback and programming. This session discusses the pros and cons of each approach, when each can be used, and which tool features are needed to support it.
Performance Requirements: the Backbone of the Performance Engineering Process (Alexander Podelko)
Performance requirements should be tracked from a system's inception through its whole lifecycle, including design, development, testing, operations, and maintenance. They are the backbone of the performance engineering process. However, different groups of people are involved at each stage, and each uses its own vision, terminology, metrics, and tools, which makes the subject confusing when you go into details. The presentation discusses existing issues and approaches in their relationship to the performance engineering process.
3. Oracle Enterprise Performance Management
• An integrated suite of applications
• The components are tightly integrated, so from the end-user
point of view it may be difficult to tell which components and
data sources are involved
– Especially for people not deeply familiar with EPM, such as
administrators or performance testers
• A good example of packaged business applications to
discuss performance assurance and troubleshooting
Disclaimer: The views expressed here are personal views only and do not necessarily represent those of authors’ current or
previous employers. All brands and trademarks mentioned are the property of their owners.
5. Performance Assurance
• EPM products are thoroughly tested, but they may be
used very differently
– Think about Oracle Database: It is still possible to create a
slow database in spite of the fact that the Oracle Database
software is highly optimized for performance.
• Performance Assurance
– Ongoing performance risk mitigation during the whole system
lifecycle
6. Performance Assurance Steps
• Define performance requirements
– Number of users, concurrency, what they do
• Design your applications according to best practices
• Verify performance along the way
– Single user with monitoring provides a lot of information
– Use realistic volume of data
• Do necessary tuning / configuration
7. Performance Assurance Steps - Continued
• Do performance testing
– Closely monitor the system in the process
– If results are not satisfactory, go back to tuning or design
• Adjust configuration based on performance testing
results [and re-test]
• Monitor the system in production
– Check if the pattern is the same as in testing
– Check trends
• Do performance testing of major changes and re-designs
8. Performance Requirements
• Workload
– Number of concurrent users
• From existing systems
• Percentage of named users
– What users will do
• Throughput (how many reports, requests, etc)
• What components they use
• Performance Metrics
– Response times
– Resource utilization
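The workload numbers above can be cross-checked with Little's Law: average concurrency equals throughput multiplied by the average time each request (plus think time) keeps a user busy. A minimal Python sketch; the numbers are purely illustrative, not taken from any EPM sizing guide.

```python
# Little's Law sketch: N = X * (R + Z), where X is throughput,
# R is response time, and Z is think time per user cycle.

def concurrent_users(throughput_per_sec: float, avg_cycle_sec: float) -> float:
    """Average number of concurrent users needed to sustain the throughput."""
    return throughput_per_sec * avg_cycle_sec

# Example: 2 requests/sec, each user cycle = 3 s response + 27 s think time.
n = concurrent_users(2.0, 3.0 + 27.0)   # -> 60.0 concurrent users
```

This also works in reverse: given a number of named users and an assumed concurrency percentage, it gives the throughput the system must sustain.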
9. Application Design
• Application design impacts performance drastically
• Adhere to best practices to ensure application
performance
– Should be applicable to your context
– You don’t need to follow them blindly, but it would be better to
have reasons why you need to deviate and do a Proof of
Concept to check how it will perform
– First of all, reasonable number of dimensions, number of
members, size of forms, depth of hierarchy, etc.
10. Verify Performance Along the Way
• If you see a performance issue with one user, it probably
would be much worse for multiple users
– There may still be exceptions related to latency or caching
– Tuning is not usually beneficial for single-user issues
• Except, for example, very large objects
– Hardware upgrade is usually not beneficial for single-user issues
• Except, for example, cpu speed
– This means that single-user results in the development environment
may be representative even if the hardware used is significantly less
powerful
– If one user already consumes a lot of resources, it won’t get any
better with multiple users
11. Tuning
• If you have more than a dozen concurrent users,
you may need to do some tuning
– Some defaults are chosen to conserve resources
– Increase max Java heap size for heavily used components
– Increase Essbase index and data cache sizes
• It is not recommended to apply every existing tuning
recommendation without understanding what it means
and testing it under multi-user load
– Many recommendations are for specific cases only and may
even degrade performance otherwise.
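As a hedged illustration of the kind of change involved: increasing the maximum Java heap means editing the JVM start arguments of the component. The variable name and values below are hypothetical; the actual file and property depend on the EPM version and platform.

```shell
# Hypothetical JVM arguments for a heavily used EPM Java web application.
# Values must be validated under multi-user load; on 32-bit Windows the
# process address space (2-3 GB) caps how high -Xmx can go.
JAVA_OPTS="-Xms1024m -Xmx1024m"
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses, at the cost of making actual heap usage harder to observe from process memory alone.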
12. Time Considerations
• If any long-running, resource-consuming tasks are
needed, schedule them for the time of minimal load
– Such as large report books, consolidations, calculations
• EPM activities are usually tied to the financial cycle
– As a part of closing quarter, year, etc.
– Heavy activity during some period and low during others
• Some EPM activities depend on others
– Workload mix changes depending on the place in the
financial cycle
– May be several different workload profiles
13. Performance Testing
• EPM products are tested for performance
– But it doesn’t guarantee performance of a specific application
• Every application is different
• Performance testing of your application is a way to
alleviate performance risk
– Highly recommended for large installations
• Performance testing of EPM products is complex
– If done improperly, it may easily lead to wrong conclusions
– Make sure that everything is correlated/parameterized
properly if not using Oracle consulting
14. Creating Realistic Load
• An important part of performance testing is creating
a realistic load
• The number of virtual users
• What users do
• Different user names, different POVs, etc. (script
parameterization)
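The parameterization idea can be sketched outside any particular load tool: each virtual user draws a distinct user name and POV so that no two users hit the same cached data. A Python sketch with made-up data; real tools (LoadRunner, JMeter) provide CSV parameterization built in.

```python
import csv
import io

# Stand-in for the tool's parameter file: one row per user/POV combination.
# User names and entities below are hypothetical.
param_csv = io.StringIO(
    "user,entity,year\n"
    "planner01,Sales_East,FY11\n"
    "planner02,Sales_West,FY11\n"
    "planner03,Sales_North,FY12\n"
)
rows = list(csv.DictReader(param_csv))

def params_for_vuser(i: int) -> dict:
    """Round-robin assignment: virtual user i gets row i mod len(rows),
    so concurrent users work with different data sets."""
    return rows[i % len(rows)]
```

"Different enough" is the key property: the rows must spread users across data so that neither caching nor write-concurrency effects distort the results.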
15. Scripting Challenges
• Multiple variables to be correlated
– Including SSO token, repository token, etc.
– Specific for every component
• Load testing tools may not report errors when
correlation or parameterization is incorrect, but the
system behavior would be unpredictable
– Use other ways to check if the script works, such as checking
for specific server response, application logs, system state.
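Checking for a specific server response can be as simple as scanning the body for a marker that only appears on success, since an HTTP 200 status alone proves nothing. A hedged Python sketch; the marker strings are hypothetical, not actual EPM responses.

```python
def response_ok(body: str) -> bool:
    """Treat a response as successful only if the expected content marker is
    present and no known error marker appears. An HTTP 200 carrying a login
    page or an error message would otherwise pass silently."""
    error_markers = ("error", "exception", "login")   # hypothetical markers
    success_marker = "xmldatagrid"                    # hypothetical marker
    low = body.lower()
    if any(m in low for m in error_markers):
        return False
    return success_marker in low
```

The same check belongs in the load script itself (e.g. web_reg_find in LoadRunner), so that a virtual user failing mid-scenario is counted as an error, not as a fast response.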
16. Sizing and Capacity Planning
• The Installation Start Here document provides typical
configurations for some products for 100, 500, and
1000 users with 35% concurrency
• Many factors are difficult to formalize
– Use Oracle services
• Benchmarking documents are usually not good for
sizing
– The benchmarking application may be very different from yours
• A more rigorous approach is to use modeling
– Based on the amount of resources needed per transaction for
your application
– Done as a service by Oracle consultants
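The modeling approach above boils down to multiplying the per-transaction resource cost by the expected throughput and dividing by what one server can deliver. A toy Python version with illustrative numbers; a real sizing exercise measures the per-transaction cost for your specific application.

```python
import math

def servers_needed(cpu_sec_per_txn: float, txn_per_sec: float,
                   cores_per_server: int, target_utilization: float = 0.6) -> int:
    """Estimate server count from measured CPU seconds per transaction.
    Keeping target utilization well below 100% leaves headroom for peaks."""
    cpu_demand = cpu_sec_per_txn * txn_per_sec          # CPU-seconds per second
    capacity = cores_per_server * target_utilization    # usable cores per server
    return math.ceil(cpu_demand / capacity)

# Example: 0.8 CPU-s per report, 10 reports/s, 8-core servers at 60% target:
# demand = 8 core-s/s, capacity = 4.8 -> 2 servers.
```

The same arithmetic can be repeated per tier (web, application, database), since each tier has its own per-transaction cost.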
18. The System is Slow
• Typical knee-jerk reactions are usually not effective
– Add more hardware
• May help only if the bottleneck is lack of the specific
resource
• Even if more CPU power is needed, it is difficult to guess
where and how much without analysis
– Submitting a support Service Request
• Product defects are rather rare compared with configuration
and application design issues
• Nothing can be figured out until the problem is analyzed
and narrowed down
– Submitting a vague SR may slow down investigation
19. What Can Be a Problem?
• Lack of hardware resources
– May be network bandwidth, CPU resources, memory, I/O
• Tuning / configuration
– On all tiers, including network, operating system, storage,
and application
• Application design
– May be any part including metadata, forms, rules, etc.
• Product issue
– Rare compared with the issues above
20. Performance Troubleshooting
• Use a top-down approach
• Investigate step by step, narrowing the problem
• What exactly and where exactly is slow?
– Is it slow for one user?
– Does it change with time?
– What components are active (see monitoring results)?
– Do you see slowness in back end?
• For example, in logs
– What activity or data is it related to?
• For example, is it related to a specific web form?
21. Monitoring
• Ongoing monitoring of all components
– A way to check system health
– Input for future changes / capacity planning
– Input for performance troubleshooting
• May be done using most enterprise monitoring tools
– May be done with OS-level tools, although it is usually not the
best choice for ongoing production monitoring.
• What to monitor?
– System-level metrics (CPU, memory, I/O, network)
– Process-level metrics for major components (CPU, memory)
– Database metrics
22. Component Diagrams
• Understanding of how requests flow through the
system is very important for all performance-related
questions:
– Distributing components over hardware
– Monitoring
– Performance testing
– Performance troubleshooting
23. EPM Requests Flow
• The collaboration between components is really
sophisticated, but not all of them are equally critical
from the performance point of view
– Focus on high-concurrency user requests
– Simplified component diagrams are presented here to
highlight high-concurrency components
– See the “Installation Start Here” manual for more detailed
diagrams
24. HFM Components - Simplified
• Clients: Browser, SmartView, Win Client
• Middle tier: OHS, Foundation, HFM Web Server, HFM App Server
• Data tier: Relational DB
25. Planning Components - Simplified
• Clients: Browser, SmartView
• Middle tier: OHS, Foundation, Planning, Provider Services
• Data tier: Essbase, Relational DB
26. Financial Reporting - Simplified
• Client: Browser
• Middle tier: OHS, Foundation, R&A Web Server, FR Web App,
FR Print Server, R&A Services
• Data sources: Planning, Essbase, HFM, Relational DB
27. Mapping to System Processes
• Each component may be mapped to one or several system
processes
• Most Web applications are represented by HyS9<name>
processes on Windows and Java processes on *unix
– Use ps -ef | grep <name> on *unix to find the PID
• Key processes for HFM are HsvDataSource (one per
application*) for the App server and w3wp for the Web server
• The key process for Essbase is ESSSVR (one per
application*)
* “Application”, in traditional EPM terminology, refers to a specific
implementation within the given product
28. Essbase Application Logs
[Fri May 13 10:56:09 2011]Local/gsi1/Plan1/admin/6844/Info(1003037)
Data Load Updated [30507] cells
[Fri May 13 10:56:09 2011]Local/gsi1/Plan1/admin/6844/Info(1003051)
Data Load Elapsed Time for [SQL] with [AIFData.rul] : [6.516] seconds
[Fri May 13 10:09:05 2011]Local/gsi1/Plan1/admin/6500/Info(1020055)
Spreadsheet Extractor Elapsed Time : [0.157] seconds
[Fri May 13 10:09:05 2011]Local/gsi1/Plan1/admin/6500/Info(1020082)
Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [0] non-Dyn.Calc.Cache : [0]
[Fri May 13 10:12:41 2011]Local/gsi1/Plan1/admin/5308/Info(1020055)
Spreadsheet Extractor Elapsed Time : [0.031] seconds
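The 'Elapsed Time' records above can be harvested automatically, which is handy for trending during monitoring or load tests. A small Python scan; the format is taken from the sample log lines shown on this slide.

```python
import re

# Matches lines such as:
#   Data Load Elapsed Time for [SQL] with [AIFData.rul] : [6.516] seconds
#   Spreadsheet Extractor Elapsed Time : [0.157] seconds
ELAPSED = re.compile(r"Elapsed Time.*?\[([\d.]+)\] seconds")

def elapsed_times(log_text: str) -> list:
    """Return all elapsed-time values (in seconds) found in an Essbase
    application log, in order of appearance."""
    return [float(m.group(1)) for m in ELAPSED.finditer(log_text)]

sample = (
    "Data Load Elapsed Time for [SQL] with [AIFData.rul] : [6.516] seconds\n"
    "Spreadsheet Extractor Elapsed Time : [0.157] seconds\n"
)
```

Feeding the extracted values into a histogram or time series makes slow transactions stand out without reading the log line by line.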
30. Service Request
• If you still believe that it is a product issue, submit an SR
with the full results of your analysis, monitoring details,
and logs
– The statement “the system is slow” is not enough
• There may be additional tools for investigation, such
as debug and profiling flags, but they need to be driven
by Oracle support
32. Exemplary Performance Issues
• Let’s discuss several typical performance problems
– Each has a recognizable pattern
– Happens often enough to be recognized
33. Examples: CPU Issues
• Using [almost] all available CPU
• May indicate lack of hardware resources
– Add more servers for the component
– Verify that adding hardware will solve the problem
34. Example: Dynamic Members in Essbase
• High Essbase CPU during concurrent reading
• Dynamic members are very useful in some cases, but
they are recalculated each time they are retrieved
• That effectively means they shouldn’t be used if
retrieved concurrently
– Only in cases when they are retrieved occasionally
• Solution: make concurrently retrieved dynamic members
stored, or remove them from concurrent activities such
as reports / web forms
35. Examples: Memory Issues
• Servers should have enough memory for all components
– Using all machine memory (paging, swapping) kills performance
• 32-bit application process memory is limited
– Windows 2GB (3GB in some cases)
– Memory-hungry applications may benefit from 64-bit
• Java application memory consumption is defined by JVM
settings
– Max heap size –Xmx
– Monitor actual heap size
36. Examples: I/O Issues
• Planning writes back – the same requirements as for
OLTP systems
• Relational databases could have high I/O
– HFM, FDM, ERPI
• Striping
– The best RAID performance comes from striping data across
multiple drives (RAID-0), which may be combined with
mirrored disks (RAID 0+1 or RAID-10)
– Avoid RAID-5
• Separate index files, data files, and control files onto
different I/O channels if possible
37. Examples: Network Issues
• EPM provides rich web interface improving user
experience
– It may not perform well over networks with high
latency or low bandwidth (remote offices)
– It should be checked in every situation where users are not on the
same LAN as the servers
• For example, measure real network bandwidth and
compare it with the network throughput generated by a user
– If bandwidth is the issue, software/hardware HTTP compression may
alleviate the problem
• Another solution may be using Citrix/Remote Desktop
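The bandwidth comparison suggested above is simple arithmetic: multiply per-user page weight by the number of remote users and compare against link capacity. An illustrative Python sketch; all numbers are hypothetical and ignore protocol overhead and compression.

```python
def link_utilization(page_kbytes: float, pages_per_user_per_min: float,
                     users: int, link_mbit: float) -> float:
    """Rough fraction of a WAN link consumed by user traffic.
    Values above 1.0 mean the link is saturated."""
    kbytes_per_sec = page_kbytes * pages_per_user_per_min * users / 60.0
    link_kbytes_per_sec = link_mbit * 1000.0 / 8.0   # Mbit/s -> kB/s
    return kbytes_per_sec / link_kbytes_per_sec

# 500 kB pages, 2 pages/min per user, 30 remote users on a 2 Mbit/s link:
u = link_utilization(500, 2, 30, 2)   # -> 2.0, i.e. 2x oversubscribed
```

When the result is near or above 1.0, HTTP compression or a thin-client approach (Citrix/Remote Desktop) becomes worth evaluating.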
38. Scripting Example: HFM Consolidation
• Need a loop to be created in the script
web_custom_request("XMLDataGrid.asp_4",
    "URL=http://{WebSrv}/hfm/Data/XMLDataGrid.asp?Action=PROCMGTEXECUTE"
    "&TaskID={ConsolMode}&Rows=0&ColStart=0&ColEnd=0&SelType=1&Format=JavaScript", ...);
do {
    sleep(3000);
    web_reg_find("Text=1", "SaveCount=abc_count", LAST);
    web_custom_request("XMLDataGrid.asp_5",
        "URL=http://{WebSrv}/hfm/Data/XMLDataGrid.asp?Action=GETCONSOLSTATUS", ...);
} while (strcmp(lr_eval_string("{abc_count}"), "1") == 0);
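The same polling pattern, independent of the load tool: submit the long task, then poll status until done, timing the whole loop so the reported response time is the real consolidation time rather than a single status call. A Python sketch with a simulated status function (the status values are hypothetical).

```python
import time

def wait_for_completion(get_status, poll_interval: float = 0.0,
                        timeout: float = 10.0) -> float:
    """Poll get_status() until it reports 'done'; return total elapsed time.
    Mirrors the HFM script: the timer must wrap the whole polling loop,
    not one GETCONSOLSTATUS request."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_status() == "done":
            return time.monotonic() - start
        time.sleep(poll_interval)
    raise TimeoutError("task did not complete in time")

# Simulated back end: still running for the first two polls.
calls = {"n": 0}
def fake_status() -> str:
    calls["n"] += 1
    return "done" if calls["n"] >= 3 else "running"
```

In a real script the poll interval would be seconds (the HFM example sleeps 3000 ms) and the timeout would allow for consolidations that run for many minutes.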
39. Scripting Example: HFM Web Data Entry Forms
• To parameterize, we need not only department names,
but also department IDs from the repository
web_submit_data("WebFormGenerated.asp",
    "Action=http://hfmtest.us.schp.com/HFM/data/WebFormGenerated.asp?FormName=Tax+QFP",
    ITEMDATA,
    "Name=SubmitType", "Value=1", ENDITEM,
    "Name=FormPOV", "Value=TaxQFP", ENDITEM,
    "Name=FormPOV", "Value=2007", ENDITEM,
    "Name=FormPOV", "Value=Periodic", ENDITEM,
    "Name=FormPOV", "Value={department_name}", ENDITEM,
    "Name=MODVAL_19.2007.50331648.1.{department_id}.14.409.2130706432.4.1.90.0.345",
    "Value=<1.7e+2>;;", ENDITEM, LAST);
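The name-to-id substitution can be prepared before the test run by extracting the mapping from the metadata repository and joining it to the department list in the parameter file. A hypothetical Python sketch; the department names and ids are made up, and the fixed numeric segments are copied from the recorded request above.

```python
# Hypothetical extract from the metadata repository:
# department name -> internal department id (not visible to end users).
department_ids = {
    "Sales_East": 101,
    "Sales_West": 102,
    "Sales_North": 103,
}

def form_field(department_name: str) -> str:
    """Build the parameterized MODVAL-style field name for one department.
    The surrounding numeric segments are fixed per form and taken from a
    recorded request; only the department id varies."""
    dep_id = department_ids[department_name]   # KeyError flags an unknown name
    return f"MODVAL_19.2007.50331648.1.{dep_id}.14.409.2130706432.4.1.90.0.345"
```

Failing loudly on an unknown department is deliberate: with an unparameterized or wrong id, the server accepts the request but silently writes nothing, which is exactly the failure mode the slide warns about.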
40. Summary
• Performance Assurance is ongoing performance risk
mitigation during the whole system lifecycle
– Including design, development, testing, and production
• Performance testing of your application is a way to
alleviate performance risk
• Performance testing of EPM products is not
straightforward
• Use top down approach for performance
troubleshooting
Editor's Notes
Oracle Enterprise Performance Management (EPM) System includes a suite of performance management applications, a suite of business intelligence (BI) applications, a common foundation of BI tools and services, and a variety of datasources – all integrated using Oracle Fusion Middleware.
Performance Assurance for EPM is ongoing performance risk mitigation during the whole system lifecycle. EPM products are thoroughly tested for performance, but performance of specific implementations depends on how they are designed and constructed (metadata, data, forms, grids, rules, etc.- all these artifacts are different for each implementation).
The steps listed are just an outline; some of them are discussed in more detail later in this presentation.
The main point is that all these activities should continue through the whole system lifecycle, and the same performance metrics should be tracked through all steps.
Performance requirements are supposed to be tracked from system inception through the whole system lifecycle, including design, development, testing, operations, and maintenance. However, different groups of people are involved at each stage, using their own vision, terminology, metrics, and tools, which makes the subject confusing when going into details. Throughput is the rate at which incoming requests are completed. It defines the load on the system and is measured in operations per time period — for example, the number of transactions per second or the number of reports per hour. In most cases we are interested in a steady mode, when the number of incoming requests equals the number of processed requests. The number of users doesn’t, by itself, define throughput. Without defining what each user is doing and how intensely (i.e. throughput for one user), the number of users doesn’t make much sense as a measure of load. What users do also defines which components they use and how intensely.
For example, both very deep member hierarchies and flat member hierarchies may cause issues under load. See the documentation and best-practices documents for details for specific applications.
Very large objects (web forms, reports) may require some tuning, like increasing the JVM heap size, even for one user. A hardware upgrade (with the exception of CPU speed) is usually not beneficial for single-user issues — assuming there are no inherent issues with the hardware configuration, such as memory so small that the system starts paging even with one user.
Multiple tuning documents are available and should be checked for details. For example: Essbase Database Administrator's Guide, “Optimizing Essbase”; Hyperion Financial Management (HFM) Performance Tuning Guide, Fusion Edition (Doc ID 1083460.1).
In the case of long-running, resource-consuming tasks, it may be more efficient to simply schedule them for the time of minimal load instead of trying to tune and optimize them to run in parallel with high-concurrency load.
It is impossible to predict performance of your application without at least some performance testing.
Running multiple users hitting the same set of data (with same Point of View, POV) is an easy way to get misleading results. If it is for reporting, the data could be completely cached and we get much better results than in production. If it is, for example, for web data entry forms, it could cause concurrency issues and we get much worse results than in production. So scripts should be parameterized (fixed or recorded data should be replaced with values from a list of possible choices) so that each user uses a proper set of data. The term “proper” here means different enough to avoid problems with caching and concurrency, which is specific for the system, data, and test requirements.
Unfortunately, a lack of error messages during a load test does not mean that the system worked correctly. A very important part of load testing is workload verification. We should be sure that the applied workload is doing what it is supposed to do and that all errors are caught and logged. It can be done directly by analyzing server responses or, in cases when this is impossible, indirectly. For example, by analyzing the application log or database for the existence of particular entries.
The suggested “typical” configurations are for average applications designed according to best practices. Since performance heavily depends on the way applications are implemented, it is difficult to properly size applications that are unique in one or more ways (and many are) without collecting at least some performance information.
Investigate before acting. “Shooting in the dark” rarely helps, but adds frustration.
There may be many reasons for bad performance, including lack of hardware resources, inadequate tuning or configuration, issues with custom application design, or even an issue with the product itself (which is relatively rare). And, of course, it may be a combination of issues.
One complication is that several performance issues may disguise each other. This makes investigation more difficult, but there is still no other way than to identify and fix every issue one by one. No magic bullets here.
Monitoring may be done with OS-level tools (such as Performance Monitor for Windows and vmstat, ps, sar for UNIX), although that is usually not the best choice for ongoing production monitoring. Things to monitor: system-level resource utilization metrics, process-level metrics for key processes, and database metrics.
Understanding which component does what is very important. During performance testing, for example, you need to know which components to pay attention to. And, vice versa, seeing activity on a component during monitoring, you may guess what kind of workload causes it.
This doesn’t mean that other components never have performance issues — it just means that they are used mostly by a few users or for one-time kinds of activities, usually with low concurrency. Due to time limitations, only the highest-concurrency products and paths are discussed. The presentation mainly covers the products typically having the highest concurrency in most EPM implementations: Hyperion Planning, Hyperion Financial Management, Hyperion Essbase, and reporting solutions (Hyperion Financial Reporting and Hyperion SmartView). A detailed discussion of even a single product would hardly fit a single presentation timeframe, so these products are mentioned here as examples to illustrate the advocated approaches. Further details can be found in manuals and product-specific documents. More information is in the Component Architecture documents at http://www.oracle.com/technetwork/middleware/bi-foundation/resource-library-090986.html
This is a simplified HFM component diagram for the components and flows usually involved in high-concurrency transactions. The components needing the most attention from the performance point of view are highlighted with yellow and red glow. The choice of components and highlighting is based on the author's personal experience only and was simplified to fit presentation slides; other components may be important from a performance point of view too. OHS stands for Oracle HTTP Server. *Foundation consisted of two components, Shared Services and Workspace, before version 11.1.2.
The main components for Planning from the performance point of view are the Planning Web application (a J2EE application) and Essbase as its main datastore. The relational database is used mostly as the repository, so it is usually not a bottleneck.
The main components here from the performance point of view are the Financial Reporting Web application and the data sources. To illustrate the importance of understanding request flow: the Financial Reporting Print Server is used only for PDF printing, so it is one of the most important components to monitor if PDF printing is involved, and completely irrelevant if there is no PDF printing. *There were three components (Financial Reporting Web application server, Report Server, and Scheduler Server — the last two being standalone Java applications) instead of a single Financial Reporting Web application server before version 11.1.2.
Each component may be mapped to one or several system processes. Most Web applications are represented by HyS9<name> processes on Windows and Java processes on *unix; use ps -ef | grep <name> on *unix to find the PID for a specific component. The key process for the HFM application server is HsvDataSource, and for Essbase it is ESSSVR. One such process is spawned per application, so there may be multiple of them (while the orchestrating HsxServer and ESSBASE processes, respectively, usually don't use much resource). The key process for the HFM Web server is w3wp. A combination of all artifacts — metadata, data, forms, rules, etc. — is traditionally referred to in EPM as an application. This creates some terminological confusion: the product itself may be called an application, and one specific implementation inside that product is also called an application. Talking about performance assurance in this presentation, we usually mean an implementation for the given product.
Essbase application logs provide timing for all transactions. Look for ‘Elapsed Time’ records.
Start and end times for many HFM tasks may be found in the Task Audit (data retrieval only for Financial Reporting) in the most convenient form. In the logs there are separate records for task start and task end.
The more an issue is investigated and narrowed down, the better the chances that support will be able to help.
Many issues have a very recognizable pattern and happen often enough to be worth knowing about.
Verify that adding hardware will solve the problem. For example, if the server is maxed out with 150 users and you need to support 200, there is a good chance that adding a second server will solve the problem (to be sure, it needs to be tested). However, if the server is maxed out with 10 users and you need to support 200, it is better to revisit design and tuning; adding hardware doesn’t look like a good option.
Dynamic members are an example of an issue that can’t be found without a multi-user workload. They may be fine with one user and expose themselves only under concurrent load.
To investigate JVM memory issues, in most cases you need to monitor the actual heap size (which usually requires additional tools; some come with application servers). In some cases Java process memory may be monitored if the initial (-Xms) and maximum (-Xmx) heap sizes are set to different values, but the results may be obscured by the way the OS manages memory.
HTTP compression adds overhead, so it may not be a good solution for LAN users.
What each request does is defined by the ?Action= part. In some contexts/versions, recording captures multiple GETCONSOLSTATUS requests; the number recorded depends on the processing time. If you play back such a script, it works as follows: the script submits the consolidation in the EXECUTE request and then calls GETCONSOLSTATUS three times. If we put a timer around these requests, the response time will be almost instantaneous, while in reality the consolidation may take many minutes or even hours (yes, this is a good example where people may actually be happy with a one-hour response time in a Web application). If we have several iterations in the script, we will submit several consolidations, which continue to work in the background competing for the same data, while we report sub-second response times. Consolidation scripts therefore require an explicit loop around GETCONSOLSTATUS to catch the end of the consolidation.
Another example is HFM Web data entry forms. To parameterize such a script, we need not only department names but also department ids (an internal representation not visible to users, which should be extracted from the metadata repository). If department ids are not parameterized, the script won’t work — although no errors will be reported.