A presentation on how load and performance testing can be done in the J2EE world using open-source tools.
It covers Performance Basics (scope, metrics, factors affecting performance, generating load, performance reports), Monitoring (monitoring types, active and reactive monitoring, CPU, garbage-collection, heap and other monitoring) and Tools (open-source tools for monitoring, reporting and analysis).
Load and Performance Testing for J2EE - Testing, monitoring and reporting using Open Source Tools
1. LOAD & PERFORMANCE TESTING FOR J2EE
Alexandru Ersenie
Senior Load & Performance Test Engineer
Edict eGaming GmbH, Hamburg
alexandru.ersenie@googlemail.com
http://www.alexandru-ersenie.com
TESTING, MONITORING, ANALYSING AND REPORTING USING OPEN SOURCE TOOLS
2. 07/25/12 Alexandru Ersenie - Load & Performance Testing for J2EE
AGENDA
I. PERFORMANCE BASICS: Scope; Metrics; Factors on Performance; Generating Load; Performance Reports
II. MONITORING: Monitoring Types; CPU Monitoring; JVM GC and Heap Monitoring; Other Performance Monitors; Reactive Monitoring/Reporting
III. TOOLS: Monitoring; Reporting; Analysis
Further topics: Sampling/Profiling, 3-Tier Tuning, Virtualization, Testing in the Cloud, Security, Logging, Best Practices, Networking, ...
4. 1.1 DEFINITION AND SCOPE
Non-functional requirements:
- PERFORMANCE: short response time for a given piece of work; high throughput (rate of processing work); low utilization of computing resources; high availability of the computing system or application
- SCALABILITY: handle a growing amount of work; ability to enlarge to accommodate more work
- EFFICIENCY: efficient usage of hardware resources; storage
- RELIABILITY: continue to operate despite errors; mean time between failures
- ROBUSTNESS: recover after failure
5. 1.1 REAL WORLD NUMBERS
RESPONSE TIMES: One second of slower page performance could cost Amazon $1.6 billion in sales each year; 25% of users will leave a site if a page takes more than 4 seconds to load.
(http://performance-testing.org/performance-testing-statistics)
THROUGHPUT: Facebook serves more than 2 million 'Like' buttons per second (June 2010). Facebook held a clear lead in total page views during March 2011, recording about 85 billion, more than three times as many as number two Google, which had about 25.6 billion.
EFFICIENCY: Google uses a compressed, high-performance, proprietary data storage file system, (...) designed to scale into the petabyte range (1000 terabytes).
(http://en.wikipedia.org/wiki/BigTable)
7. 1.2 METRICS
OPTIMISTIC: SLAs are already defined. Test to achieve the defined service level agreements:
- Concurrent users: 10,000 logged-in users; 50,000 visitors
- Transactions/second: 1,000 business transactions/second; 4,000 web requests/second
- Response times: landing page total load time less than 8 seconds for 70% of the users; transaction response time less than 2 seconds for 90% of all transactions
- Storage: maximum 100 bytes per transaction
REAL WORLD PROJECTS: SLAs are not defined. Test to define the service level agreements:
- How many transactions can the system handle?
- How many sessions can the system handle?
- What is the average response size?
- What is the 90% value per business case?
- How much space will a transaction use in the database?
8. 1.2 METRICS
RESPONSE TIMES: the response times users receive when performing specific transactions in the system: minimum/maximum/average response time; 50 to 90 percent lines (percentiles).
THROUGHPUT: how many users the system can handle, and how many transactions it can handle in a unit of time (transactions per second). As the number of users increases, throughput increases as well: the system is busy satisfying user requests. When the system's limit is reached, throughput decreases and waiting occurs, since the system is busy managing itself in order to satisfy all user requests.
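The "90% line" and min/max/average figures above can be computed directly from raw response-time samples. A minimal sketch using the nearest-rank percentile definition; the class and method names are illustrative, not from the deck:

```java
import java.util.Arrays;

// Summarizes response-time samples into the metrics named on this slide:
// min/max/average and an arbitrary percentile ("90% line").
public class ResponseTimeStats {

    // Nearest-rank percentile: smallest sample such that at least p% of
    // samples are <= it. p is in (0, 100].
    public static long percentile(long[] samplesMs, double p) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }

    public static double average(long[] samplesMs) {
        long sum = 0;
        for (long s : samplesMs) sum += s;
        return (double) sum / samplesMs.length;
    }

    public static void main(String[] args) {
        long[] samples = {120, 85, 430, 95, 110, 150, 90, 2000, 105, 130};
        System.out.printf("avg=%.1f ms, 90%% line=%d ms, max=%d ms%n",
                average(samples), percentile(samples, 90),
                Arrays.stream(samples).max().getAsLong());
    }
}
```

Note how a single 2000 ms outlier inflates the average while leaving the 90% line at 430 ms, which is why the deck reports percentile lines rather than averages alone. Other tools may interpolate between samples, so percentile values can differ slightly across tools.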
10. 1.3 FACTORS ON PERFORMANCE: SOFTWARE
Software design, architecture and configuration have a great impact on performance.
PROGRAMMING:
- Memory leaks: 80% of all performance problems in Java
- Synchronization errors: deadlocks, race conditions
- Arithmetic errors: buffer overflows, arithmetic exceptions
- Redundant operations: dead code, redundant assignments
- Other
CONFIGURATION:
- JVM config: -Xmx2G; -XX:NewSize=1.8G
- Thread pool: max-threadpoolsize="5"
- EJB container: max-cachesize="512"; cache-timeout=3600
- JDBC: max-connections="5"; jdbc-connection-timeout="5"
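Of the programming errors listed above, race conditions are the easiest to demonstrate. A hedged sketch (the scenario is illustrative, not from the deck) contrasting a plain shared counter, where concurrent read-modify-write cycles can lose updates, with an AtomicLong:

```java
import java.util.concurrent.atomic.AtomicLong;

// Demonstrates the "race condition" programming error from this slide:
// several threads doing read-modify-write on a shared long may lose
// updates, while AtomicLong performs each increment atomically.
public class RaceDemo {
    static long plainCounter = 0;                     // unsynchronized: racy
    static final AtomicLong safeCounter = new AtomicLong();

    public static void run(int threads, int increments) throws InterruptedException {
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < increments; j++) {
                    plainCounter++;                   // lost updates possible
                    safeCounter.incrementAndGet();    // always correct
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(8, 100_000);
        // safeCounter is always 800000; plainCounter is usually lower
        // under contention, and the loss varies from run to run.
        System.out.println("plain=" + plainCounter + " safe=" + safeCounter.get());
    }
}
```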
11. 1.3 FACTORS ON PERFORMANCE: HARDWARE
OS / file system:
- The default range for dynamic ports in Windows is 1024 to 5000
- Unix systems have a default maximum open files limit of 1024
Single 3-tier server: application server, web server and DB server sharing one machine.
Database server and hardware:
- Oracle Standard (fewer CPUs) vs. Oracle Enterprise (more CPUs): no partitioning, limited online operations and limited indexing in the Standard edition
- MSSQL vs. MySQL: different locking implementations, different index performance
Virtualization: shared resources (CPU, memory, network, disk).
13. 1.4 GENERATING LOAD: WHAT DO WE NEED?
- A load dispatcher driving several load agents
- Load agents generating load against the web & application server
- Monitors attached to the server under test
- A test results database
- A reporting server producing the reports
14. 1.4 GENERATING LOAD: IMPLEMENTATION MODEL
- A JMeter controller driving several JMeter load agents
- The agents load the web & application server
- Monitors attached via JMX and REST
- Results stored in a MySQL DB
- JasperServer generating Jasper reports from the DB
15. 1.4 GENERATING LOAD
LOAD AGENTS: virtual users; ramp-up time; pause times; number of transactions; number of repetitions; increasing rate per repetition; server and port
'users=200 ramptime=200 pausetime=2000 pausetimedev=500 transactions=100 repeats=1 loopsinrepeat=1 warmup=no' 'hostname=myserver port=8080 protocol=https'
MONITORS: object usage; EJB resources; JDBC resources; CPU time; heap monitoring
'trace_objects=yes monitor_server=yes'
REPORTING: process for DB import; process for Maven; process for other formats
'generatereport=yes'
DISPATCHER (/start_test.sh): generate load; collect response times; monitor hardware resources; monitor system resources; process results; import results into database; generate reports
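The dispatcher parameters above (users, ramptime, pausetime, repeats) map onto a simple load-generation loop. A simplified Java sketch of that idea; the class name and the Runnable "transaction" stand in for the deck's actual JMeter scripts and are my own naming:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal load-agent loop: ramps up virtual users, each repeating a unit
// of work with a pause (think time) between iterations, and records
// latency samples, mirroring users/ramptime/pausetime/repeats above.
public class MiniLoadAgent {
    public static List<Long> run(int users, long rampMs, long pauseMs,
                                 int repeats, Runnable transaction)
            throws InterruptedException {
        List<Long> latenciesMs = new CopyOnWriteArrayList<>();
        Thread[] vusers = new Thread[users];
        long rampStep = users > 1 ? rampMs / (users - 1) : 0;
        for (int u = 0; u < users; u++) {
            final long startDelay = u * rampStep;   // staggered start = ramp-up
            vusers[u] = new Thread(() -> {
                try {
                    Thread.sleep(startDelay);
                    for (int r = 0; r < repeats; r++) {
                        long t0 = System.nanoTime();
                        transaction.run();          // e.g. one HTTP request
                        latenciesMs.add((System.nanoTime() - t0) / 1_000_000);
                        Thread.sleep(pauseMs);      // think time
                    }
                } catch (InterruptedException ignored) { }
            });
            vusers[u].start();
        }
        for (Thread t : vusers) t.join();
        return latenciesMs;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Long> samples = run(5, 100, 10, 3, () -> { /* request stub */ });
        System.out.println("collected " + samples.size() + " samples");
    }
}
```

A real agent like JMeter adds scripting, correlation and distributed coordination on top, but the users/ramp/pause/repeat skeleton is the same.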
16. 1.4 GENERATING LOAD: IMPLEMENTATION MODEL
JMETER DISPATCHER folder layout:
- /includes: functions (configure_server, empty_logs, monitor_resources, generate_report, start_test, server_warmup)
- /scripts: script repo (PLACE ORDER, EXPORT, LOGIN, HOMEPAGE, REGISTRATION); scripts call functions to control the test workflow and start test plans by using functions
- /testplans: testplan repo (T_Place_Order, T_Export, T_Login, T_Homepage, T_Registration)
- /results: results are stored here; processed results are exported to a MySQL DB, and Jasper reports are generated from the DB
20. 07/25/12 Alexandru Ersenie - Load & Performance Testing for J2EE 20
Allows real-time monitoring of critical resources:
CPU Usage
Heap Usage
Thread Usage
Allows real-time intervention in the system:
Execute Garbage Collection
Generate Thread Dump
Analyze thread activity
Generate Heap Dump
Enables monitoring the system's limits
Allows a better understanding of how the system behaves under a given load scenario and user behavior, helping determine what the system can deliver
Allows quickly modifying either the test configuration or the software/hardware configuration, repeating the test, and comparing the results
Allows parallel analysis of the system and the load scenario, making it possible to identify root causes and behaviors quickly
JDBC Usage EJB Cache Usage
2.1 Monitoring Types
AJP Thread Usage
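The active-monitoring actions above can be sketched with the standard java.lang.management API — a minimal illustration of the idea, not the deck's actual tooling; for a remote server the same MXBeans are reachable over JMX:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class ActiveMonitor {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // Live heap usage (the "Heap Usage" gauge)
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d of %d bytes%n", heap.getUsed(), heap.getMax());

        // Live thread count (the "Thread Usage" gauge)
        int live = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("Live threads: " + live);

        // The "Execute Garbage Collection" action (a suggestion to the JVM, not a guarantee)
        memory.gc();
    }
}
```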
Object Usage
Reactive Monitoring
2.1 Monitoring Types
Throughput Response Times
Resource StatisticsGarbage Collection
Allows reactive analysis and detailed interpretation of the collected data
Data collectors can easily be extended using a modular, function-based approach:
● Monitor JDBC; Monitor EJB; Monitor Thread Usage; Monitor Heap; etc.
1. Enables monitoring the system's behavior and resources over a prolonged period of time
2. Easily extendable by adding collectors
3. Results can be stored for future comparison
4. Tool independent
5. Highly configurable granularity
6. Automated
ADVANTAGES
1. High volume of data and complexity in analyzing the results
2. Requires high proficiency with tools, monitors and collectors in order to extend the monitoring system
DISADVANTAGES
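A minimal sketch of the modular, function-based collector idea: each monitor is just a named function, so adding "Monitor JDBC" or "Monitor EJB" means registering one more probe. Class and metric names are illustrative, not from the deck:

```java
import java.lang.management.ManagementFactory;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class Collectors {
    // Each registered probe is one "collector"; the registry is trivially extendable
    private final Map<String, Supplier<Number>> monitors = new LinkedHashMap<>();

    public void register(String name, Supplier<Number> probe) {
        monitors.put(name, probe);
    }

    // One sample row; in the deck's model these rows would be imported into MySQL
    public Map<String, Number> sample() {
        Map<String, Number> snapshot = new LinkedHashMap<>();
        monitors.forEach((name, probe) -> snapshot.put(name, probe.get()));
        return snapshot;
    }

    public static void main(String[] args) {
        Collectors c = new Collectors();
        c.register("heap.used",
            () -> ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed());
        c.register("threads.live",
            () -> ManagementFactory.getThreadMXBean().getThreadCount());
        System.out.println(c.sample());
    }
}
```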
2.2 Reactive Monitoring
RESPONSE TIMES – What are the response times for specific transactions?
  Percentile lines: 50%, 60%, 70%, 80%, 90%; Min / Avg / Max response time
THROUGHPUT – What is the maximum throughput for specific transactions?
  Number of transactions; average transactions/second; max transactions/second
RESOURCE USAGE – What is the resource usage for a load scenario?
  CPU; HEAP; JDBC; EJB; WEB; other
GARBAGE COLLECTION – How efficient is the Garbage Collection for a load scenario?
  Number of young GCs; number of old GCs; duration of GC; % of time in GC
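The GC figures in the last row (young/old collection counts, duration, % of time in GC) can be collected with the standard GarbageCollectorMXBean; a minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        long totalGcMillis = 0;
        // One MXBean per collector, typically one young- and one old-generation collector
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            totalGcMillis += gc.getCollectionTime();
        }
        // Rough "% of time in GC" over the JVM's whole uptime
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("%% of uptime spent in GC: %.2f%n",
            100.0 * totalGcMillis / uptime);
    }
}
```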
RESPONSE TIMES
THROUGHPUT
2.2 Reactive Monitoring
RESPONSE TIMES DRILL
THROUGHPUT DRILL
2.2 Reactive Monitoring
RESOURCES
CPU HEAP
JDBC CONNECTIONS
2.2 Reactive Monitoring
GARBAGE COLLECTION
2.2 Reactive Monitoring
GARBAGE COLLECTION STATISTICS
2.2 Reactive Monitoring
OBJECT USAGE
2.2 Reactive Monitoring
See the behavior of objects in time
Drill down on the object to see detailed information like Instances and Bytes occupied
2.2 Reactive Monitoring
OBJECT USAGE - PRIMITIVES
See the behavior of objects in time
Drill down on the object to see detailed information like Instances and Bytes occupied
JMX
Java Management Extensions
Supplies tools for managing/monitoring:
● Applications
● System objects
● Devices
● Service-oriented networks
Resources are represented by
objects called MBeans
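A minimal sketch of reading an MBean from a remote JVM over JMX; the host/port in the service URL are placeholders, and the target must be started with remote JMX enabled (com.sun.management.jmxremote.*):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClient {
    public static void main(String[] args) throws Exception {
        // "myserver:9999" is an assumed endpoint for this example
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://myserver:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Standard platform MBean; custom server resources have their own names
            ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
            Object load = mbs.getAttribute(os, "SystemLoadAverage");
            System.out.println("System load average: " + load);
        }
    }
}
```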
REST MONITORING
Monitoring and management data
Exposed by the application server
(example: Glassfish)
SYSTEM
●Top
●Vmstat
●Iostat
●jmap
SERVER LOGS
Garbage Collection Logs
COLLECTORS
LOAD GENERATOR
●Transactions
●Response Times
●Response Codes
●Response Sizes
MY SQL
DB
2.2 Reactive Monitoring
[Demo video: a running test with live monitoring]
● Represents the first input in identifying throughput decrease
● Allows real-time monitoring of CPU usage
● Allows identifying hotspots (spikes) and relating them in time using a graphical timestamp representation
● Identifies Garbage Collections and their effects on CPU usage
● Allows creating "snapshots" for later reactive analysis based on CPU usage patterns
● Allows identifying outside contributors (database, JMS, I/O)
2.3 Monitoring Types – CPU Monitoring
Server resources properly configured: EJB Cache, EJB Pool, Thread Pool, JDBC Connection Pool
OPTIMAL BEHAVIOR
2.3 Monitoring Types – CPU Monitoring
Load configuration adapted for optimal usage of resources: requests are processed steadily, JDBC connections are used efficiently
Depending on the configured Garbage Collection strategy, on the Heap usage of your application, and on whether the Heap is configured properly for the business scenarios and memory footprint, Garbage Collection can have a huge impact on throughput
GARBAGE COLLECTION
● Parallel collections with only two CPUs
● Stop-the-world strategy instead of CMS
● Poor Heap configuration (too small, too big, wrong ratio of young to old generation)
● Manual Garbage Collections
2.3 Monitoring Types – CPU Monitoring
Long-running queries – occupying the JDBC resource for too long; requests are queued, translating into decreased throughput and increased response times
Oracle Redo Log failed checkpoints – the database pauses servicing requests in order to satisfy internal needs
Redo Log statistics: an average of 1 redo log file per minute means the redo configuration needs updating
DATABASE WAITS
2.3 Monitoring Types – CPU Monitoring
Number one cause of thread contention: wait times induced by synchronized methods (threads have to wait in a queue to acquire the lock)
Contention
THREAD CONTENTION
2.3 Monitoring Types – CPU Monitoring
Allows real-time monitoring of HEAP usage
Allows identifying memory leak trends
Correlate Garbage Collection with CPU Usage:
CPU usage
Time in GC
Overall system behavior on GC
Determine Garbage Collection rates and durations
2.4 Monitoring Types – HEAP Monitoring
HEAP MONITORING
JVM HEAP STRUCTURE
PERMANENT GENERATION
● Internal representations of the Java classes
● Objects describing classes and methods
● Information used for optimization by the JIT compilers
YOUNG GENERATION
● Initially, objects are allocated in the Young Generation
● When collected, surviving objects are moved between the survivor spaces S1 and S2, up to the tenuring threshold (default: 32 collections)
OLD GENERATION
● Objects that have survived the maximum allowed number of collections
● Dead objects waiting to be collected
2.4 Monitoring Types – HEAP Monitoring
MEMORY LEAK?
Used Memory increases as the number of
live objects surviving collections increases
(particularly under load)
An increasing trend is not necessarily a
memory leak. Objects can survive several
collections (max threshold), and can be
already dead, waiting for the old collector
In test systems wait for several full
garbage collections and build a trend line
in order to see if a memory leak is showing
Wait for a full garbage collection to see
if the memory decreases
Test systems can be provided with the
option of manually triggering a garbage
collection to see if objects are being
released after the test is over
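The trend-line idea above can be sketched as follows: sample the used heap right after each suggested full collection and watch the post-GC floor. A steadily rising floor across many samples is the leak signal; a flat or oscillating floor is normal object churn. Note that MemoryMXBean.gc() is only a suggestion to the JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.ArrayList;
import java.util.List;

public class LeakTrend {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        List<Long> floor = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            memory.gc(); // suggest a full collection
            floor.add(memory.getHeapMemoryUsage().getUsed()); // post-GC "floor"
            Thread.sleep(200); // in a real test: sample over hours, under load
        }
        // Compare first and last post-GC floors to build the trend
        long delta = floor.get(floor.size() - 1) - floor.get(0);
        System.out.println("Post-GC floor change: " + delta + " bytes");
    }
}
```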
2.4 Monitoring Types – HEAP Monitoring
A full Garbage Collection goes over all
generations, parsing the entire object
structure to identify removable objects
Usually the old generation is at least
twice the size of the young generation
Collection time for the old generation
significantly increased (object tree parsed)
Dead or marked for GC objects are
removed
Space is reclaimed
Garbage Collection strategy of critical
importance: Parallel or Concurrent GC
NO LEAK
2.4 Monitoring Types – HEAP Monitoring
Objects that cannot be collected are moved
to the old generation
Old Garbage Collections cannot remove
the objects; space cannot be reclaimed
Total Heap used increases constantly
Eventually the system performs only
Garbage Collection
Used Memory increases as the number of
live objects surviving collections increases
(particularly under load)
The system is busy managing itself,
instead of running your application
Both throughput and response times
affected
Server restart is required
MEMORY LEAK
The right time to trigger a HEAP DUMP
and see what objects are leaking
ACTION
2.4 Monitoring Types – HEAP Monitoring
2.5 Monitoring Types – Other Monitors
ACTIVE MONITORING
Caching shared data in a HashMap – user profiles, session information, file contents: every request wants to acquire and hold the lock on the hash map, which becomes a bottleneck
(Brian Goetz: "Threading lightly: Reducing contention")
Synchronized methods – while one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object
(Java Tutorials – Concurrency)
Contention
Threads can be monitored, and blocking
or long running threads can be identified
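The HashMap-cache bottleneck can be contrasted with a lock-free alternative; class and field names are illustrative:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionCache {
    // Bottleneck pattern (shown for contrast): every get/put contends
    // for the single map-wide monitor of the synchronized wrapper
    private final Map<String, Object> locked =
        Collections.synchronizedMap(new HashMap<>());

    // Fix: ConcurrentHashMap lets readers proceed without blocking each other
    private final Map<String, Object> shared = new ConcurrentHashMap<>();

    public void store(String sessionId, Object value) {
        shared.put(sessionId, value);
    }

    public Object lookup(String sessionId) {
        return shared.get(sessionId); // no monitor acquired on the read path
    }
}
```

Under load, the synchronized wrapper serializes every request behind one lock, which is exactly the contention pattern the monitors on this slide are meant to expose.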
THREAD CONTENTION
2.5 Monitoring Types – Other Monitors
Most often contention problems appear
only under load, due to concurrency effects
(more requests on the same
object/method)
The right time to trigger a thread dump
and see the root cause of contention
ACTION
BEHAVIOR
CPU usage drops dramatically, because the CPU is busy resolving locks instead of serving business requests
Several threads in “Monitor” state, at
the same time, indicate thread
contention
THREAD CONTENTION
2.5 Monitoring Types – Other Monitors
Allows quickly identifying
blocked/blocking threads
Lists all existing threads and their
state when the thread dump was
triggered
Threads are listed together with their stack traces, allowing identification of:
What resource is being locked
Who locks the resource?
What is the locking thread
doing?
THREAD DUMPS
2.5 Monitoring Types – Other Monitors
THREAD DUMPS
2.5 Monitoring Types – Other Monitors
kill -3 <PID>: writes the thread dump (all stack traces) to jvm.log
jvm.log can be imported into IBM Thread Analyzer
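A thread dump can also be taken programmatically with ThreadMXBean, answering the same questions — which resource is locked, who holds it, and what the locking thread is doing; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DumpThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true: include locked monitors and locked ownable synchronizers
        for (ThreadInfo t : mx.dumpAllThreads(true, true)) {
            System.out.printf("%s (%s)%n", t.getThreadName(), t.getThreadState());
            // For a BLOCKED thread, report the contended lock and its owner
            if (t.getLockName() != null) {
                System.out.printf("  waiting on %s held by %s%n",
                    t.getLockName(), t.getLockOwnerName());
            }
        }
    }
}
```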
Actively monitor the number of
JDBC resources in use, and other
statistics like:
● Connections created
● Connections removed
● Connections in pool
JDBC MONITORING
Actively monitor the number of EJB
Beans in cache and other statistics
like:
● Cache hits
● Passivation statistics
● Number of beans created
● Number of beans removed
EJB MONITORING
Actively monitor the number of
requests, and other statistics like:
● Error count
● Max request time
WEB MONITORING
2.5 Monitoring Types – Other Monitors
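These JDBC/EJB/web statistics are typically exposed as MBeans; a hedged sketch of querying them follows. The ObjectName pattern and the attribute name below are hypothetical — each application server (GlassFish, JBoss, WebSphere) publishes its pool statistics under its own names:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PoolMonitor {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical pattern: matches any domain with type=ConnectionPool
        ObjectName pattern = new ObjectName("*:type=ConnectionPool,*");
        for (ObjectName name : mbs.queryNames(pattern, null)) {
            // "numConnUsed" is an assumed attribute name for illustration
            System.out.println(name + " numConnUsed="
                + mbs.getAttribute(name, "numConnUsed"));
        }
    }
}
```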
Database call on each transaction – each transaction checks whether the user is valid by creating and using a new JDBC connection per check
2.5 Monitoring Types – Case Study
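The anti-pattern from the case study and its pooled fix can be sketched as follows; the JDBC URL, the query, and the DataSource wiring are illustrative (in a container the DataSource would come from JNDI):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class UserCheck {
    // Anti-pattern: a brand-new physical connection per validity check —
    // connection setup cost and pool bypass on every transaction
    static boolean isValidSlow(String user) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:mysql://db/app", "u", "p");
             PreparedStatement ps = c.prepareStatement(
                 "SELECT 1 FROM users WHERE name = ?")) {
            ps.setString(1, user);
            try (ResultSet rs = ps.executeQuery()) { return rs.next(); }
        }
    }

    // Fix: borrow from a pooled DataSource; the physical connection is reused
    static boolean isValid(DataSource pool, String user) throws Exception {
        try (Connection c = pool.getConnection();
             PreparedStatement ps = c.prepareStatement(
                 "SELECT 1 FROM users WHERE name = ?")) {
            ps.setString(1, user);
            try (ResultSet rs = ps.executeQuery()) { return rs.next(); }
        }
    }
}
```

Under load, the first variant is exactly what the JDBC monitors above would surface: connections created and removed at a high rate, with requests queued behind connection setup.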