This document discusses various performance testing methodologies. It begins by introducing performance testing as a subset of performance engineering aimed at building performance into a system's design. The document then describes different types of performance testing including load testing, stress testing, endurance/soak testing, spike testing, and configuration testing. It emphasizes that performance testing validates attributes like scalability and resource usage under various loads.
Performance Testing Methodologies
By Marius Brecher
1. Introduction

Performance testing is a subset of performance engineering; an emerging computer science practice which strives to build performance into the design and architecture of a system, often prior to the start of the actual coding effort.

Performance testing can serve different purposes. It can demonstrate that a system meets its performance criteria, it can compare performance between two systems, and it can identify which components of the system might cause bottlenecks or have an overall impact on performance. Performance testing can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage, whilst helping to tune the system to best handle real production load without performance impacts.

The most important thing to note about the different types of performance testing (SVP: Stress, Volume and Performance) is that they are not trying to find functional defects. SVP needs to be able to execute end-to-end business transactions using different volumes and scenarios to determine how well some aspects of the system perform under any given load.

It is critical to the cost of new projects that performance test efforts are included in the early stages of development and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation.

Monitoring is needed to ensure there is proper feedback validation and that the system meets the performance metrics specified in the non-functional requirements (NFRs). An appropriately defined monitoring process specifies the planning, design, installation, configuration and control of the monitoring subsystem.
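In practice, the feedback validation described above boils down to checking collected response times against the NFR limits. A minimal sketch follows; the sample timings, the limits and the nearest-rank percentile method are illustrative assumptions, not taken from this document:

```python
# Validate measured response times against NFR-specified limits.
# Sample data and thresholds below are illustrative only.

def percentile(samples, pct):
    """Nearest-rank percentile of the samples."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[k]

def meets_nfr(response_times_ms, p95_limit_ms, max_limit_ms):
    """True if both the 95th percentile and the worst case are within limits."""
    return (percentile(response_times_ms, 95) <= p95_limit_ms
            and max(response_times_ms) <= max_limit_ms)

times = [120, 135, 128, 300, 142, 150, 131, 125, 139, 460]
print(meets_nfr(times, p95_limit_ms=500, max_limit_ms=1000))  # True
```

A check like this can run automatically after every test execution, turning the NFR document into a pass/fail gate rather than a reference read after the fact.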
The benefits are as follows:

1. You can establish service level agreements at the 'use case' level.
2. You can turn monitoring on and off at periodic points, or to support problem resolution.
3. You can generate regular reports.
4. You can track trends over time, such as the impact of increasing user loads and growing data sets on use case level performance.
The trend analysis component of performance monitoring should not be undervalued. This functionality, when properly implemented and used, will enable the prediction of the application's performance degradation and system stress as transaction volumes and user concurrency gradually increase. Observing this behaviour in the early stages of development and testing will enable proper management budgeting, deployment of the resources required to keep the system running within the limits of business requirements, and overall cost reduction.
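Such trend analysis can be sketched as a simple least-squares extrapolation from past runs to a planned load level. The load levels, timings and the choice of a linear model below are illustrative assumptions:

```python
# Fit response time vs. concurrent users from past runs, then
# extrapolate to a future load level. Data is illustrative.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

users = [100, 200, 300, 400]      # load levels from past test runs
resp_ms = [210, 260, 310, 360]    # observed mean response times

slope, intercept = fit_line(users, resp_ms)
print(round(slope * 800 + intercept))  # predicted time at 800 users: 560
```

A real system rarely degrades linearly near saturation, so a projection like this is a budgeting aid, not a guarantee; re-fitting after each run keeps the prediction honest.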
2. Types of Performance Testing

2.1 Load testing

Load testing is the simplest form of performance testing. A load test is usually conducted to understand the behaviour of the system under a specific expected load. This load can be the expected concurrent number of users on the application, performing a specific number of transactions within the set duration. This test will provide the response times of all business critical transactions. If the database, application server, etc. are also monitored, this simple test can also determine any bottlenecks in the application software.
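As a minimal illustration of driving a specific concurrent load and collecting response times, the following uses a thread per simulated user; the stand-in transaction and the user counts are assumptions, not the document's tooling:

```python
import threading
import time

def transaction():
    """Stand-in for one business transaction; replace with a real request."""
    time.sleep(0.01)  # simulated service time

def run_load_test(concurrent_users, transactions_per_user):
    """Drive the transaction with N concurrent users and collect timings."""
    timings = []
    lock = threading.Lock()

    def user():
        for _ in range(transactions_per_user):
            start = time.perf_counter()
            transaction()
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

timings = run_load_test(concurrent_users=5, transactions_per_user=4)
print(len(timings))  # 20 response-time samples
```

Dedicated tools (LoadRunner, JMeter and the like) add scenario scripting, pacing and reporting on top of this same basic loop.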
2.2 Stress testing

Stress testing is normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system's robustness in terms of extreme load, and helps application administrators make sure the system will perform sufficiently if the current load goes well above the expected maximum.
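The search for the upper capacity limit can be sketched as a stepped ramp-up that stops at the first failing load level; the step sizes and the stand-in pass/fail predicate are illustrative assumptions:

```python
def find_capacity_limit(system_ok, start=100, step=100, ceiling=2000):
    """Increase load until the system stops meeting its criteria;
    return the last load level that still passed."""
    last_good = 0
    load = start
    while load <= ceiling:
        if not system_ok(load):
            break
        last_good = load
        load += step
    return last_good

# Illustrative stand-in: the system copes with up to 750 concurrent users.
print(find_capacity_limit(lambda load: load <= 750))  # 700
```

In a real stress test, `system_ok` would run a load test at that level and check error rates and response times against agreed criteria.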
2.3 Endurance testing (soak testing)

Endurance testing is usually done to determine whether the system can sustain the continuous expected load. During endurance tests, memory utilization is monitored to detect potential leaks. What is also important, but often overlooked, is performance degradation: this test ensures that the throughput and/or response times after a long period of sustained activity are as good as, if not better than, at the beginning of the test. It essentially involves applying a significant load to a system for an extended period of time. The goal is to discover how the system behaves under sustained use.
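The often-overlooked degradation check can be sketched as a comparison of response times late in the run against those at the start; the sample values are illustrative:

```python
def degradation_ratio(timings):
    """Mean response time in the final third of a soak run divided by
    the mean in the first third; a ratio above 1 indicates slowdown."""
    third = len(timings) // 3
    first = sum(timings[:third]) / third
    last = sum(timings[-third:]) / third
    return last / first

# A soak run whose response times creep upward, e.g. from a slow leak.
soak = [100, 102, 101, 105, 110, 108, 130, 135, 140]
print(degradation_ratio(soak))  # ~1.34, degradation worth investigating
```
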
2.4 Spike testing

Spike testing is achieved through suddenly and significantly increasing the number of users, or the load generated by users, and observing the consequent behaviour of the system. The goal is to determine whether the system is equipped to handle dramatic changes in load.
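A spike test's load profile can be sketched as a step function of user counts over time; all numbers here are illustrative:

```python
def spike_profile(baseline, spike_users, total_steps, spike_at, spike_len):
    """User count per time step: steady baseline with a sudden spike."""
    return [spike_users if spike_at <= s < spike_at + spike_len else baseline
            for s in range(total_steps)]

print(spike_profile(50, 500, 10, 4, 2))
# [50, 50, 50, 50, 500, 500, 50, 50, 50, 50]
```

Feeding a profile like this to the load driver, one level per interval, produces the sudden ten-fold jump and recovery that the test is meant to observe.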
2.5 Configuration testing

Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system's components on performance and behaviour. A common example would be experimenting with different methods of load balancing.
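A configuration test ultimately compares the same workload's results across configurations; a minimal sketch, where the configuration names and timings are hypothetical:

```python
def compare_configs(results):
    """Given {config name: response times}, return the configuration
    with the lowest mean response time, plus all the means."""
    means = {name: sum(t) / len(t) for name, t in results.items()}
    return min(means, key=means.get), means

best, means = compare_configs({
    "round_robin": [120, 130, 125],
    "least_conn":  [110, 115, 112],
})
print(best)  # least_conn
```

The key discipline is holding the workload constant between runs so that any difference in the means is attributable to the configuration change alone.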
2.6 Isolation testing

Isolation testing is not unique to performance testing, but is a term used to describe repeating a test execution that resulted in a system problem. It is often used to isolate and confirm the fault domain.
3. Prerequisites

To achieve successful and accurate performance testing results, the following key steps should be followed:
3.1 Understand your testing environment

It is important to understand your test environment's capacity, including its limitations and stability, in comparison to the production environment. Ideally, performance testing will be conducted on a pre-production environment, or an environment which is a mirror of production, but that won't be the reality in most cases. Generally it is the performance engineer's responsibility to maintain communication with administrators and architects, and to understand the exact differences between the two environments in terms of hardware, database capacity and general configuration. Equally it will be their responsibility to set up or match the production environment as closely as possible. All business and technical stakeholders should be aware of the differences between the two environments, and understand the potential need to scale down the performance test and extrapolate results.
3.2 Understand your system configuration

Knowing the system and application configuration is also a very important part of performance testing planning, and needs to be understood by the performance test engineers. The different system components and services will need to be identified for two key reasons:

a) Identifying the system's different components and services used by the application allows the performance engineer to plan test scenarios and understand how performance testing can be incorporated in the early stages of development. This enables testing of the different components or services separately, whilst the rest of the application's functionality is still under development.

b) Understanding the different functionalities of the various components will make results analysis much easier. The reason for failed transactions will be identified more quickly because it can be pinpointed to the relevant component, and won't require end-to-end analysis.
3.3 Know your business requirements

Business requirements need to be collated, discussed and agreed upon by all stakeholders. Equally, the following information should be a vital part of designing the complete test plan and execution:

a) Number of concurrent users.
b) The exact production transaction mix.
c) Expected user behaviour.
d) Production volumes.
e) Expected business test cases.
f) Expected SLAs.
g) Required test inputs and outputs.
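The inputs listed above can be captured in a small test plan record so that, for instance, the transaction mix can be validated up front. All field names and figures below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PerformanceTestPlan:
    """Illustrative record of the business inputs a test plan needs."""
    concurrent_users: int
    transaction_mix: dict    # transaction name -> share of total volume
    expected_sla_ms: dict    # transaction name -> response time SLA (ms)
    peak_hourly_volume: int

plan = PerformanceTestPlan(
    concurrent_users=200,
    transaction_mix={"login": 0.3, "search": 0.5, "checkout": 0.2},
    expected_sla_ms={"login": 500, "search": 800, "checkout": 1500},
    peak_hourly_volume=50_000,
)
# The mix should account for 100% of the expected volume.
assert abs(sum(plan.transaction_mix.values()) - 1.0) < 1e-9
```

Writing the agreed requirements down in a machine-checkable form makes it obvious when a stakeholder discussion has left a gap, such as a transaction with no SLA.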
Performance testing will focus around peak production volumes, as well as volume growth predictions of up to five years. Business transaction selection will aim for the top 10-15 most used transactions in production. Some rarely used transactions with suspected performance impact will also be included, in order to identify any performance degradation they might introduce.
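The five-year growth prediction is a straightforward compound projection. Assuming, purely for illustration, a current peak volume of 10,000 transactions and 10% annual growth:

```python
def projected_volume(current, annual_growth, years=5):
    """Compound growth projection for future test volumes."""
    return current * (1 + annual_growth) ** years

print(round(projected_volume(10_000, 0.10)))  # 16105
```

Test scenarios sized for the projected figure, rather than today's peak, keep the results valid for the system's expected lifetime.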
The performance engineer will research non-functional requirements, usually by initiating communications with the following resources:

a) Business representatives should be able to provide information on user behaviour, business test cases and the expected business SLAs. The business representative, who in most cases will approach a technical production support resource to extract the necessary information from live production data, can also provide the more 'production specific' information.

b) Production support resources can be approached directly to help in identifying performance testing requirements. They can extract the necessary information from production as per the performance engineer's specifications.

c) Capacity planning resources will be a good point of reference for a more accurate production transaction mix, environment resource utilisation, and for identifying any possible problematic transactions.
3.4 Know your weapon of choice

Testing tools hold a critical role in the whole performance testing design, and should always be selected after thorough analysis of the project's (and overall business') needs. There are many free tools on the market, but each has its limitations. The selected tool will have to satisfy the customer's needs on numerous levels:

a) Usability for and beyond the current project.
b) Ability to perform the project's specific tasks.
c) Dynamic, user friendly and easy to use.
d) Access to a large pool of professional resources.
e) Ongoing support.
f) Purchase and continuous maintenance costs.
When multiple performance tools already exist, tool consolidation should be recommended. This is because multiple tools will require multiple resources possessing different skills, as well as higher licensing and maintenance costs.
3.5 Know your monitoring tools

Application Performance Management (APM) tools such as AppDynamics, DynaTrace, Wily Introscope or Foglight (just to mention a few) are the most common solution for application monitoring across multiple environments and multiple technologies (for example Java, .NET, VMs and databases). APM tools are used to improve user satisfaction. They perform deep diagnostics for quick resolutions, making sure requirements are met and environments are performing as expected. Monitoring is essential in helping performance engineers identify the root cause of every performance-impacting incident. APM tools can manage user experience from multiple perspectives, enabling detection of performance SLA breaches and isolation of response time issues. By capturing real user transactions, the engineer can understand how the application's design and configuration are affecting overall performance. Monitoring will play a very important role in helping with the following:
Application Server Monitoring and Diagnostics resolves problems before they impact users and violate SLAs. It does this by simplifying management of the application server, the user transactions running through it, and the underlying infrastructure.

Database Monitoring and Management tools provide simplified, consistent performance monitoring and management across different database platforms, helping you reduce administrative costs and improve service levels.

Middleware Monitoring supports application performance by monitoring the health of your middleware environment and resolving incidents before they become an issue for your business.

Network System Management minimizes the wasted time (and chaos) resulting from sudden network problems, with complete visibility of your network resources, including hardware, operating systems, virtualization, databases, middleware, applications and services.

The ability to drill down on a problematic transaction and view the associated code will save developers and test engineers considerable effort during performance analysis.
3.6 Understand your test data

The test data used in performance testing can have a significant impact on the validity of the test, as well as on the overall test results. As such, the creation of valid data and its ongoing maintenance should be given high importance and planned very carefully. The test data should be thoroughly understood, because it will be expected to match different conditions and rules in order to successfully trigger the many different components and services used by the application.

Test data should not be reused over long periods of time or across multiple projects, as this might cause 'data exhaustion', which will affect test results and might impact the overall validity of the performance test. If and when possible, it would be beneficial to refresh the test database before each test, or at the very least to clean up activities or transactions created by test users during each test. Having the system and database in identical states before each test will ensure results are comparable, and that any noticeable performance impacts are related to application changes or environment settings; not to test data, scripts, or test tool related issues.
3.6.1 Recommended procedures for test creation and database management

The following recommended procedures will ensure each executed test is identical in terms of the test data used, database size and data validity, thereby minimizing data exhaustion.
3.6.1.1 Database refresh

• Refresh the database with a production copy every 6–12 months (depending on production database changes, growth and usage of the test environment).
• Create test data after completion of the database refresh or data extraction (if possible).
• Run a regression test, ensuring the test data created is valid and sufficient for the expected test efforts.
• Take a complete test database backup. This backup will be used for continuous database refreshes as needed:
  o Before each test executed.
  o Once every agreed period (once a week, once a month), depending on availability and environment usage.
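The refresh-before-each-test step above can be automated so that every run starts from the post-refresh backup. A minimal sketch, assuming a file-based database (SQLite purely for illustration) and a pristine backup file; the paths, table name and function names are hypothetical:

```python
import shutil
import sqlite3

def refresh_test_db(baseline_path: str, test_db_path: str) -> None:
    """Restore the test database to its known-good baseline state.

    Overwrites the working database with the backup taken right after
    the last full refresh, so every test starts from an identical state.
    """
    shutil.copyfile(baseline_path, test_db_path)

def row_count(db_path: str, table: str) -> int:
    """Simple sanity check on database state after a refresh."""
    conn = sqlite3.connect(db_path)
    try:
        # Table name interpolation is acceptable here only because it is
        # a trusted constant in a test utility, never user input.
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()
```

In practice the same pattern applies with vendor tooling (for example a database restore from the agreed backup) rather than a file copy; the point is that the restore is scripted, not manual.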
3.6.1.2 Test data clean-up

If the above is not possible due to issues with database size, resource availability or environment complexity, the following steps could be considered:

• Create large amounts of data to cover the duration of the test executions.
• Revert user transactions created during the test, either by frontend or backend scripts (if possible).
• Replace test data before each test execution.
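The revert step above can be scripted against the backend. A minimal sketch, assuming test users are identifiable by a known naming prefix; the `transactions` table, `user_id` column and prefix are illustrative assumptions, with SQLite standing in for the real database:

```python
import sqlite3

# Assumed naming convention: all load-test accounts share this prefix.
TEST_USER_PREFIX = "perftest_"

def clean_up_test_transactions(conn: sqlite3.Connection) -> int:
    """Delete transactions created by test users; returns the number removed."""
    cur = conn.execute(
        "DELETE FROM transactions WHERE user_id LIKE ?",
        (TEST_USER_PREFIX + "%",),
    )
    conn.commit()
    return cur.rowcount
```

Returning the count of deleted rows lets the clean-up script log how much test residue each run left behind, which is itself useful evidence of data exhaustion building up.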
3.6.2 Recommended procedures for data creation

Use SQL scripts to extract data directly from the newly refreshed database. The test engineer will identify the required test data format in order to help with the data extraction. This data creation option is the most efficient way of ensuring valid data is used without generating additional large amounts of test data, which would increase the database size and in some cases might slightly impact overall performance.

Use SQL scripts to create test data directly in the database. The test engineer will identify the exact test data requirements in order to help with the development of the SQL scripts. This data creation option is the fastest and most reliable way of creating large amounts of test data without stressing the application or any other services.

Create utility scripts that drive the application's frontend. This is the most common method of data creation. Although it can be the most time- and resource-consuming option to run, it is often the most practical for the test engineer, because it does not depend on the availability of other resources for script creation and execution: as long as the environment is available, the test engineer can develop and run the utility scripts to create all necessary test data.
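The second option, creating test data directly with SQL, can be sketched as follows. SQLite and the `customers` schema are illustrative assumptions; the point is that a bulk insert touches only the database, not the application or any other services:

```python
import sqlite3

def create_test_customers(conn: sqlite3.Connection, count: int) -> None:
    """Bulk-create customer records directly in the database via SQL."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    # A generator keeps memory flat even for very large volumes;
    # executemany batches the inserts in a single transaction.
    rows = ((f"testuser{i}", f"testuser{i}@example.test") for i in range(count))
    conn.executemany("INSERT INTO customers (name, email) VALUES (?, ?)", rows)
    conn.commit()
```

Because the inserts run in one transaction against the database alone, generating tens of thousands of records takes seconds, whereas driving the same volume through the frontend could take hours.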
4. Test Scripts and Scenarios Development

Test scripts should be developed only after all the above points are established and understood. The test engineer will have to understand the exact business requirements and transaction flows in order to record and develop valid scripts.

Ideally, scripting will take place in the performance testing environment, with the latest stable code available, to avoid rescripting and multiple script modifications. In reality, it will often be more efficient for unofficial scripting to be conducted in lower environments such as UAT, or even SYS if the code version is stable enough. This gives the test engineer time to build up knowledge of the application's behaviour and reduces pressure on the official performance testing schedule. Taking the initiative and engaging in the process as early as possible will benefit all parties involved.

The scripts will have to follow these basic standards and processes:
a) Include script descriptions and relevant steps/actions.
b) Include descriptions of the data types used.
c) Follow the standardized transaction format and naming convention.
d) Reuse actions/functions where possible.
e) Keep scripts simple. This ensures any issues are due to application problems, not script complexity.
f) Enter comments and logic descriptions for every request, so that other team members can understand your scripts.
Test scenarios should be designed to follow the business requirements, production volumes and the expected transaction mix established in the planning stage. When designing test scenarios, it is important to remember that we are trying to achieve a successful run in order to identify performance issues. As such, there is no need to overload the environment at the beginning of the test. Ideally, we would ramp up user concurrency and volumes over the first 15–30 minutes (depending on volumes), maintain the full load for 45–90 minutes, and then start the ramp-down. Please note this is a sample; different tests will require different designs.
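The sample profile above (ramp up, hold full load, then ramp down) can be expressed as a simple function of elapsed time. This is a sketch of one possible shape; the durations and peak user count are parameters, not fixed values:

```python
def concurrent_users(elapsed_min: float, peak_users: int,
                     ramp_up: float = 30, steady: float = 90,
                     ramp_down: float = 15) -> int:
    """Target number of concurrent virtual users at a given elapsed time (minutes)."""
    if elapsed_min < 0:
        return 0
    if elapsed_min < ramp_up:                   # linear ramp-up to full load
        return round(peak_users * elapsed_min / ramp_up)
    if elapsed_min < ramp_up + steady:          # hold full load
        return peak_users
    end = ramp_up + steady + ramp_down
    if elapsed_min < end:                       # linear ramp-down to zero
        return round(peak_users * (end - elapsed_min) / ramp_down)
    return 0                                    # test complete
```

Most load tools let you configure exactly this shape declaratively; a function like this is useful for documenting the intended profile and for checking a tool's recorded concurrency against the plan.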
5. Test Communication Procedures

In most cases, environment resources are shared amongst different test environments, different test teams and even production. For this reason, maintaining continuous communication with the different stakeholders is very important and highly recommended.

If possible, a team track should be created and sent for approval before each test. The team track will include all the stakeholders (database administrators, middleware and production support, UAT testers, project managers, developers and business representatives) who might be impacted by the test execution.

A test notification email will need to be sent before each test to all stakeholders, informing them of the following:

a) Test execution times.
b) Test duration.
c) Test purpose and objectives.
d) The applications and environments that will be affected.

The notification email should be sent in reasonable time before each test, allowing the recipients to raise any concerns or suggestions in time for the test.
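A notification covering points a) to d) can be generated from a simple template so that no field is forgotten. A minimal sketch using Python's standard email library; the field values shown are placeholders, and sending the message is left to whatever mail transport the organisation uses:

```python
from email.message import EmailMessage

def build_test_notification(execution_time: str, duration: str,
                            purpose: str, affected: str) -> EmailMessage:
    """Compose the pre-test stakeholder notification email."""
    msg = EmailMessage()
    msg["Subject"] = f"Performance test scheduled: {execution_time}"
    msg.set_content(
        "A performance test has been scheduled.\n\n"
        f"Execution time: {execution_time}\n"
        f"Duration: {duration}\n"
        f"Purpose and objectives: {purpose}\n"
        f"Affected applications/environments: {affected}\n\n"
        "Please raise any concerns or suggestions before the test starts."
    )
    return msg
```

Templating the notification keeps the four required points consistent from test to test, so recipients learn exactly where to look for the details that concern them.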
6. Test Reports and Documentation

When establishing test reports, it is critical to identify who the recipients are. It is important to send your test results to both technical and business representatives. The results should be sent as soon as possible after each test, allowing the various stakeholders to understand and comment on them.

Most tools now have the capability to create some sort of report, which should include test transaction response times, volumes and concurrency, and overall resource utilisation. Often, however, these reports include too much technical information, which will not be relevant to all stakeholders, in particular the business stakeholders. The recommended solution to this overload is to separate the level of detail for the two groups.
The communication for non-technical recipients could be in email format and would include the following, to help them understand the high-level outcomes of the test:

a) Test objectives, duration and execution time.
b) High-level findings (was the test successful or not?).
c) Performance degradation (i.e. slow transaction response times).
d) Observations.
e) Conclusions.
f) The next planned steps.
Technical stakeholders will need additional information on top of this communication, which could be provided in an Excel spreadsheet attached to the test results email. Consider the following as part of the technical report:

a) Display only the business transactions in focus.
b) Display the minimum, average, maximum and 90th percentile for each transaction.
c) Compare results for the same transactions between the established baseline and the current test.
d) The baseline and test comparison will need to highlight differences in events (past transactions), average response times and the 90th percentile.
e) Include monitoring graphs to back up your findings.
f) Calculate overall throughput.
g) Display the number of concurrent users used.
h) Display any error messages received during the test.
i) Display any available utilization graphs.

This type of reporting will take place after each individual test.
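The per-transaction statistics in b) and the overall throughput in f) can be computed directly from raw response-time samples. A minimal sketch using the nearest-rank definition of the 90th percentile (note that different tools use slightly different percentile definitions, so state which one your report uses):

```python
def transaction_stats(response_times: list, test_duration_s: float) -> dict:
    """Min, average, max and 90th percentile of response times, plus throughput.

    response_times: one sample per completed transaction, in seconds.
    test_duration_s: total test duration in seconds, for throughput.
    """
    ordered = sorted(response_times)
    n = len(ordered)
    # Nearest-rank 90th percentile: the smallest sample that is >= 90%
    # of all samples. ceil(0.9 * n) computed with integer arithmetic.
    p90_rank = -(-n * 9 // 10)
    return {
        "min": ordered[0],
        "avg": sum(ordered) / n,
        "max": ordered[-1],
        "p90": ordered[max(0, p90_rank - 1)],
        "throughput_tps": n / test_duration_s,   # transactions per second
    }
```

Running the same calculation over the baseline's samples and the current test's samples gives exactly the side-by-side comparison that items c) and d) call for.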
The final report is the "Test Finding" report, which is created at the end of the performance test exercise. The Test Finding report should summarise all the test efforts, findings, conclusions and suggestions, and should include the following information:

a) Summaries of the overall test efforts (test scripts used, execution start and end times, types of test executed, applications tested).
b) Issues identified and fixes applied during the test.
c) The performance status at the end of testing.
d) Expected performance impacts on production.
e) Overall test conclusions.
f) Final comments and improvement suggestions, if applicable.
7. Conclusion

This paper highlights the best-practice processes used and refined by independent diagnostic experts Ecetera, and has been put together in response to what we have noted are often-overlooked activities in the Performance Monitoring field. Equally, Performance Testing itself forms one part of a much wider Performance Engineering framework and is rarely considered in isolation.

To receive a Performance Engineering consultation for your organisation, contact Ecetera on (02) 8278 7068 or email enquiries@ecetera.com.au.