Performance measurements according to ITU-T Q.3930
and considerations in benchmarking

© Copyright 2019, SoftWell Performance AB
Why are we here today?

We usually end our presentations with this statement by Lord Kelvin:

“If you can’t measure it,
you can’t improve it”
- Lord Kelvin

“If you can’t measure it,
you can’t compare it”
- Michael Mild
Why is Benchmarking more and more requested?
Why is benchmarking more and more requested?
[Photo: Benchmarking at Telecoms (today Singtel), Singapore, 1984.]
Benchmarking of IT systems has come far during the 35 years since this happened!
Why is benchmarking more and more requested?
In the connected world, excellent performance is a necessity!
Performance is the most visible asset of a product or service, since it is experienced in every interaction. Products are judged by their performance in the connected world.
Products with the best performance are always challenged by competitors, i.e. a product cannot rest on old performance merits.
This is where benchmarking begins!
Background to ITU-T Recommendation Q.3930
Background to ITU-T recommendation Q.3930
It all started with the need for a generally accepted terminology!
If you ask ten people what performance testing is about, you are likely to get at least five different answers.
In many cases, terminology in performance testing reflects how test tools are used rather than what system characteristics are actually measured.
An example: is it load testing, or is it capacity measurement / responsiveness measurement?
The purpose of ITU-T recommendation Q.3930 is to create a standard for terminology and concepts in performance testing, so that we all speak the same language.
The terminology of Q.3930 is used in this presentation.
Structure of ITU-T Recommendation Q.3930
The structure of ITU-T recommendation Q.3930
The document covers most aspects of performance measurements.
The document contains a set of terminology and descriptions of concepts in performance testing, establishing a common base for discussions about performance and performance tests, in the following sections:
• Section 6: Categories of performance characteristics
• Section 7: Measured objects and service characteristics
• Section 8: Requirements on metrics and collected data
• Section 9: Abstract performance metrics
• Section 10: Performance data processing
• Section 11: General performance test concepts
• Section 12: Performance test environment
• Section 13: Performance test specifications
• Section 14: Workload definitions
Categories of Performance Characteristics
Categories of Performance Characteristics
The standard Q.3930 introduces three categories of performance metrics.
A system’s performance can be described by an endless number of metrics. To enable more focused performance characteristics, we introduced three categories:
1. Powerfulness characteristics
2. Reliability characteristics
3. Efficiency characteristics
Categories of Performance Characteristics
Powerfulness characteristics
The Powerfulness category contains metrics describing the delivery limits of a measured system, i.e. metrics for how many services can be handled and/or how fast produced services can be delivered.
Powerfulness characteristics have three groups of metrics:
1. Capacity metrics
2. Responsiveness metrics
3. Scalability metrics
Note! Metrics in the powerfulness category are always relative to the platform used!
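Below is a minimal sketch, in Python, of how a capacity metric and a responsiveness metric could be computed from raw samples; the data, the function names, and the choice of the 95th percentile are illustrative assumptions, not definitions taken from Q.3930.

```python
# A minimal sketch (not from Q.3930): deriving one capacity metric and one
# responsiveness metric from raw samples. Data and names are illustrative.
from statistics import quantiles

def capacity_tps(completion_times_s: list[float]) -> float:
    """Capacity: completed transactions per second over the run."""
    span = max(completion_times_s) - min(completion_times_s)
    return len(completion_times_s) / span

def responsiveness_p95_ms(response_times_ms: list[float]) -> float:
    """Responsiveness: 95th percentile response time in milliseconds."""
    return quantiles(response_times_ms, n=100)[94]

completions = [0.0, 0.4, 0.9, 1.3, 1.8, 2.2, 2.9, 3.5]       # seconds
latencies = [12.1, 15.4, 11.8, 40.2, 13.7, 14.9, 90.3, 12.5]  # milliseconds
print(f"capacity: {capacity_tps(completions):.2f} tps")
print(f"p95 response time: {responsiveness_p95_ms(latencies):.1f} ms")
```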
Categories of Performance Characteristics
Reliability characteristics
The Reliability category contains metrics describing the conditional limits of a measured system, i.e. metrics for maintained service levels.
Reliability characteristics have five groups of metrics:
1. Stability metrics
2. Availability metrics
3. Robustness metrics
4. Recovery metrics
5. Correctness metrics
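As a hedged illustration, two of these groups could be computed from a test log as follows; the event format, the status codes, and the uptime figures are invented for the example.

```python
# A minimal sketch (illustrative figures): availability and correctness,
# two of the five reliability metric groups, computed from a test log.
def availability(up_seconds: float, total_seconds: float) -> float:
    """Availability: the fraction of the run the SUT delivered service."""
    return up_seconds / total_seconds

def correctness(responses: list[str], expected: str = "200") -> float:
    """Correctness: the fraction of responses with the expected result."""
    return sum(1 for r in responses if r == expected) / len(responses)

print(availability(up_seconds=3571.0, total_seconds=3600.0))  # ~0.992
print(correctness(["200", "200", "503", "200"]))              # 0.75
```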
Categories of Performance Characteristics
Efficiency characteristics
The Efficiency category contains metrics describing the productivity limits of a measured system, i.e. metrics for the effort required to produce services.
Efficiency characteristics have seven groups of metrics:
1. Service resource usage
2. Service resource linearity
3. Service resource scalability
4. Service resource bottlenecks
5. Platform resource utilization
6. Platform resource distribution
7. Platform resource scalability
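The first two groups can be illustrated with a small sketch; all numbers below are invented, and a real linearity check would use regression rather than a simple trend test.

```python
# A minimal sketch with invented numbers: service resource usage and a
# crude service resource linearity check across four load levels.
loads_tps = [100, 200, 400, 800]          # offered load per run
cpu_core_seconds = [50, 101, 208, 470]    # CPU cost of each 100 s run

# Service resource usage: CPU cost per produced service at each load level.
per_service = [cpu / (tps * 100)
               for tps, cpu in zip(loads_tps, cpu_core_seconds)]
print(per_service)   # [0.005, 0.00505, 0.0052, 0.005875]

# Service resource linearity: a rising cost per service as load grows
# signals non-linear resource usage (a real check would use regression).
print(per_service == sorted(per_service))  # True here: cost keeps rising
```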
Requirements on Metrics and Collected Data
Requirements on Metrics and Collected Data
General objectives for performance metrics
Good performance metrics must comply with a set of requirements:
1. Understandable, i.e. are the metrics easy to interpret?
2. Reliable, i.e. are produced values always in accordance with real values?
3. Accurate, i.e. are produced values precise, or very close to real values?
4. Repeatable, i.e. are produced measurement values possible to repeat?
5. Linear, i.e. are produced figures proportional to changes in the tested object?
6. Consistent, i.e. are metric units and their definitions the same on all tested platforms?
7. Computable, i.e. are metric units and definitions, measurement accuracy, and measurement methods precise enough to enable computations?
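For instance, the "repeatable" objective can be checked mechanically by re-running the same measurement and bounding the spread; the coefficient-of-variation approach and the 5 % threshold below are assumptions, not Q.3930 requirements.

```python
# A minimal sketch: checking the "repeatable" objective by bounding the
# spread across identical runs. The 5 % threshold is an assumption.
from statistics import mean, stdev

def is_repeatable(run_results: list[float], max_cv: float = 0.05) -> bool:
    """Repeatable if the coefficient of variation stays below max_cv."""
    return stdev(run_results) / mean(run_results) <= max_cv

print(is_repeatable([1480.0, 1502.0, 1495.0, 1511.0]))  # True: tight spread
print(is_repeatable([1480.0, 1950.0, 1210.0, 1720.0]))  # False: too noisy
```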
Requirements on Performance Data
What makes performance figures trusted and comparable?
Four groups of attributes describing collected performance data are required to make performance characteristics trusted and comparable:
1. Measurement conditions: how the data were captured.
2. Metric identifiers: what, where, and when.
3. Metric types: raw, normalized, transformed, or derived data.
4. Metric formats: value type, unit, and accuracy (+/- %).
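A minimal sketch of a record type that carries all four attribute groups alongside the value itself; the field names and layout are illustrative, not a format defined by Q.3930.

```python
# A minimal sketch: one record carrying all four attribute groups next to
# the value itself. Field names are illustrative, not a Q.3930 format.
from dataclasses import dataclass, field

@dataclass
class MeasurementRecord:
    metric: str             # identifier: what (e.g. "response_time")
    location: str           # identifier: where (e.g. "P-CSCF, Gm interface")
    captured_at: float      # identifier: when (epoch timestamp)
    value_type: str         # format: value type (e.g. "time")
    unit: str               # format: unit (e.g. "ms")
    accuracy_pct: float     # format: accuracy (+/- %)
    metric_type: str        # type: raw | normalized | transformed | derived
    conditions: dict = field(default_factory=dict)  # how it was captured
    value: float = 0.0

rec = MeasurementRecord("response_time", "P-CSCF, Gm", 1546300800.0,
                        "time", "ms", 1.0, "raw",
                        {"load_tps": 500, "cpu_util_pct": 62}, 14.3)
```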
Requirements on Performance Data
1. Measurement conditions
Measurement conditions describe what must apply when data are captured.
There are two kinds of captured performance data:
1. Data that is, or is part of, the requested performance characteristics.
2. Data that shows the actual status of the required conditions when the requested performance data were collected.
Recorded data about the actual conditions during the performance measurements are extremely important for the validity of the captured performance data.
If you can’t verify that presented performance data were collected under the requested conditions, they can’t be trusted and are consequently worthless!
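A hedged sketch of how condition data could be used in practice: samples taken outside the requested conditions are discarded before any further processing. The condition keys and the 10 % tolerance are assumptions.

```python
# A minimal sketch: discard samples whose recorded conditions deviate from
# the requested ones. The keys and the 10 % tolerance are assumptions.
def conditions_met(recorded: dict, requested: dict, tol: float = 0.10) -> bool:
    """True if every recorded condition is within tol of its requested value."""
    return all(abs(recorded[k] - want) <= tol * want
               for k, want in requested.items())

requested = {"load_tps": 500}
samples = [({"load_tps": 498}, 14.1),   # (recorded conditions, value)
           ({"load_tps": 310}, 9.8)]
valid = [value for cond, value in samples if conditions_met(cond, requested)]
print(valid)  # [14.1] - the sample taken at 310 tps is discarded
```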
Requirements on Performance Data
2. Metric identifiers
Metric identifiers provide basic information about collected performance data.
Collected performance data must have identifiers for:
1. What they represent.
2. Where they were captured (logical or physical locations).
3. When they were captured (during the performance measurement).
The need for metric identifiers applies both to requested performance data and to required condition measurement data.
Without metric identifiers, performance data don’t represent anything!
Requirements on Performance Data
3. Metric types
Metric types provide information about the kind of performance data.
Metric types are required to make performance figures comparable.
Performance data belong to one of four types:
1. Raw data, such as response time.
2. Normalized data, such as transactions per second.
3. Transformed data, such as memory usage in Mbyte transformed to memory usage as a percentage of total memory.
4. Derived data, performance characteristics resulting from a well-defined computation.
Without metric types, performance characteristics can’t be compared!
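The four types can be shown on one small data set; the numbers, and the simplified satisfaction ratio used as the derived example, are invented for illustration.

```python
# A minimal sketch of the four metric types on one invented data set.
raw_response_times_ms = [12.0, 14.0, 11.0, 13.0, 60.0]   # 1. raw data

run_seconds = 2.5
normalized_tps = len(raw_response_times_ms) / run_seconds  # 2. normalized

mem_used_mb, mem_total_mb = 3072, 8192
mem_used_pct = 100 * mem_used_mb / mem_total_mb            # 3. transformed

# 4. derived: a simplified satisfaction ratio from a defined computation
satisfied = sum(t <= 15.0 for t in raw_response_times_ms)
satisfaction_ratio = satisfied / len(raw_response_times_ms)
print(normalized_tps, mem_used_pct, satisfaction_ratio)    # 2.0 37.5 0.8
```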
Requirements on Performance Data
4. Metric formats
Metric formats provide information about value type, scale, and precision.
Metric formats are required to make performance figures comparable; think of miles per hour versus kilometres per hour.
Performance data have three format attributes:
1. Value type (time, counters, size, etc.)
2. Unit (for example, for time: hours, seconds, milliseconds)
3. Accuracy (the precision or quality of the data, e.g. +/- %)
Without metric formats, performance characteristics can’t be compared!
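A minimal sketch of why unit data matters: two figures become comparable only after normalization to a common unit. The conversion factors are standard; the two-vendor scenario is invented.

```python
# A minimal sketch of why unit data matters: figures become comparable
# only after normalization to a common unit.
TO_MS = {"h": 3_600_000.0, "s": 1_000.0, "ms": 1.0}

def to_milliseconds(value: float, unit: str) -> float:
    return value * TO_MS[unit]

a = to_milliseconds(0.014, "s")   # vendor A reports seconds  -> 14.0 ms
b = to_milliseconds(15.0, "ms")   # vendor B reports milliseconds
print(a < b)  # True, but only visible after unit normalization
```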
Measured Objects
Measured objects
IMS is a good example of a distributed service.
[Figure: the IMS reference architecture. The IMS signalling plane connects UE, P-CSCF, I-CSCF and S-CSCF with AAA services (HSS, SLF over Cx, Dx, Sh), application services (AS, OSA-SCS, IM-SSF over ISC, Ut) and control services (MGCF, BGCF, MRF-C, SGW) over interfaces such as Gm, Mw, Mg, Mi, Mj and Mr; the IMS media plane connects MGW and MRF-P over IP (Mn, Mp). Protocols: SIP, Diameter, XCAP, RTP.]
IMS has a number of defined services interconnected by a set of protocols.
Measured objects
What is actually measured?
From performance characteristics of single IMS components ...
[Figure: the IMS architecture diagram again, here used to point out a single component.]
Measured objects
What is actually measured?
From performance characteristics of single IMS components:
[Figure: a performance test tool emulates UEs on both sides of the measured component, a P-CSCF, speaking SIP over the Gm and Mw interfaces.]
Measured objects
What is actually measured?
To performance characteristics of IMS services ...
[Figure: the IMS architecture diagram again, here used to point out a complete signalling path.]
Measured objects
What is actually measured?
To performance characteristics of IMS services:
[Figure: the performance test tool emulates UEs at both ends; the measured service is a call setup across the chain P-CSCF, I-CSCF, S-CSCF, P-CSCF, speaking SIP over the Gm and Mw interfaces.]
Measured objects
Service characteristics
Provided services can be very different; here are two examples:
Service initiation characteristics: a service can be pulled by the UE (e.g. a Publish towards a publication server) or pushed to the UE (e.g. a Notify from the IMS).
Service duration characteristics: REGISTRATION and SUBSCRIPTION are long, NOTIFICATION is short, and a CALL has variable duration.
Measured objects
Service design
Provided services can have different designs; here are three examples:
[Figure: three SIP ladder diagrams between UE and P-CSCF.
- Single step service: SUBSCRIBE / 200.
- Multi step service: REGISTER / 401 (unauthorized), REGISTER / 200 (authorized), SUBSCRIBE / 200.
- Composite service: INVITE / 183, PRACK / 200, UPDATE / 200, then 180 and 200 / ACK with codec negotiation towards the MRFP, and finally BYE / 200.]
Measured objects
Service interfaces
Services can be provided through different interfaces; here are two examples:
[Figure: a test tool reaching the SUT directly via a communication protocol, and a test tool reaching the SUT via an API on top of a communication protocol.]
Performance Test Environment
Performance Test Environment
Test site and test beds
[Figure: a test site infrastructure of many test site servers, partitioned into three test beds, each pairing a test system with a SUT (Test System 1 / SUT 1, Test System 2 / SUT 2, Test System 3 / SUT 3).]
A test site is a collection of hardware installed for performance measurements.
Test beds are how different SUTs utilize the test site.
Performance Test Environment
Layers in a System Under Test (SUT)
[Figure: SUT layers. The tested components form the application; beneath it sit the supporting components (SC): middleware, operating system, and hardware.]
The application is the target of the performance measurements.
Performance Test Environment
A test bed with service requesting and service responding tools
[Figure: a service requesting tool in front of the SUT and a service responding tool behind it.]
A SUT can be surrounded by test tools playing different roles.
Performance Test Environment
Example of front-end border and back-end border
[Figure: the service requesting tool meets the SUT (an IMS P-CSCF) at the front-end border (SIP / Gm); the service responding tool meets it at the back-end border (SIP / Mw).]
A test tool has one or more borders to a System Under Test.
Performance Test Environment
Example of a SUT with simulated components (HSS)
[Figure: two variants of a test bed with simulated UEs in front of a P-CSCF and S-CSCF (SIP over Gm and Mw) and an HSS (Diameter over Cx): in (a) the HSS is a real component inside the SUT, in (b) it is replaced by a test tool.]
Components of a System Under Test can be replaced by test tools, in this case an HSS.
Performance Test Environment
Monitoring performance tests
[Figure: test systems (service requestor and service responder) around the SUT produce external performance data, while probes inside the SUT produce internal performance data; performance test monitoring tools feed on-line users with logging and alerts.]
A performance test tool shall support monitoring of an ongoing test.
General Performance Test Concepts
General Performance Test Concepts
Performance measurements are required during the whole life cycle:

          Development system   Delivery system       Production system
          Load                 Simulation            Monitoring
When:     Development          Pre-deployment        Post-deployment
Why:      Measure limits       Verify requirements   Confirm requirements
What:     Isolated services,   Complex traffic       Proactive monitoring
          critical services
General Performance Test Concepts
Recording of external and internal performance data
[Figure: external performance data recorders sit in the test systems (service requestor and service responder); an internal performance data recorder collects from probes inside the SUT.]
There are two types of performance data:
1. Data captured externally, outside the System Under Test (SUT).
2. Data captured internally, inside the SUT (with probes or profiling tools).
Requirements on Performance Test Specifications
Requirements on Performance Test Specifications
A performance test specification has many elements:
• Test objectives
• Test conditions
• Test configurations
• Test data specifications
• Test evaluation specifications
Requirements on Performance Test Specifications
The core of a test case is the service scenarios to be used
A performance test case requires working services to be tested. For example, a call setup scenario:
UA SIDE A UA-SIDE-B
==================== ====================
| |
INVITE ------------------------------|
|---------------------------- 100 INV
|---------------------------- 180 INV
PRACK ------------------------------|
|---------------------------- 200 PRA
|---------------------------- 180 INV
PRACK ------------------------------|
|---------------------------- 200 PRA
| |
| Ring timer 15 sec.
| |
|---------------------------- 200 INV
ACK -------------------------------|
| |
Hold timer 180 sec. |
| |
BYE -------------------------------|
|---------------------------- 200 BYE
| |
Exit timer 4 sec. |
| |
End
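A hedged sketch of how the scenario above could be expressed as data that a test tool replays; the step names and timer values follow the diagram, but the structure itself is an assumption rather than a Q.3930-defined format.

```python
# A hedged sketch: the call setup scenario above expressed as data a test
# tool could replay. Step names and timer values follow the diagram; the
# structure itself is an assumption, not a Q.3930-defined format.
CALL_SETUP_SCENARIO = [
    ("send", "INVITE"), ("recv", "100"), ("recv", "180"),
    ("send", "PRACK"),  ("recv", "200 PRA"),
    ("recv", "180"),    ("send", "PRACK"), ("recv", "200 PRA"),
    ("wait", 15.0),                          # ring timer, seconds
    ("recv", "200 INV"), ("send", "ACK"),
    ("wait", 180.0),                         # hold timer
    ("send", "BYE"), ("recv", "200 BYE"),
    ("wait", 4.0),                           # exit timer
]
```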
Workload Concepts
Workload Concepts
The description of what the measured system shall produce
A workload has three parts, each with many variations:
1. The mix of requested services and the weight of each service.
2. The amount of each requested service (can be described in several ways).
3. The time distribution of requested services, or the service request load.
The service request load can be defined in two ways (see the sketch below):
1. User session driven load (suited for web performance measurements).
2. Traffic rate driven load (suited for telecom performance measurements).
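Below is a minimal sketch of a traffic rate driven load: Poisson arrivals at a target rate combined with a weighted service mix. The mix, the rate, and the use of exponential inter-arrival times are illustrative assumptions.

```python
# A minimal sketch of a traffic rate driven load: Poisson arrivals at a
# target rate with a weighted service mix. All numbers are illustrative.
import random

SERVICE_MIX = {"REGISTER": 0.2, "CALL": 0.5, "SUBSCRIBE": 0.3}
RATE_PER_S = 100.0   # target service request load

def workload(duration_s: float):
    """Yield (start_time, service) pairs for one test run."""
    services, weights = zip(*SERVICE_MIX.items())
    t = 0.0
    while True:
        t += random.expovariate(RATE_PER_S)  # exponential inter-arrivals
        if t >= duration_s:
            return
        yield t, random.choices(services, weights)[0]

for start, service in workload(0.05):
    print(f"{start:.4f}s  {service}")
```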
Performance Data Processing
Performance Data Processing
Performance data processing has many steps
[Figure: a measured object produces a stream of measurement values V1 ... V8 along a time axis.]
Collected measurement data are usually time series of measurement values.
Different measurement values are synchronized by timestamps.
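A minimal sketch of timestamp synchronization: two series sampled by different recorders are joined on a shared time grid. Rounding to whole seconds is an assumption; real tools use finer grids or interpolation.

```python
# A minimal sketch of timestamp synchronization: two series from different
# recorders joined on a shared one-second grid.
resp_times = [(10.2, 14.0), (11.1, 15.2), (12.3, 41.0)]   # (ts, ms)
cpu_util   = [(10.0, 35.0), (11.0, 37.0), (12.0, 88.0)]   # (ts, %)

cpu_by_second = {round(ts): util for ts, util in cpu_util}
aligned = [(round(ts), rt, cpu_by_second.get(round(ts)))
           for ts, rt in resp_times]
print(aligned)  # [(10, 14.0, 35.0), (11, 15.2, 37.0), (12, 41.0, 88.0)]
```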
Performance measurement data flow
[Figure: KPI production pipeline. A test job drives DATA CAPTURE (raw data, per the measurement config) -> DATA SELECTION (selected variables) -> DATA VALIDATION (valid data, with control data checked against the test condition config) -> KPI CALCULATION (KPI, per the KPI formula config) -> KPI ANALYSIS (analysed KPI, per the KPI analysis config) -> KPI EVALUATION (evaluated KPI, per the KPI evaluation config).]
Performance data processing has many steps; shown here for KPI production.
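A hedged sketch of the flow above as composable steps, with configuration passed into each stage; the function bodies are placeholders for the configured behaviour, and only the stages up to KPI calculation are shown.

```python
# A hedged sketch of the KPI production flow as composable steps; bodies
# are placeholders for configured behaviour.
def capture(test_job: dict) -> list[dict]:        # DATA CAPTURE -> raw data
    return test_job["samples"]

def select(raw: list[dict], variables: list[str]) -> list[dict]:
    return [{k: s[k] for k in variables} for s in raw]  # selected variables

def validate(rows: list[dict], test_cond: dict) -> list[dict]:
    return [r for r in rows                        # valid data only
            if r["load_tps"] >= test_cond["min_load_tps"]]

def kpi_calc(rows: list[dict]) -> float:           # KPI per formula config
    return sum(r["rt_ms"] for r in rows) / len(rows)

job = {"samples": [{"rt_ms": 14.0, "load_tps": 500},
                   {"rt_ms": 90.0, "load_tps": 120}]}
rows = validate(select(capture(job), ["rt_ms", "load_tps"]),
                {"min_load_tps": 400})
print(kpi_calc(rows))  # 14.0 - KPI analysis and evaluation would follow
```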
I end this presentation with another statement by Lord Kelvin:

“To measure is to know”
- Lord Kelvin

“To benchmark is to know”
- Michael Mild