2. Differences from in-house components
Interfaces (pre- and post-conditions) are not clearly specified.
No access to architecture or source code.
Black boxes to the component user.
Why use COTS?
3. Why COTS Testing
Failure of Ariane 5: the explosion resulted from insufficiently tested software reused from the Ariane 4 launcher.
5. Why rigorous evaluation of COTS?
Large number of alternative products.
Multiple stakeholders.
Large number of Quality criteria.
Compatibility with other products.
6. Why evaluation is difficult
Large number of evaluation criteria.
Different stakeholders usually hold different opinions.
Evaluation criteria are not easily measurable at evaluation time.
Gathering relevant information can be prohibitively expensive.
The COTS market changes fast, so evaluation must be repeated several times during the lifecycle.
Evaluation has to deal with uncertain information.
7. AHP Technique
Originally designed for the economics and political science domains.
Requires pairwise comparison of alternatives and pairwise weighting of selection criteria.
Enables consistency analysis of the comparisons and weights, making it possible to assess the quality of the gathered information.
8. AHP Technique (contd.)
Allows alternatives to be measured on a ratio scale, so we can determine how much better one alternative is than another (a worked sketch follows below).
Practically usable only if the numbers of alternatives and criteria are sufficiently low, because the comparisons are made by experts.
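A minimal sketch of the AHP arithmetic, assuming a hypothetical three-criteria example; the pairwise judgements, the geometric-mean weighting, and the random index are illustrative values, not taken from the slides.

import math

# Pairwise comparison matrix: A[i][j] = how much more important
# criterion i is judged to be than criterion j (example judgements).
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
n = len(A)

# Priority weights via the geometric mean of each row, normalised.
gm = [math.prod(row) ** (1.0 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Estimate the principal eigenvalue and the consistency ratio (CR);
# CR < 0.1 is conventionally taken as acceptable judgement quality.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
RI = 0.58                      # Saaty's random index for n = 3
CR = CI / RI

print("weights:", [round(w, 3) for w in weights], "CR:", round(CR, 3))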
9. Selection in practice
Follows three stages:
Informal screening against a set of requirements, using selection thresholds.
More systematic evaluation using the AHP process.
Detailed information gathering, involving testing, prototyping, and reading technical documents.
11. How to provide information to the user
Component meta-data approach.
Retro-components approach.
Component test bench approach.
Built-in test approach.
Component+ approach.
STECC strategy.
15. Component test bench approach
A set of test cases, called a test operation, is associated with each interface of a component.
A test operation defines the necessary steps for testing a specific method.
The concrete test inputs and expected test outputs are packaged in a test operation (a sketch follows below).
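A minimal sketch of the test-operation idea, assuming a hypothetical Calculator component with a divide method; the dataclass layout and names are illustrative, not the concrete test-bench notation.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestOperation:
    """Concrete inputs and the expected output for one method of an interface."""
    method: str
    inputs: tuple
    expected: Any

# Test operations associated with the (hypothetical) Calculator interface.
calculator_bench = [
    TestOperation("divide", (10, 2), 5),
    TestOperation("divide", (7, 0), ZeroDivisionError),
]

def run_bench(component: Any, bench: list) -> None:
    for op in bench:
        method: Callable = getattr(component, op.method)
        try:
            result = method(*op.inputs)
            ok = result == op.expected
        except Exception as exc:
            # An expected exception type counts as a pass.
            ok = isinstance(op.expected, type) and isinstance(exc, op.expected)
        print(op.method, op.inputs, "PASS" if ok else "FAIL")

class Calculator:                      # stand-in for the COTS component
    def divide(self, a, b):
        return a / b

run_bench(Calculator(), calculator_bench)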
19. Component+ approach
(Diagram: a built-in-testing enabled component exposes its functionality through an interface; an external tester made up of a test case generator and a test executor exercises it, with a handler providing failure recovery mechanisms.)
20. Disadvantages of BIT and Component+
Static nature.
They generally do not ensure that tests are conducted as required by the component user.
The component provider makes assumptions about the requirements of the component user, which again might be wrong or inaccurate.
21. STECC strategy
(Diagram: the tester queries the component's functionality and issues metadata requests to a server backed by a metadata DB, which in turn drives a test generator.)
24. Certifying COTS
When considering a candidate component, developers
need to ask three key questions:
Does component C fill the developer’s needs?
Is the quality of component C high enough?
What impact will component C have on system S?
27. Black-box Testing
To understand the behavior of a component, various inputs are executed and the outputs are analyzed.
To catch all types of errors, all possible combinations of input values would have to be executed.
To make testing feasible, test cases are selected randomly from the test case space (see the sketch below).
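A minimal sketch of random black-box test selection, assuming a hypothetical clamp(value, low, high) operation and an oracle describing its expected behavior; the input ranges are illustrative.

import random

def clamp(value, low, high):           # stand-in for the COTS operation
    return max(low, min(value, high))

def oracle(value, low, high, result):  # expected black-box behaviour
    return low <= result <= high and (result == value or result in (low, high))

random.seed(0)
for _ in range(1000):                  # sample the huge input space at random
    v = random.randint(-10**6, 10**6)
    lo = random.randint(-10**6, 10**6)
    hi = random.randint(lo, 10**6)     # keep low <= high
    assert oracle(v, lo, hi, clamp(v, lo, hi)), (v, lo, hi)
print("1000 random test cases passed")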
28. Black-box test reduction using input-output analysis
Random testing is not complete.
For complete functional testing, the number of test cases can be reduced by input-output analysis, which combines only the inputs that actually influence each output.
31. How to find I/O relationships
By static analysis or execution analysis of the program (a sketch of the resulting reduction follows).
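A minimal sketch of the reduction that input-output analysis enables, assuming a hypothetical component with inputs a, b, c and outputs x, y; in practice the relationship map would come from the static or execution analysis mentioned above.

from itertools import product

values = {"a": [0, 1], "b": [0, 5], "c": [-1, 1]}   # test values per input

# Exhaustive testing: every combination of every input.
exhaustive = list(product(*values.values()))

# I/O relationships: output x depends only on a and b; output y only on c.
io_relation = {"x": ["a", "b"], "y": ["c"]}

# Reduced suite: for each output, combine only the inputs it depends on.
reduced = {out: list(product(*(values[i] for i in inputs)))
           for out, inputs in io_relation.items()}

print("exhaustive:", len(exhaustive))                     # 2*2*2 = 8
print("reduced:", sum(len(v) for v in reduced.values()))  # 4 + 2 = 6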
32. Fault Injection
(Diagram: a fault simulation tool sits at the component's interface, intercepting requests and injecting erroneous or malicious inputs, and simulating exceptions or no-response faults.)
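A minimal sketch of interface-level fault injection, assuming a hypothetical parse_age component; the injected faults stand in for the erroneous or malicious inputs that a fault simulation tool would deliver.

import random

def parse_age(text):                   # stand-in for the COTS component
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

FAULTS = [None, "", "not-a-number", "1e6", "-5", "9" * 10000]

random.seed(1)
for _ in range(20):
    fault = random.choice(FAULTS)      # corrupt the request before delivery
    try:
        print("accepted", repr(fault)[:20], "->", parse_age(fault))
    except (TypeError, ValueError) as exc:
        print("rejected", repr(fault)[:20], "->", type(exc).__name__)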
33. Operational System Testing
Complements system-level fault injection.
The system is operated with random inputs (both valid and invalid).
Provides a more accurate assessment of COTS quality.
Ensures that a component is a good match for the system.
40. COTS testing for OS failures
(Diagram: a wrapper is placed between the COTS component and the operating system.)
41. Ballista approach
Based on the fault injection technique.
Test cases are generated from the parameter types of an interface.
Independent of internal functionality.
Testing is not complete.
43. Test value database (contd.)
Integer data type: 0, 1, -1, MAXINT, -MAXINT, selected powers of two, powers of two minus one, and powers of two plus one.
Float data type: 0, 1, -1, +/-DBL_MIN, +/-DBL_MAX, pi, and e.
Pointer data type: NULL, -1 (cast to a pointer), a pointer to freed memory, and pointers to malloc'ed buffers of various powers of two in size.
44. Test value database (contd.)
String data type (based on the pointer base type): NULL, -1 (cast to a pointer), a pointer to an empty string, a string as large as a virtual memory page, and a string 64 KB in length.
File descriptor data type (based on the integer base type): -1, MAXINT, and various descriptors: to a file open for reading, to a file open for writing, to a file whose offset is set to end of file, to an empty file, and to a file deleted after the file descriptor was assigned.
45. Test case generation
All combinations of the test values for the parameter types are generated (see the sketch below).
The number of test cases is therefore the product of the numbers of test values defined for each parameter's type.
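A minimal sketch of Ballista-style test case generation, assuming a hypothetical interface with two pointer parameters and one integer parameter; the value database below is a small illustrative subset of the slides' values.

from itertools import product

MAXINT = 2**31 - 1

TEST_VALUES = {                        # per-type test value database
    "int": [0, 1, -1, MAXINT, -MAXINT, 2**16, 2**16 - 1, 2**16 + 1],
    "pointer": ["NULL", "-1_as_ptr", "freed_ptr", "malloc_64"],
}

# Parameter types of the interface under test, e.g. copy(dst, src, length).
signature = ["pointer", "pointer", "int"]

# One test case per combination of values; the suite size is the product of
# the number of test values for each parameter's type (4 * 4 * 8 = 128 here).
test_cases = list(product(*(TEST_VALUES[t] for t in signature)))

print(len(test_cases), "test cases, e.g.", test_cases[0])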
46. Error propagation analysis
Interface propagation analysis works by injecting faults at one component.
This is done at the component integration level.
A known faulty input is injected into the system using a fault injector.
The components affected by this input are observed to see how they handle the faulty input (a sketch follows below).
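A minimal sketch of interface propagation analysis between two hypothetical components, a sensor producer and a controller consumer; the injected NaN reading is an illustrative faulty input.

def sensor():                  # upstream component: the fault is injected here
    return float("nan")        # corrupted reading instead of a valid value

def controller(reading):       # downstream component under observation
    if reading != reading:     # NaN check: does it handle the bad input?
        return "safe-mode"
    return "setpoint=" + str(reading * 2)

faulty = sensor()              # propagate the faulty output downstream
print("controller reacted with:", controller(faulty))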
48. Middleware
An application's execution and its middleware cannot be divorced in any meaningful way.
To predict the performance of an application component, the performance of its middleware must be analyzed.
49. Performance prediction methodology
Application performance prediction is a three-step process:
Obtaining technology performance measures.
Analyzing architecture-specific behavioral characteristics.
Analyzing application-specific behavioral characteristics.
54. Effect of database access through middleware
(Diagram: a container hosting a session bean and an entity bean, both of which access the database.)
The performance of the entity bean architecture is less than 50% of the performance of the session-bean-only architecture.
55. Effect of server threads
Performance increases from 2 to 32 threads, stabilizes between roughly 32 and 64 threads, and gradually decreases as more threads are added, due to contention.
56. Effect of client request load
Client response time increases with the concurrent client request rate, due to contention for server threads.
57. Effect of database contention
Database contention degrades performance to between 20% and 49% of its uncontended level.
60. Load Testing
Simply performance testing under various loads.
Performance is measured as connections per second (CPS), throughput in bytes per second, and round-trip time (RTT).
61. Load Test Application
(Diagram: a load-test application connected over Ethernet to the system under test, which consists of a web server, an application server, and a database server.)
62. Testing strategy
Load tests will be conducted in three phases (a sketch of phase 2 follows the list):
1. Consumption of server resources as a function of the volume of incoming requests will be measured.
2. Response time for sequential requests will be measured.
3. Response time under a concurrent client request load will be measured.
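A minimal sketch of phase 2 (response time for sequential requests), assuming a hypothetical endpoint at http://localhost:8080/; the URL and request count are illustrative.

import time
import urllib.request

URL = "http://localhost:8080/"         # hypothetical system under test
N = 100

round_trips = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()                    # drain the response body
    round_trips.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print("requests/sec:", round(N / elapsed, 1))
print("mean RTT (ms):", round(1000 * sum(round_trips) / N, 2))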
71. I-BACCI process
1. Decompose the binary file of the component and filter out trivial information.
2. Compare the code sections between the two versions.
3. Identify the glue code functions.
4. Identify change propagation into other components and the system.
5. Select test cases that cover only the affected glue code functions (the functions in the firewall); a sketch of steps 2 and 5 follows the list.
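A minimal sketch of steps 2 and 5, comparing function-level code sections by hash and selecting only the test cases that exercise changed functions; the function bytes and the test-to-function map are hypothetical stand-ins for what a real binary analysis would extract.

import hashlib

def section_hashes(functions):
    return {name: hashlib.sha256(code).hexdigest() for name, code in functions.items()}

old_version = {"connect": b"\x55\x89\xe5", "send": b"\x31\xc0", "close": b"\xc3"}
new_version = {"connect": b"\x55\x89\xe5", "send": b"\x31\xdb", "close": b"\xc3"}

old_hashes, new_hashes = section_hashes(old_version), section_hashes(new_version)
changed = {name for name, h in new_hashes.items() if old_hashes.get(name) != h}

# Glue code test cases mapped to the functions they exercise (hypothetical).
test_map = {"test_login": {"connect", "send"}, "test_logout": {"close"}}

selected = [t for t, funcs in test_map.items() if funcs & changed]
print("changed functions:", changed)   # {'send'}
print("tests to re-run:", selected)    # ['test_login']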
73. Methods for understanding
Binary reverse engineering.
Interface probing.
Partial automation of interface probing.
74. Binary reverse engineering
Derives the design structure (call graph, control flow graph) from binary code.
Source code can also be partially extracted using decompilation.
Decompiled source code has no comments, and its variable names are not meaningful.
Licenses often forbid decompilation back to source code.
75. Interface probing
The system developer designs a set of test cases, executes them, and analyzes the outputs.
Done in an iterative manner.
76. Disadvantages
A large number of test cases have to be generated and analyzed.
Some properties may require significant probing, which can be tedious, labor-intensive, and expensive.
Developers may miss certain limitations and make incorrect assumptions.
77. Partial automation of interface probing
Based on interface probing.
Test cases are generated based on scenarios.
Testing is done in three phases (a sketch follows the list):
1. Scenario description phase.
2. Search space specification phase.
3. Test case generation phase.
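A minimal sketch of the three phases, assuming a hypothetical search(query, limit) interface being probed; the scenario wording and the search space are illustrative.

from itertools import product

# Phase 1: scenario description -- the property being probed.
scenario = "search returns at most `limit` results for any query"

# Phase 2: search space specification -- candidate values per parameter.
search_space = {
    "query": ["", "a", "a" * 1024],
    "limit": [0, 1, 100],
}

# Phase 3: test case generation -- enumerate the specified space.
test_cases = [dict(zip(search_space, combo))
              for combo in product(*search_space.values())]

def search(query, limit):              # stand-in for the probed component
    return [query] * min(limit, 10)

for tc in test_cases:
    assert len(search(**tc)) <= tc["limit"], (scenario, tc)
print(len(test_cases), "probing test cases passed")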