Automated Test Suite Generation for Time-Continuous Simulink Models
1. SnT Software Verification and Validation Lab (SVV)
Automated Test Suite Generation for
Time-Continuous Simulink Models
Reza Matinnejad
Shiva Nejati
Lionel Briand
SnT Center, University of Luxembourg
Thomas Bruckmann
Delphi Automotive Systems, Luxembourg
8. Incompatibility with the Underlying Technique
• These techniques rely on SAT/constraint solvers and
inherit their limitations in handling:
• Time-continuous blocks
• Complex mathematical functions
• Floating-point operations
9. Simulink Testing Challenge II
Low Fault-Revealing Ability
Existing testing techniques make unrealistic
assumptions about test oracles
10. Low Fault-Revealing Ability
Challenge
• Testing is mainly driven by structural coverage
• Structural coverage might be effective when automated
test oracles are available
• Test oracles are likely to be manual in practice
• Covering a fault may not help reveal it
11. Low Fault-Revealing Ability
Example
[Figure: correct vs. faulty model outputs for two test inputs. One
test input covers the fault and is likely to reveal it; the other
covers the fault but is very unlikely to reveal it.]
14. Search-Based Test Generation
Initial test suite → slightly modifying each test input →
repeat until maximum resources spent
Search Procedure:
S ← Initial Candidate Solution
repeat until maximum resources spent:
    R ← Tweak(S)
    if Fitness(R) > Fitness(S):
        S ← R
return S
Fitness is guided by output-based heuristics.
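The search procedure above is a standard (1+1) hill climber. A minimal sketch in Python, where `tweak` and `fitness` are hypothetical user-supplied callbacks and the toy usage below is mine, not from the talk:

```python
import random

def hill_climb(initial, tweak, fitness, budget=1000):
    """Generic (1+1) search: repeatedly tweak the candidate and keep
    the tweaked copy whenever its fitness improves."""
    s = initial
    for _ in range(budget):          # "repeat until maximum resources spent"
        r = tweak(s)                 # slightly modify the test input
        if fitness(r) > fitness(s):  # keep the better candidate
            s = r
    return s

# Toy usage: maximize f(x) = -(x - 3)^2 starting from x = 0.
random.seed(1)
best = hill_climb(
    initial=0.0,
    tweak=lambda x: x + random.uniform(-0.1, 0.1),
    fitness=lambda x: -(x - 3.0) ** 2,
)
```

In the talk's setting, a candidate is a test suite of input signals, `Tweak` perturbs the signal values, and `Fitness` rewards diverse or failure-prone outputs.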
18. Output Diversity -- Feature-Based
Signal features:
• value: instant-value (v), constant (n), constant-value (n, v)
• derivative: sign-derivative (s, n), increasing (n),
decreasing (n), extreme-derivatives
• second derivative: discontinuity, discontinuity with strict
local optimum, 1-sided discontinuity, 1-sided continuity with
strict local optimum
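As an illustration, some of the features above can be detected mechanically on a sampled signal. A sketch for two of them (the function names are mine, not from the talk):

```python
def has_increasing(signal, n):
    """True if the signal contains n consecutive strictly increasing
    steps -- the increasing(n) feature."""
    run = 0
    for a, b in zip(signal, signal[1:]):
        run = run + 1 if b > a else 0
        if run >= n:
            return True
    return False

def has_constant_value(signal, n, v, tol=1e-9):
    """True if the signal stays at value v for n consecutive samples
    -- the constant-value(n, v) feature."""
    run = 0
    for x in signal:
        run = run + 1 if abs(x - v) <= tol else 0
        if run >= n:
            return True
    return False
```

Each output signal can then be summarized by which features it exhibits, and the search rewards suites whose outputs exhibit many different feature combinations.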
19. Evaluation
How does the fault-revealing ability of our algorithm
compare with that of Simulink Design Verifier?
20. Simulink Design Verifier (SLDV)
• Underlying technique: model checking and SAT solvers
• Test objective: testing is guided by structural coverage
21. Our Approach vs. SLDV
[Table: for each of 22 faults, the number of fault-revealing runs of
our algorithm (out of 20) -- faults 1-11: 5, 14, 2, 20, 20, 20, 20,
20, 20, 15, 15; faults 12-22: 20, 16, 20, 11, 5, 20, 14, 17, 11, 20,
4 -- and whether SLDV found the fault.]
• Our approach outperformed SLDV in revealing faults
23. Conclusion
• We distinguished two challenges in Simulink model
testing: incompatibility and low fault-revealing ability
• We proposed two output-based test generation
algorithms for Simulink models: failure-based and
output diversity
• Our output diversity test generation algorithm
outperformed Simulink Design Verifier in revealing
faults in Simulink models
24. SnT Software Verification and Validation Lab (SVV)
Automated Test Suite Generation for
Time-Continuous Simulink Models
Reza Matinnejad (reza.matinnejad@uni.lu)
Shiva Nejati
Lionel Briand
SnT Center, University of Luxembourg
Thomas Bruckmann
Delphi Automotive Systems, Luxembourg
25. Incompatibility with the Underlying Technique
• These techniques rely on SAT/constraint solvers and
inherit their limitations in handling:
• Time-continuous blocks
• Complex mathematical functions
• Floating-point operations
• Supporting library code and system functions is
cumbersome
26. Output Diversity vs. Coverage-Based
[Figure: the output of a test case generated based on output
diversity vs. one generated based on structural coverage, shown
against the correct and faulty model outputs.]
The correct output signal is not required for test generation!
27. Existing Simulink Testing Techniques
• Underlying technology: model checking, SAT/constraint solvers
• Test oracle: specified oracles, manual oracles, implicit oracles
• Test objective: violating assertions, structural coverage
28. Incompatibility Issues due to Underlying Technology
[Diagram repeated from the previous slide, highlighting the
underlying technology: model checking and SAT/constraint solvers.]
29. Test Oracle Assumption
[Diagram repeated, highlighting manual oracles and structural
coverage as test oracle and test objective.]
The effectiveness of coverage-driven test generation
is not yet ascertained for Simulink testing!
30. Incompatibility Issues due to Underlying Technology (cont.)
• Model checking is not applicable to Simulink
models with time-continuous blocks
• Constraint solvers are not effective at handling
floating-point operations (e.g., trigonometric functions or
square root)
• Supporting library code and system functions is
cumbersome
31. Low Fault-Revealing Ability
when Test Oracles are Manual (cont.)
• When test oracles are manual, the existing
techniques focus only on structural coverage
• Structural coverage, although necessary, is not
sufficient to generate fault-revealing test cases
for Simulink models
32. Manual Test Oracle
[Figure: a correct model and a faulty model (Sum blocks with
constants 0.051, -0.05, 100, 0.8, 1), and their outputs for a test
input generated based on coverage.]
• For manual test oracles, to be able to reveal faults, test
outputs should noticeably deviate from the correct output
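Assuming sampled output signals, one simple proxy for how "noticeable" a deviation is would be the largest pointwise gap between the faulty and correct outputs. A sketch, not the paper's metric:

```python
def max_deviation(faulty_out, correct_out):
    """Largest pointwise gap between two sampled output signals: a
    crude proxy for how visibly a faulty output deviates from the
    correct one when a human inspects the plots."""
    return max(abs(a - b) for a, b in zip(faulty_out, correct_out))

# A deviation of 2.0 at one sample, zero elsewhere.
gap = max_deviation([0.0, 1.0, 2.0], [0.0, 3.0, 2.0])
```

A coverage-driven test may yield a gap near zero even though it executes the faulty block, which is exactly the failure mode the slide illustrates.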
33. Why does SLDV perform poorly compared to our approach?
• Though the outputs produced by SLDV cover faulty parts of the
models, they either do not deviate or only slightly deviate from the
correct output
[Figure: test inputs generated by our algorithm vs. test inputs
generated by SLDV, plotted over 0 to 0.05 s.]
• We conjecture that SLDV's poor performance is due to its test
input generation strategy
34. Simulink Testing Challenges (CPS)
• Mixed discrete-continuous behavior (combination of
algorithms and continuous dynamics)
• Inputs/outputs are signals (functions over time)
• Simulation is inexpensive but not yet systematically
automated
• Partial test oracles
35. Signal Segments Adaptation to Model Coverage
[Figure: input signals with P = 1, P = 2, ..., P = 7 segments.]
• The algorithm starts from an initial P (e.g., P = 1) and gradually
increases P, only if
• Coverage has reached a plateau below 100%
• Coverage actually increased the last time the algorithm
increased P
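The adaptation rule can be sketched as a loop, where `run_search(P)` is a hypothetical callback that runs test generation with P segments per input signal and reports the structural coverage achieved:

```python
def adapt_segments(run_search, p_init=1, p_max=16):
    """Gradually raise the segment count P, but only while coverage is
    below 100% and the previous increase of P actually improved it."""
    p = p_init
    prev_cov = run_search(p)
    while prev_cov < 1.0 and p < p_max:
        cov = run_search(p + 1)
        if cov <= prev_cov:   # last increase of P did not help: stop
            break
        p, prev_cov = p + 1, cov
    return p, prev_cov

# Toy usage: coverage grows with P and saturates at 100%.
p, cov = adapt_segments(lambda p: min(1.0, 0.4 + 0.2 * p))
```

Keeping P as small as coverage allows keeps the input signals simple and the search space tractable.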
36. Vector-Based Output Diversity
Objective Function: Ov
• Generates a test suite whose test outputs maximize the
vector-based diversity function Ov:
[Figure: test suite outputs TSO for five test cases TC1-TC5 (q = 5).]
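One natural reading of Ov is the sum of pairwise Euclidean distances between the suite's sampled output signals, which the search then maximizes. A sketch; the paper's exact normalization may differ:

```python
import math

def vector_diversity(outputs):
    """Vector-based diversity of a test suite: the sum of pairwise
    Euclidean distances between its sampled output signals."""
    total = 0.0
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            total += math.dist(outputs[i], outputs[j])
    return total

# Test suite outputs TSO for q = 5 test cases, each a sampled signal.
tso = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [2, 0, 2], [1, 2, 1]]
score = vector_diversity(tso)
```

A suite of identical outputs scores zero; spreading the outputs apart in signal space raises the score.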
37. Feature-Based Output Diversity
Objective Function: Of
• Generates a test suite whose test outputs maximize the
feature-based diversity function Of:
[Figure: test suite outputs TSO for five test cases TC1-TC5 (q = 5).]
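Of works on feature vectors rather than raw signal vectors. A sketch with a deliberately crude feature extraction (the paper uses the richer feature set shown earlier, and a distance-based Of rather than this distinct-count stand-in):

```python
def signal_features(signal):
    """Crude feature vector for a sampled signal: counts of
    increasing, decreasing, and constant steps."""
    inc = dec = const = 0
    for a, b in zip(signal, signal[1:]):
        if b > a:
            inc += 1
        elif b < a:
            dec += 1
        else:
            const += 1
    return (inc, dec, const)

def feature_diversity(outputs):
    """Feature-based diversity (sketch): the number of distinct
    feature vectors exhibited by the suite's outputs."""
    return len({signal_features(s) for s in outputs})

# Five test-case outputs (q = 5) exhibiting four distinct shapes.
tso = [[0, 1, 2], [2, 1, 0], [0, 0, 0], [0, 1, 0], [5, 6, 7]]
distinct = feature_diversity(tso)
```

The point of the abstraction: two outputs far apart as vectors but with the same shape (e.g., [0, 1, 2] and [5, 6, 7]) count as similar, so the search is pushed toward genuinely different signal shapes.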
38. RQ1: Sanity
• Our algorithm with both objective functions performed
significantly better than Random for all the test suite sizes
39. RQ2: Vector-Based vs. Feature-Based
• Feature-based diversity (Of) performed better than
vector-based diversity (Ov) for all the test suite sizes