This document discusses automated requirements-based testing for ISO 26262. It describes the software verification phases defined in ISO 26262 and the methods for deriving test cases from requirements. Requirements-based testing generates tests from requirements to ensure requirements coverage and traceability. The document outlines a process for automated test generation, execution and traceability that provides evidence of testing in accordance with ISO 26262.
Automated Requirements-Based Testing for ISO 26262
Copyright 2019 – QA Systems GmbH www.qa-systems.com
ISO 26262 SW Verification Phases

[Diagram: the V-model of ISO 26262 Part 6, "Product Development at the Software Level", Phases 9–11, Tables 7–15. Design flow: SW Safety Requirements → SW Architectural Design → SW Unit Design → Code Implementation, with Configuration & Calibration Data feeding in. Verification flow: SW Unit Verification (complies with unit design & fulfils ASIL SW requirements; safety measures properly implemented; no undesired functionality or functional safety properties), SW Integration & Verification (fulfils architectural design; safety measures properly implemented; no undesired functionality or functional safety properties), Testing of the Embedded SW (fulfils safety-related requirements in target environment; no undesired functionality or functional safety properties).]
Requirements Verification Method
ISO 26262 Table 7 – Methods for software unit verification
ISO 26262 Table 10 – Methods for verification of software integration
ISO 26262 Table 14 – Methods for tests of the embedded software
Deriving Test Cases from Requirements
ISO 26262 Table 8 – Methods for deriving test cases for software unit testing
ISO 26262 Table 11 – Methods for deriving test cases for software integration testing
ISO 26262 Table 15 – Methods for deriving test cases for the test of the embedded software
Requirements Based Testing (RBT)

Requirements
• Decomposed
• Correct
• Complete
• Unambiguous
• Logically consistent

Tests
• Pre-conditions
• Inputs
• Expected behaviours
• Expected outputs
• Post-conditions

[Diagram: V-model linking Safety Requirements → Architectural Design → Unit Design → Code (with Configuration & Calibration Data), verified by Unit Test, Integration Test and Embedded Test.]

Requirements Coverage: % requirements verified by tests; <-> traceability to tests
Test Coverage: % tests executed & passing; <-> traceability to requirements
Code Coverage: % code executed by tests; <-> traceability to requirements; <-> traceability to tests
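The test anatomy above (pre-conditions, inputs, expected outputs, post-conditions, traced to a requirement) can be sketched as a plain C unit test. This is a minimal illustration only: the function `limit_speed()` and the requirement ID "REQ-SW-042" are invented for the sketch, not taken from the presentation.

```c
#include <assert.h>

/* Illustrative unit under test -- `limit_speed()` and "REQ-SW-042"
 * are invented for this sketch. The assumed requirement reads:
 * "the commanded speed shall be clamped to the range [0, 180] km/h". */
static int limit_speed(int requested_kmh)
{
    if (requested_kmh < 0)   return 0;
    if (requested_kmh > 180) return 180;
    return requested_kmh;
}

/* Test case traced to REQ-SW-042.
 * Pre-condition:   none (pure function)
 * Input:           requested speed above the upper bound
 * Expected output: speed clamped to 180
 * Post-condition:  return value stays within [0, 180] */
static void test_req_sw_042_upper_bound(void)
{
    int actual = limit_speed(250);
    assert(actual == 180);                 /* expected output */
    assert(actual >= 0 && actual <= 180);  /* post-condition  */
}
```

Each element of the test is stated explicitly, which is what makes the test case traceable back to its requirement and reviewable for coverage of that requirement.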
Manual Test Generation

Test cases crafted manually from requirements
Can be hard work! – even with powerful test tools
• Insufficiently validated requirements
  (decomposed, correct, complete, unambiguous, logically consistent)
• High reliance on structural code coverage & reverse engineering
• Complexity of test vectors
  (pre-conditions, inputs, expected behaviours & outputs, post-conditions)
• Boundary of low-level requirements ≠ usable test case vectors
• Test framework: drivers, dependencies & datasets
• Gaps & overlaps
  (defensive programming, private/protected code, etc.)
• Equivalence classes
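To make the equivalence-class and boundary-value effort concrete, here is a small hand-written sketch. The unit under test is hypothetical: a validity check for a 10-bit ADC reading, assumed valid in 0..1023. The manual work is choosing one representative per equivalence class plus the values on and adjacent to each boundary.

```c
#include <assert.h>

/* Hypothetical unit under test: a 10-bit ADC reading is assumed
 * valid in the range 0..1023 (assumption for this sketch). */
static int reading_is_valid(int raw)
{
    return raw >= 0 && raw <= 1023;
}

/* One representative per equivalence class, plus both boundaries. */
static void test_reading_classes_and_boundaries(void)
{
    assert(reading_is_valid(-500) == 0);  /* class: below range   */
    assert(reading_is_valid(-1)   == 0);  /* boundary: just below */
    assert(reading_is_valid(0)    == 1);  /* boundary: lower edge */
    assert(reading_is_valid(512)  == 1);  /* class: in range      */
    assert(reading_is_valid(1023) == 1);  /* boundary: upper edge */
    assert(reading_is_valid(1024) == 0);  /* boundary: just above */
}
```

Even for a one-line predicate this takes six vectors; for real units with several interacting parameters the vector count is what makes manual test generation hard work.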
So… How to Automate?

[Diagram: Requirements, Tests, Code and Configuration & Calibration Data, with the links between them in question.]
Test Case Generation: Generation from Requirements

Test cases generated from requirements
• Very limited capability from: NL, SNL, PDL, use case scenarios, mathematical specs
• More capability from: models (e.g. MBT with UML)
Test Case Generation: Generation from Code (AutoTest)

Test cases generated from code
• Test vectors from path solving
• Intelligent optimisation
• Full test framework: pre-conditions, inputs, expected behaviours, expected outputs & post-conditions
• Tests generated for maintainability & traceability
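A rough picture of what "test vectors from path solving" means: given a unit with branching decisions, a path solver finds concrete input values that drive execution down each feasible path. The function `classify()` and its thresholds below are invented for the sketch; with two independent decisions there are four feasible paths, hence four solved vectors.

```c
#include <assert.h>

/* Illustrative unit with two independent decisions, giving four
 * feasible paths. `classify()` and its thresholds are invented. */
static int classify(int temp, int pressure)
{
    int code = 0;
    if (temp > 100)    code |= 1;  /* decision A */
    if (pressure > 50) code |= 2;  /* decision B */
    return code;
}

/* Path-solved vectors: one input pair per feasible path. */
static void test_all_paths(void)
{
    assert(classify(20,  10) == 0);  /* A false, B false */
    assert(classify(150, 10) == 1);  /* A true,  B false */
    assert(classify(20,  80) == 2);  /* A false, B true  */
    assert(classify(150, 80) == 3);  /* A true,  B true  */
}
```

A generator also records the expected outputs it observed, which is why generated baselines must still be reviewed against the requirements: the solver proves the paths are reachable, not that the observed behaviour is correct.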
Why use Coverage & Traceability?

Standard compliance – the 100% picture
• Bi-directional requirements traceability
• All executable code is justified or tested
• Evidence of success is: passing tests + traceability

Helps ensure completeness
• Changed requirements capture & validation
• Just enough code changes
• Test case design updates
• RBT processes can be most effective when iterative
AutoTest + Trace for ISO 26262
AutoTest Generation

Flexible application
• GUI or CLI invocation
• Complete suite of passing unit tests
• Additional test cases to fill gaps
• Black-box cluster integration test through public functions
• White-box unit isolation test of static functions
• Uses Cantata workspace preferences

Test cases exercise all paths through the code
• Entry-point
• Statement
• Decision
• MC/DC (unique cause)

Test cases are complete & maintainable for full control
• All required inputs: parameters + accessible data
• All expected outputs: parameters + accessed data + call order
• Each test case's path-solving purpose explained
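As a worked illustration of MC/DC (unique cause): for a decision with N conditions, N+1 vectors suffice, and each condition must be shown to independently flip the decision outcome while all other conditions are held fixed. The decision `(a && (b || c))` below is invented for this sketch.

```c
#include <assert.h>

/* Decision with three conditions; invented for this sketch. */
static int gate(int a, int b, int c)
{
    return (a && (b || c)) ? 1 : 0;
}

/* Unique-cause MC/DC: N+1 = 4 vectors for N = 3 conditions.
 * Each commented pair differs in exactly one condition and flips
 * the decision outcome, demonstrating that condition's
 * independent effect. */
static void test_mcdc_unique_cause(void)
{
    assert(gate(1, 1, 0) == 1);  /* baseline: decision true            */
    assert(gate(0, 1, 0) == 0);  /* vs baseline: only a flips -> false */
    assert(gate(1, 0, 0) == 0);  /* vs baseline: only b flips -> false */
    assert(gate(1, 0, 1) == 1);  /* vs previous: only c flips -> true  */
}
```

Four vectors instead of the eight needed for exhaustive condition combinations is what makes MC/DC tractable for the higher ASIL levels while still exercising every condition's effect.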
AutoTest Process

[Diagram: Automatic Test Generation – AutoTest works on a copy of the code and generates tests plus Makefiles. Automatic Test Execution – the build instruments the code, the test executable is run, and a report of the test results is produced.]
Example AutoTest Exercise

Source files
• 541 source files
• 807 C functions
• 55,151 executable LoC
• 4,901 McCabe total complexity

Tested source files
• 93% fully executed
• 95% fully executed
• 95%+ fully executed
• 5,035 total test cases
• 336,355 total checks

Timing
• Generation: 2.03 hours
• Execution: 36 minutes
Requirements Trace Closes Loop

[Diagram: a Requirements Management Tool exchanges requirements with the Test Tool via .csv, ReqIF or Excel. The test tool generates tests linked to requirements, and returns test information (traced requirements, test status and code coverage), providing full bi-directional requirements traceability evidence.]
Easy Linking in Cantata Trace

• Bi-directional drag-and-drop interface that immediately creates links on a server
• Whole test scripts linked to requirements
• Individual test cases linked to requirements
3 Part Automation

1. Automatic Test Vector Generation
• Test case vectors from code exercising all paths (up to MC/DC coverage)
• Sets input parameters & data throughout test execution
• Checks expected vs actual data, input & output parameters and call order

2. Automated Test Execution
• Continuous integration build, run and reporting

3. Automated Traceability & Coverage Data Production
• Complete requirements imported/exported for testing
• AutoTest cases generated with traceable descriptions
• Test status, requirements traceability & structural coverage evidence
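One technique behind "checks call order" can be sketched with plain C stubs: each stub appends a token to a log, and the test asserts the exact sequence. Everything here (`handle_fault()`, the stub names, the ordering rule) is invented for the illustration and is not the Cantata API.

```c
#include <assert.h>
#include <string.h>

/* Stubs append a token to a log so the test can check call order. */
static char call_log[64];

static void disable_actuator(void) { strcat(call_log, "disable;"); }
static void raise_alarm(void)      { strcat(call_log, "alarm;"); }

/* Illustrative unit under test: on a fault, the actuator must be
 * disabled BEFORE the alarm is raised (assumed ordering rule). */
static void handle_fault(void)
{
    disable_actuator();
    raise_alarm();
}

static void test_fault_call_order(void)
{
    call_log[0] = '\0';
    handle_fault();
    /* Passes only if both stubs were called, in the required order. */
    assert(strcmp(call_log, "disable;alarm;") == 0);
}
```

Generated tests record such observed sequences as expected values, so a later refactoring that reorders safety-relevant calls fails the regression suite immediately.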
Complete 3 Way Analysis

[Diagram: Requirements, Tests, Code and Configuration & Calibration Data linked in a triangle.]

Requirements Coverage
• See requirements coverage in your requirements management & test tools
• Use the same tool for all trace data

Test Coverage
• Run tests when not executed (continuous integration and testing helps a lot)
• Fix tests when they fail

Code Coverage
• When you have gaps, identify if the code is: dead/redundant, unreachable, or deactivated (not used in this context)
• If not, then add a test, which needs to be traced to [new] requirements
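A typical source of the coverage gaps mentioned above is defensive programming: a branch that no current caller can reach. The example below is invented; the defensive `default` leaves a structural coverage gap that must either be justified (defensive, unreachable in context) or closed with a test traced to a requirement.

```c
#include <assert.h>
#include <string.h>

/* Invented example: a defensive `default` branch that no current
 * caller can reach, leaving a structural coverage gap. */
enum motor_state { MOTOR_IDLE, MOTOR_RUN };

static const char *state_name(enum motor_state s)
{
    switch (s) {
    case MOTOR_IDLE: return "idle";
    case MOTOR_RUN:  return "run";
    default:         return "invalid";  /* defensive; unreachable today */
    }
}

/* Closing the gap: a white-box test can force the defensive branch
 * with an out-of-range value, but that test then needs tracing to a
 * (possibly new) robustness requirement. */
static void test_state_name_defensive_branch(void)
{
    assert(strcmp(state_name((enum motor_state)99), "invalid") == 0);
}
```

The decision is exactly the one on the slide: justify the gap as defensive/unreachable/deactivated, or add a traced test; untraced tests added purely to push a coverage number defeat the purpose of the 3-way analysis.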
Further Enhancements?
ISO 26262 Table 7 – Methods for software unit verification
ISO 26262 Table 10 – Methods for verification of software integration
ISO 26262 Table 14 – Methods for tests of the embedded software
In Development
Thank you
Vielen Dank
Editor's Notes
9.4.4 (unit) & 10.4.4 (Integration)
To evaluate the completeness of verification and to provide evidence that the objectives for unit testing are adequately achieved, the coverage of requirements at the software unit / architectural level shall be determined and the structural coverage shall be measured in accordance with the metrics as listed in Table 9 / Table 12.
In this example, Cantata AutoTest was run on over 55 kloc of executable C code.
[NEXT]
A complete suite of Cantata in-depth isolation unit tests for 100% entry-point, statement and decision coverage were generated in just over 2 hours.
These tests were then executed using the automatic Cantata Makefile structure in just over ½ an hour.
[NEXT]
The code coverage achieved on these baseline tested source files was incredibly high. With more than 6 dynamic checks per line of code, and a remarkably optimal set of only 5,000 test cases for over 4,900 decision outcomes (McCabe Cyclomatic complexity), this provided a highly efficient and effective baseline safety net of unit tests.
The Test Results Summary reports the overall results for tests executed using Cantata Makefiles, and all the usual powerful Cantata results diagnostics are available for the baseline tests. The only failures were the 40 files where coverage targets were not met.