This presentation shares expertise and insight on Automated Low Level Requirements Testing for DO-178C:
• DO-178C SW Verification Process
• Software Testing Activities
• Software Testing Stages
• Test Coverage Analysis
• Software Testing Activities
• Structural Coverage Analysis
• Requirements Based Test Selection
• Manual Test Generation
• So…How to Automate?
• Generation from Requirements
• Generation from Code
• AutoTest & Trace for DO-178C
• AutoTest Generation
• AutoTest Process
• AutoTest DO-178C Use Cases
For more information, please refer to: https://www.qa-systems.com/
Automated Low Level Requirements Testing for DO-178C
DO-178C SW Verification Process
Inputs to verification: System Requirements, Software Requirements, Software Architecture, Source Code, Executable Object Code, Parameter Data, Trace Data, Software Verification Plan
Outputs of verification: Software Verification Cases and Procedures (DO-178C 11.13), Software Verification Results (DO-178C 11.14), Associated Trace Data (DO-178C 11.21)
Software Testing Activities 1
DO-178C Section 6.4
Requirements Based Test Objectives
a. The Executable Object Code complies with the high-level requirements
b. The Executable Object Code is robust with the high-level requirements
c. The Executable Object Code complies with the low-level requirements
d. The Executable Object Code is robust with the low-level requirements
e. The Executable Object Code is compatible with the target computer
Software Testing Stages
DO-178C Section 6.4, Table A-6
Low Level (unit) Tests: to verify the implementation of low-level requirements and derived low-level requirements
SW Integration Tests: to verify the interrelationships between software requirements and components, and to verify the implementation of the software requirements and software components within the software architecture
HSI Tests: to verify the correct operation of the software in the target environment
(Diagram: System Requirements flow down through High Level Requirements to Low Level Requirements, Derived Low Level Requirements, Parameter Data and Code.)
Software Testing Activities 2
DO-178C Section 6.4.4.1
Test Coverage Analysis Objectives
a. Analysis, using the associated Trace Data, to confirm that test cases exist for each software requirement
b. Analysis to confirm that the test cases satisfy the criteria for normal and robustness testing as defined in section 6.4.2
c. Resolution of any deficiencies identified in the analysis. Possible solutions are adding or enhancing test cases.
d. Analysis to confirm that all the test cases, and thus all the test procedures, used to achieve structural coverage are traceable to requirements.
Test Coverage Analysis
Requirements Coverage: % of requirements verified by tests <-> traceability to tests
Test Coverage: % of tests executed & passing <-> traceability to requirements
(Diagram: Low Level Requirement, Code, Low Level (unit) Test and Parameter Data linked by traceability.)
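Both percentages are derived from the same trace data. As a minimal sketch of where that trace data can originate (hypothetical requirement IDs LLR-101 and LLR-102, and a plain assert-based harness rather than any particular tool's test API), each low-level test case below is tagged with the requirement it verifies:

    #include <assert.h>

    /* Unit under test: a hypothetical saturating 16-bit adder. */
    static int sat_add(int a, int b)
    {
        long long sum = (long long)a + (long long)b;
        if (sum > 32767)  return 32767;
        if (sum < -32768) return -32768;
        return (int)sum;
    }

    /* Traces to LLR-101: "Out-of-range sums shall saturate at +32767/-32768." */
    static void test_llr_101_saturation(void)
    {
        assert(sat_add(32000, 1000) == 32767);
        assert(sat_add(-32000, -1000) == -32768);
    }

    /* Traces to LLR-102: "In-range sums shall be exact." */
    static void test_llr_102_normal_range(void)
    {
        assert(sat_add(100, 200) == 300);
    }

    int main(void)
    {
        test_llr_101_saturation();   /* requirements coverage: LLR-101 verified */
        test_llr_102_normal_range(); /* requirements coverage: LLR-102 verified */
        return 0;                    /* test coverage: 2 of 2 tests executed & passing */
    }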
Software Testing Activities 3
DO-178C Section 6.4.4.2
Structural Coverage Analysis Objectives
a. Analysis of the structural coverage information collected during requirements-based testing to confirm that the degree of structural coverage is appropriate to the software level.
b. Structural coverage analysis may be performed on the Source Code, object code or Executable Object Code. {NB: additional verification for additional code not directly traceable to Source Code}
c. Analysis to confirm that the requirements-based testing has exercised the data and control coupling between code components.
d. Structural coverage analysis resolution
Software Testing Activities 4
DO-178C Section 6.4.4.3
Structural Coverage Resolution Objectives
a. Shortcomings in requirements-based test cases or procedures.
b. Inadequacies in software requirements.
c. Extraneous code, including dead code.
d. Deactivated code:
1. Category One (not intended to be executed in any configuration)
2. Category Two (executed only in certain configurations)
Structural Coverage Analysis
Code Coverage: % of code executed by tests <-> traceability to requirements <-> traceability to tests
(Diagram: Low Level Requirement, Code, Low Level (unit) Test and Parameter Data linked by traceability.)
Requirements Based Test Selection
DO-178C Section 6.4.2
1. Specific test cases should be developed to include normal range test cases and robustness (abnormal range) test cases.
2. The specific test cases should be developed from the software requirements and the error sources inherent in the software development processes. Note: Robustness test cases are requirements-based.
3. Test procedures are generated from the test cases.
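For example (a hedged sketch, not drawn from the deck; the function, requirement and values are all hypothetical): given a low-level requirement "commanded pitch shall be clamped to +/-30 degrees, with 0 substituted for invalid input", normal range test cases exercise in-range and boundary values, while robustness test cases exercise abnormal inputs:

    #include <assert.h>
    #include <math.h>

    /* Hypothetical unit under test for the clamping requirement. */
    static double clamp_pitch(double cmd_deg)
    {
        if (isnan(cmd_deg)) return 0.0;  /* robustness: safe default on bad input */
        if (cmd_deg > 30.0)  return 30.0;
        if (cmd_deg < -30.0) return -30.0;
        return cmd_deg;
    }

    int main(void)
    {
        /* Normal range test cases: in-range values and the boundaries. */
        assert(clamp_pitch(10.0) == 10.0);
        assert(clamp_pitch(30.0) == 30.0);
        assert(clamp_pitch(-30.0) == -30.0);

        /* Robustness (abnormal range) test cases: out-of-range and invalid inputs. */
        assert(clamp_pitch(90.0) == 30.0);
        assert(clamp_pitch(-1e9) == -30.0);
        assert(clamp_pitch(NAN) == 0.0);
        return 0;
    }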
Manual Test Generation
Test Cases Crafted Manually from Requirements
Can be hard work! – even with powerful test tools:
Insufficiently validated requirements (decomposed, correct, complete, unambiguous, logically consistent)
High reliance on structural code coverage & reverse engineering
Complexity of test vectors (pre-conditions, inputs, expected behaviours & outputs, post-conditions)
Boundary of low-level requirements ≠ usable test case vectors
Test framework: drivers, dependencies & datasets
Gaps & overlaps: defensive programming, private/protected code, etc.; equivalence classes (see the sketch below)
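To illustrate the last point, equivalence classes and requirement boundaries translate into concrete vectors roughly as follows (a hypothetical range check, invented for illustration): the input domain splits into three equivalence classes, and boundary value analysis adds vectors at and adjacent to each edge:

    #include <assert.h>

    /* Hypothetical LLR: valid engine speed is 0..10000 RPM inclusive. */
    static int rpm_valid(int rpm)
    {
        return rpm >= 0 && rpm <= 10000;
    }

    int main(void)
    {
        /* Equivalence classes: one representative vector per class. */
        assert(rpm_valid(-500) == 0);   /* class 1: below range */
        assert(rpm_valid(5000) == 1);   /* class 2: in range    */
        assert(rpm_valid(20000) == 0);  /* class 3: above range */

        /* Boundary values: vectors at and adjacent to each edge. */
        assert(rpm_valid(-1) == 0);
        assert(rpm_valid(0) == 1);
        assert(rpm_valid(10000) == 1);
        assert(rpm_valid(10001) == 0);
        return 0;
    }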
So…How to Automate?
(Diagram: Requirements, Tests, Parameter Data and Code.)
Generation from Requirements
Test Cases Generated from Requirements
Very limited capability from: NL (natural language), SNL (structured natural language), PDL (program design language), use case scenarios, mathematical specs
More capability from: models (e.g. MBT with UML)
Generation from Code
Test Cases Generated from Code
Test vectors from path solving
Intelligent optimisation
Full test framework: pre-conditions, inputs, expected behaviours, expected outputs & post-conditions
Tests generated for maintainability & traceability
(Diagram: AutoTest generates Tests from Code, with links back to Requirements.)
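To make "test vectors from path solving" concrete, here is a hand-worked sketch (hypothetical function, not actual tool output): each feasible route through the branches becomes a constraint problem, and solving it yields one input vector per path:

    #include <assert.h>

    /* Hypothetical unit: nested conditions give four feasible paths. */
    static int classify(int temp, int pressure)
    {
        if (temp > 100) {
            if (pressure > 50)
                return 2;   /* path A: temp > 100 && pressure > 50  */
            return 1;       /* path B: temp > 100 && pressure <= 50 */
        }
        if (pressure < 10)
            return -1;      /* path C: temp <= 100 && pressure < 10 */
        return 0;           /* path D: temp <= 100 && pressure >= 10 */
    }

    int main(void)
    {
        /* One solved vector per path, each satisfying that path's constraints. */
        assert(classify(101, 51) == 2);   /* path A */
        assert(classify(101, 50) == 1);   /* path B */
        assert(classify(100, 9)  == -1);  /* path C */
        assert(classify(100, 10) == 0);   /* path D */
        return 0;
    }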
AutoTest & Trace for DO-178C
AutoTest Generation
Flexible application:
• GUI or CLI invocation
• Complete suite of passing unit tests
• Additional test cases to fill gaps
• Black-box cluster integration test through public functions
• White-box unit isolation test of static functions
• Uses Cantata workspace preferences
Test cases exercise all paths through the code:
• Entry-point
• Statement
• Decision
• MC/DC (unique cause) – see the sketch after this list
Test cases are complete & maintainable for full control:
• All required inputs: parameters + accessible data
• All expected outputs: parameters + accessed data + call-order
• Each test case's path solving purpose explained
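As a reminder of what MC/DC (unique cause) requires, here is a hand-worked sketch (hypothetical code, not AutoTest output): each condition in a decision must be shown to independently affect the outcome, which for two conditions needs a minimum of N+1 = 3 vectors:

    #include <assert.h>

    /* Hypothetical decision with two conditions, A && B. */
    static int interlock_open(int door_closed, int speed_zero)
    {
        return door_closed && speed_zero;
    }

    int main(void)
    {
        /* MC/DC (unique cause) for "A && B" with N = 2 conditions: 3 vectors. */
        assert(interlock_open(1, 1) == 1);  /* A=T, B=T -> T                      */
        assert(interlock_open(0, 1) == 0);  /* A=F, B=T -> F: A alone flips outcome */
        assert(interlock_open(1, 0) == 0);  /* A=T, B=F -> F: B alone flips outcome */
        return 0;
    }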
AutoTest Process
Automatic Test Generation: AutoTest takes a copy of the Code and generates Tests, Makefiles and a Generation Report.
Automatic Test Execution: the build instruments the code, the test executable is built and run, and Test Results are reported.
(Diagram: Code Copy -> AutoTest Generation -> Tests, Makefiles, Report; Build -> instrumented test Exe -> Run -> Test Results.)
AutoTest DO-178C Use Cases
Source Code Testability Assessment
The AutoTest Generation Report may be used to identify difficulties in creating low-level Cantata test cases for the software, or potential run-time errors:
• Dynamically unreachable code
• Crash scenarios
• Compiler type truncation
• Un-initialized data or function statics
• Implicit function declarations
Test Cases for Assignment as Requirements-Based Tests
Generated test cases may be reviewed and used (unaltered or modified) to meet requirements-based verification objectives
Test cases can be assigned to requirements in Trace once assessed as meeting the objectives
NOTE: DO-178C 6.4.4.1.d requires that all test cases used to achieve structural coverage be traceable to requirements.
Targeted Test Case Generation
Test cases can be generated for all functions in a source file
Test cases can be added to test scripts for selected functions to help achieve structural code coverage requirements (e.g. MC/DC)
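As an illustration of the first finding above, dynamically unreachable code is statically present but cannot be executed by any input; a path solver detects that the combined branch constraints are unsatisfiable. A hypothetical example (not taken from an actual Generation Report):

    /* Hypothetical example: the inner branch is dynamically unreachable,
       because x > 10 and x < 5 can never both hold; a path solver finds
       the combined constraint unsatisfiable and reports it. */
    static int process(int x)
    {
        if (x > 10) {
            if (x < 5)
                return -1;  /* unreachable: no input reaches this line */
            return 1;
        }
        return 0;
    }

    int main(void)
    {
        return (process(12) == 1 && process(3) == 0) ? 0 : 1;
    }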
Requirements Trace Closes Loop
Drag and drop tracing of requirements (text, diagrams, links) with test cases.
(Diagram: a Requirements Management Tool exchanges Requirements with the Test Tool via .csv, ReqIF or Excel; the Test Tool generates tests and links them to requirements; traced requirements, test status and code coverage flow back, providing full bi-directional requirements traceability evidence.)
Easy Linking in Cantata Trace
Bi-directional drag and drop interface that immediately creates links on a server
Whole Test Scripts linked to Requirements
Individual Test Cases linked to Requirements
3 Part Automation
1. Automatic Test Vector Generation
• Test case vectors from code exercising all paths (up to MC/DC coverage)
• Sets input parameters & data throughout test execution
• Checks expected vs actual data, input & output parameters and call order
2. Automated Test Execution
• Continuous integration build, run and reporting
3. Automated Traceability & Coverage Data Production
• Complete requirements imported/exported for testing
• AutoTest cases generated with traceable descriptions
• Test status, requirements traceability & structural coverage evidence
Complete 3 Way Analysis
(Diagram: Low Level Requirement, Code, Low Level (unit) Test and Parameter Data linked by traceability.)
Requirements Coverage: see requirements coverage in your requirements management & test tools; use the same tool for all trace data.
Test Coverage: run tests when not executed (continuous integration and testing helps a lot); fix tests when they fail.
Code Coverage: when you have gaps, identify whether the code is dead/redundant, unreachable, or deactivated (not used in this context); if not, add a test, and that test needs to be traced to [new] requirements.
Further Enhancements? - Robustness
DO-178C Section 6.4.3 identifies Normal & Robustness Test Cases.
Robustness test cases demonstrate the ability of the software to respond to abnormal inputs and conditions. Typical errors targeted by these test cases:
• Failure of an algorithm to satisfy a software requirement.
• Incorrect loop operations.
• Incorrect logic decisions.
• Failure to process correctly legitimate combinations of input conditions.
• Incorrect responses to missing or corrupt input data.
• Incorrect handling of exceptions, such as arithmetic faults or violations of array limits.
• Incorrect computation sequence.
• Inadequate algorithm precision, accuracy, or performance.
• Incorrect initialization of variables and constants.
• Parameter passing errors.
• Data corruption, especially global data.
• Inadequate end-to-end numerical resolution.
• Incorrect sequencing of events and operations.
If code already handles these, then AutoTest generation is very helpful (see the sketch below).
If code does not, then Traceability should catch them, as AutoTest will not.
But AutoTest could generate test cases for these scenarios too…
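For instance, "violations of array limits" from the list above can be exercised by a robustness test case along these lines (a hedged sketch with a hypothetical buffer accessor and plain asserts, not any particular tool's checks):

    #include <assert.h>

    #define BUF_LEN 8
    static int buf[BUF_LEN];

    /* Hypothetical accessor: returns 0 on success, -1 on an out-of-range index. */
    static int buf_read(int index, int *out)
    {
        if (index < 0 || index >= BUF_LEN)
            return -1;   /* robust handling instead of an out-of-bounds read */
        *out = buf[index];
        return 0;
    }

    int main(void)
    {
        int value = 0;

        /* Normal range: valid indices succeed. */
        buf[3] = 42;
        assert(buf_read(3, &value) == 0 && value == 42);

        /* Robustness: abnormal indices are rejected, not dereferenced. */
        assert(buf_read(-1, &value) == -1);
        assert(buf_read(BUF_LEN, &value) == -1);
        return 0;
    }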
It sometimes happens that high-level requirements are used to generate source code directly, in which case those high-level requirements are also considered to be low-level requirements.