Welcome
Jon Sten, software engineer at Modelon
Today I’m going to give an introduction and insight into the Optimica Testing Toolkit (OTT)
Why do we need testing
Functionality over time
Able to update software
Ensure compliance between platforms and tools
Quality
Modelica code is the same
Look for the experiment annotation
TestCase annotations used in the ModelicaCompliance library
Ways to retrieve and specify good tests, especially for tool testing; find simulatable models…
Additional requirements needed by Modelon library- and tool-testing
Specification of variables
Check structure of model, equation system, jacobians
Set tool options for compilation or simulation
Tool-specific scripts for setup and teardown between test steps
Have tool-specific reference results for problematic tests
One tool used by OTT is CSV Compare
Open source, provided by ITI GmbH, financed by the Modelica Association
Used for comparing two result files, reference and actual curve
Works by constructing a tube around the reference curve
Checks if the actual curve is inside the tube
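The tube idea above can be sketched in a few lines of Python. This is an illustrative simplification, not CSV Compare's actual algorithm: the real tool widens the tube in both the time and value directions, while this sketch assumes both curves are sampled at the same time points and only widens in the value direction.

```python
def inside_tube(reference, actual, rel_tol=0.002, abs_tol=1e-6):
    """Return True if every actual sample lies within a simple
    per-sample tube around the reference curve (assumes both curves
    are sampled at the same time points)."""
    for ref, act in zip(reference, actual):
        # Tube half-width: relative to the reference value, with an
        # absolute floor so values near zero still get a tube.
        width = max(abs(ref) * rel_tol, abs_tol)
        if not (ref - width <= act <= ref + width):
            return False
    return True

print(inside_tube([1.0, 2.0, 3.0], [1.001, 2.002, 2.999]))  # True
print(inside_tube([1.0, 2.0, 3.0], [1.001, 2.5, 2.999]))    # False
```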
UnitTesting
Modelica based
Looks for models which extend models from the test library
Provides component-, condition-, and static-coverage
MoUnit
Used in OneModelica
Plugin based
Supports multiple tools
RegressionTest
Tool for testing in Dymola
Initial goal of OTT: replace and unify in-house tools for testing
Therefore identified key features…
Cross platform testing
Flexible test authoring, allowing for different scenarios
Automated test execution and reporting, and ability to integrate into continuous integration systems
Two parts
Modularized, Plugin
Multiple tools
High-level API for common operations
Testing cycle:
Sieve
Execute
Report
Canonical form
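The cycle above can be sketched as a small pipeline. The function names here are illustrative, not the actual OTT API; the point is that each phase consumes and produces a canonical form, so phases can be developed and swapped independently.

```python
def sieve(library):
    """Extract test cases from a library into a canonical form."""
    return [{"name": model, "status": None} for model in library]

def execute(tests):
    """Run each test case; a real tester would compile/simulate here."""
    for test in tests:
        test["status"] = "passed"
    return tests

def report(tests):
    """Summarize results for the reporting step."""
    return {test["name"]: test["status"] for test in tests}

results = report(execute(sieve(["Lib.ModelA", "Lib.ModelB"])))
print(results)  # {'Lib.ModelA': 'passed', 'Lib.ModelB': 'passed'}
```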
With that in mind we have a rough overview of OTT; will go into details over the next slides
Color coded
Blue is the core, in the blue box
Red is a plugin or input, built in or user provided
As mentioned, three parts
Sieve
Execute
Report
Input is the test spec and libraries, which might be the same
Output: reports
Connection to Modelica and FMI tools
Run configuration is the “input” to test run
Configuration can come from the command line or a configuration file
Input is everything needed for completing a run, we have the:
Test spec.
Libraries, might be the same as the test spec.
Type of the tests
Tools to use, compiler, simulator
Output processors to use
Slide
Solver, compare variables
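A run configuration covering the inputs listed above might look roughly like this. The key names here are assumptions for illustration, not OTT's actual configuration file format.

```python
# Illustrative run configuration (key names are assumed, not OTT's
# real configuration format): everything needed to complete a run.
run_config = {
    "test_specification": "MyLibTests",   # library holding the tests
    "libraries": ["MyLib"],               # might be same as test spec
    "test_type": "static",                # static or scripted
    "compiler": "OCT",                    # tool used for compilation
    "simulator": "OCT",                   # tool used for simulation
    "output_processors": ["html", "junit"],
    "solver": "CVode",
    "compare_variables": ["x", "der(x)"],
}

print(run_config["test_type"])
```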
Internals of sieve
Different sieves for different test types
Goal, extract tests and test information
In the case of a Modelica library
- OCT front end used; relies on package structure and annotations
Detailed example:
A Modelica library is provided
The sieve will forward it to the OCT front end
OCT returns a representation of the library
Recurse through the library:
If sub-package, recurse
If class with an experiment annotation, create a test case
Return the test cases
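The recursion above can be sketched as follows. This assumes a minimal in-memory representation of a Modelica package tree as nested dicts; the real sieve walks the representation returned by the OCT front end instead.

```python
def sieve_package(node, tests=None):
    """Recurse through a package tree; every class carrying an
    experiment annotation becomes a test case."""
    if tests is None:
        tests = []
    if node.get("kind") == "package":
        # Sub-packages: recurse.
        for child in node.get("children", []):
            sieve_package(child, tests)
    elif node.get("kind") == "class" and "experiment" in node.get("annotations", {}):
        # Class with an experiment annotation: create a test case.
        tests.append({"model": node["name"],
                      "stop_time": node["annotations"]["experiment"].get("StopTime")})
    return tests

lib = {"kind": "package", "name": "MyLib", "children": [
    {"kind": "class", "name": "MyLib.Good",
     "annotations": {"experiment": {"StopTime": 1.0}}},
    {"kind": "package", "name": "MyLib.Sub", "children": [
        {"kind": "class", "name": "MyLib.Sub.Also",
         "annotations": {"experiment": {"StopTime": 2.0}}}]},
    {"kind": "class", "name": "MyLib.NoTest", "annotations": {}}]}

print([t["model"] for t in sieve_package(lib)])
# ['MyLib.Good', 'MyLib.Sub.Also']
```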
Tester depends on the test type
Two main types, static, scripted
Static
OTT controls
Each model is tested in the same predefined sequence
Scripted
Each test is programmatically defined
OTT provides high level API
Different modes:
Check model
Compile model
Compile and simulate
Compile, simulate and verify
(TestCase)
Point to illustration
Example of using OCT for compilation and verification; CSV Compare is used for verification
Results are passed to next step
Each test is unique, more like classic unit tests
Tests are written in Python
OTT provides a common API to Modelica / FMI tools
Provides support for report generation
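A scripted test might look roughly like this. Since OTT's actual high-level API names aren't given in the talk, the `compile_and_simulate` helper below is a hypothetical stand-in; a real test would call the Modelica/FMI tool through OTT's common API instead.

```python
def compile_and_simulate(model_name, stop_time=1.0):
    """Hypothetical stand-in for an OTT high-level helper; a real
    implementation would compile and simulate through the configured
    Modelica/FMI tool and return the simulation result."""
    return {"model": model_name, "final_time": stop_time, "x": 0.5}

def test_final_value():
    """A scripted test: programmatically define the steps and check
    a result value, like a classic unit test."""
    res = compile_and_simulate("MyLib.MyModel", stop_time=2.0)
    assert res["final_time"] == 2.0
    assert abs(res["x"] - 0.5) < 1e-3

test_final_value()
print("ok")
```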
Report step responsible for processing and producing results
Different reporters
HTML, for humans
JUnit XML, for computers
Pickle, used for reconstruction and general post-processing
With the JUnit XML it is possible to integrate with build systems
Examples
slide
Usability
Easy:
Modifications
Compare variables
Review and update results
Test case inheritance
Finalize new test specification format
Finalize run configuration format
Leave traceability files after a run, which contain version history and run configuration
More GUI
Support more tools