Independent Formal Verification of Safety-Critical Systems’ User Interfaces: a space system case study

Published: Talk at NASA IV&V Workshop 2013

Speaker notes:
  • Explain where we come from… Our starting point is that when you place a dependable device in front of the user, you are not guaranteed to have a dependable system. Users behave unexpectedly: firing a gun; users adapt: landing gear.
  • Explain the basic ideas. Typical pattern: consistency.
  • Explain the basic ideas
  • In this paper: AniMAL; discuss representations; scenarios generator.
  • PTGS = Preparation and Testing Ground System. EV = Flight Events Sequence Network, responsible for testing and preparing one of the rocket’s electrical sub-networks. CR = Electric Control Network, responsible for the testing, simulation and analysis of the automatic launch sequence.
  • Attributes and actions of the images; axioms of the descriptions.
  • EV subsystem: main, because of the navigation and the constraints to synchronize all the interactors; tmtVariables, because of the values of the variables and the control of the alarms and alerts triggered.
  • Effort.
  • A MacBook Pro with an Intel Core 2 Duo P8800 at 2.66 GHz with 8 GB of RAM, and a PC with an Intel Core i7 960 at 3.20 GHz with 24 GB of RAM. The machines run different operating systems: Mac OS X and Windows Server 2008 R2 Standard, respectively.

    1. Independent Formal Verification of Safety-Critical Systems’ User Interfaces: a space system case study
       NASA IVV Workshop, September 2013
       Manuel Sousa¹, José Creissac Campos¹, Miriam Alves², and Michael D. Harrison³
       ¹Dept. Informática/Universidade do Minho & HASLab/INESC TEC, Portugal
       ²Institute of Aeronautics and Space - IAE, São José dos Campos, Brazil
       ³Queen Mary University of London & Newcastle University, UK
       * This work is funded by the ERDF - European Regional Development Fund through the ON.2 – O Novo Norte Operational Programme, within the Best Case project (ref. N-01-07-01-24-01-26).
    2. foreword
       Dependable device + User → Dependable system?
        The impact of users on a system is hard to anticipate
        users behave in unexpected ways
        users’ behaviour is changed by (adapts to) the device
        users must understand the device
        We have been working on approaches to consider the user during the formal verification of interactive systems
    3. our approach
       [Diagram: Modelling, Model, Properties, Verification, Potential Problem, Analysis]
    4. systematic analysis – the process
       1. Models – system as designed
       2. Property patterns – good design practices
       3. Verification – models against patterns
       4. Traces – when verification fails
           Analysis
           Scenarios
           Prototyping
       5. Change models – reflecting analysis findings
    5. systematic analysis – the process
       [Process diagram: Model, Traces, Scenarios, and Prototype, connected by abstraction, deployment, analysis, verification, simulation, refinement, and prototype redesign (better design); inputs: domain knowledge, property templates, universal properties/heuristics; activities: expert analysis, expert usability inspection, walkthroughs; legend: techniques, process, information flow]
    6. the IVY tool
    7. MAL interactors
       interactor MCP
         aggregates
           dial(Altitude) via ALTDial
           dial(ClimbRate) via crDial
           dial(Velocity) via asDial
         attributes
           [vis] pitchMode: PitchModes
           [vis] ALT: boolean
         actions
           [vis] enterVS enterIAS enterAH toggleALT enterAC
         axioms
           [asDial.set(t)] effect(enterIAS)
           [crDial.set(t)] effect(enterVS)
           [ALTDial.set(t)] ensure_ALT_is_set
           [enterVS] pitchMode’ = VERT_SPD & ALT’ = ALT
           [enterIAS] pitchMode’ = IAS & ALT’ = ALT
           [enterAH] pitchMode’ = ALT_HLD & ALT’ = ALT
           [toggleALT] pitchMode’ = pitchMode & ALT’ = !ALT
           [enterAC] pitchMode’ = ALT_CAP & !ALT’
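To show how these axioms read, the MCP interactor’s behaviour can be mirrored in a small Python sketch. This is illustrative only: IVY translates MAL models into model-checker input rather than executable code, and the initial attribute values are assumptions not stated on the slide.

```python
# Illustrative Python rendering of the MCP interactor's MAL axioms.
# Initial values are assumptions; MAL axioms are quoted in the comments.

class MCP:
    def __init__(self):
        self.pitch_mode = "VERT_SPD"  # assumed initial mode (not on the slide)
        self.alt = False              # assumed initial ALT flag

    def enter_vs(self):
        # [enterVS] pitchMode' = VERT_SPD & ALT' = ALT
        self.pitch_mode = "VERT_SPD"

    def enter_ias(self):
        # [enterIAS] pitchMode' = IAS & ALT' = ALT
        self.pitch_mode = "IAS"

    def enter_ah(self):
        # [enterAH] pitchMode' = ALT_HLD & ALT' = ALT
        self.pitch_mode = "ALT_HLD"

    def toggle_alt(self):
        # [toggleALT] pitchMode' = pitchMode & ALT' = !ALT
        self.alt = not self.alt

    def enter_ac(self):
        # [enterAC] pitchMode' = ALT_CAP & !ALT'
        self.pitch_mode = "ALT_CAP"
        self.alt = False
```

Each MAL axiom `[action] effect` becomes one method: the primed attributes are the post-state, and unmentioned attributes would be unconstrained in MAL (here they are simply left unchanged).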
    8. [image-only slide]
    9. IAE’s PTGS
       [Architecture diagram: MN, SC, EV, PR, CR, and PW subsystems; Telemetry, Time Synchronization, Operators, Batteries, Operational Communication, Operational Signalization; interfaces, umbilical cords, rocket]
    10. IVY analysis of the EV subsystem
         The system was modelled from the operations manual
         the model reflects the knowledge provided to the operator
         properties are used to express expected behaviour
         A three-layered model was built
         each type of variable modelled as an interactor
         each screen modelled as an interactor
         navigation between screens modelled on top of that
         Values displayed modelled as attributes
         Buttons modelled as actions
    11. From manual to model
        How the colouring scheme works (from the operations manual):
        “Blinking yellow: For a critical variable, when the current value of the variable is in non acknowledged alert (value within the alert range), there is no acknowledged alarm in the variable, and the previous criterion [non acknowledged alarm criterion] is not satisfied. If over the same critical variable an acknowledged alarm exists, then Fixed Red prevails. For a non critical variable, when the current value of the variable is in non acknowledged alarm (value within the alarm range).”
    12. From manual to model
        How the colouring scheme works (from the operations manual):
        “Blinking yellow: For a critical variable, when the current value of the variable is in non acknowledged alert (value within the alert range), there is no acknowledged alarm in the variable, and the previous criterion [non acknowledged alarm criterion] is not satisfied. If over the same critical variable an acknowledged alarm exists, then Fixed Red prevails. For a non critical variable, when the current value of the variable is in non acknowledged alarm (value within the alarm range).”
        The critical-variable case as a MAL condition:
        critical & ((_v >= infAlarmLim & _v < infAlertLim) | (_v <= supAlarmLim & _v > supAlertLim))
          & (alarmState != AlaRec & alarmState != AlaNRec)
    13. From manual to model
        How the colouring scheme works (from the operations manual):
        “Blinking yellow: For a critical variable, when the current value of the variable is in non acknowledged alert (value within the alert range), there is no acknowledged alarm in the variable, and the previous criterion [non acknowledged alarm criterion] is not satisfied. If over the same critical variable an acknowledged alarm exists, then Fixed Red prevails. For a non critical variable, when the current value of the variable is in non acknowledged alarm (value within the alarm range).”
        The non-critical case:
        !critical & ((_v < infAlarmLim) | (_v > supAlarmLim))
    14. From manual to model
        How the colouring scheme works (from the operations manual):
        “Blinking yellow: For a critical variable, when the current value of the variable is in non acknowledged alert (value within the alert range), there is no acknowledged alarm in the variable, and the previous criterion [non acknowledged alarm criterion] is not satisfied. If over the same critical variable an acknowledged alarm exists, then Fixed Red prevails. For a non critical variable, when the current value of the variable is in non acknowledged alarm (value within the alarm range).”
        The resulting axiom (the antecedent gives the conditions for blinking yellow; the consequent sets blinking yellow):
        [setValue(_v)]
          (((critical & ((_v >= infAlarmLim & _v < infAlertLim) | (_v <= supAlarmLim & _v > supAlertLim))) |
            (!critical & ((_v < infAlarmLim) | (_v > supAlarmLim)))) &
           (alarmState != AlaRec & alarmState != AlaNRec))
          -> value’ = _v & colour’ = yellow & error’ = Lim & alertState’ = AleNRec & characteristic’ = Blink &
             keep(supAlertLim, infAlertLim, supAlarmLim, infAlarmLim, unity, critical, alarmState)
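The guard of this axiom can be paraphrased as ordinary code. The Python sketch below is an illustrative rendering only, not part of the IVY model; the identifiers follow the slide, and any concrete threshold values used with it are invented.

```python
# Paraphrase of the blinking-yellow guard from the MAL axiom.
# Identifiers mirror the slide's; the function is illustrative only.

def blinking_yellow(v, critical, inf_alarm_lim, inf_alert_lim,
                    sup_alert_lim, sup_alarm_lim, alarm_state):
    """True when setValue(v) should turn the variable blinking yellow."""
    if critical:
        # critical: value in the alert range but not in the alarm range
        in_range = ((inf_alarm_lim <= v < inf_alert_lim) or
                    (sup_alert_lim < v <= sup_alarm_lim))
    else:
        # non-critical: value in the alarm range
        in_range = v < inf_alarm_lim or v > sup_alarm_lim
    # ...and no acknowledged or non-acknowledged alarm on the variable
    return in_range and alarm_state not in ("AlaRec", "AlaNRec")
```

Writing the guard out this way makes the two cases of the manual’s prose (critical vs. non-critical) and the shared alarm-state condition explicit.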
    15. From manual to model
        (alarmState != AlaNRec & alarmState != AlaRec) becomes (alarmState = AlaRec)
        How the colouring scheme works (from the operations manual):
        “Blinking yellow: For a critical variable, when the current value of the variable is in non acknowledged alert (value within the alert range), there is no acknowledged alarm in the variable, and the previous criterion [non acknowledged alarm criterion] is not satisfied. If over the same critical variable an acknowledged alarm exists, then Fixed Red prevails. For a non critical variable, when the current value of the variable is in non acknowledged alarm (value within the alarm range).”
    16. the model
    17. analysis
         Can a variable be in alarm?
         Trying to prove otherwise…
           AG(monitTMT.BD1_A.colour = green -> !EX (monitTMT.BD1_A.colour = red))
         False as expected, but…
         the counterexample highlights a situation where the variable colour is fixed red under an acknowledged alert condition – this should not be possible.
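The property on this slide is CTL: AG(green -> !EX red) states that in every reachable state, if the colour is green then no single transition leads to red. A minimal explicit-state sketch of that check is below; the tiny transition relation is invented for illustration, and in practice IVY hands such checks to a symbolic model checker rather than enumerating states like this.

```python
# Illustrative explicit-state check of AG(colour = green -> !EX colour = red).
# The state graph is made up for the example; it is not the EV model.

def ag_green_never_steps_to_red(colour, next_states):
    """True iff no green state has a red successor.

    A coarse sketch: it checks every state rather than only the
    reachable ones, which is conservative (it can only reject more)."""
    return all(
        colour[t] != "red"
        for s, succs in next_states.items()
        if colour[s] == "green"
        for t in succs
    )

colour = {"s0": "green", "s1": "yellow", "s2": "red"}
steps = {"s0": ["s1"], "s1": ["s0", "s2"], "s2": ["s2"]}
```

On this toy graph the property holds (green only steps to yellow); adding a direct green-to-red transition would make the check fail and, in a real model checker, produce a counterexample trace like the one the slide describes.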
    18. analysis
         the manual does not state what happens to a non-critical alert
         the model becomes non-deterministic
    19. conclusions/lessons learnt
         It was possible to build a relevant model independently (without a deep understanding of the system) and still provide insights to the client
         This particular model captures the understanding of the system from the operations manual/requirements document perspective
         Incomplete or inconsistent information leads to unexpected system behaviour
         Computer-aided verification of user interfaces is crucial for critical, complex systems
         Results can help:
          improve requirements / manuals
          define test cases
          improve system dependability
         As we add complexity to the models, verification time becomes a problem – but interesting results are possible with manageable models
    20. Thank you!
        jose.campos@di.uminho.pt
