  • Used in accident investigation to identify the path of events and contributing factors leading to an accident. Systematically explores all possible scenarios leading to the accident based on a formal system model, according to information provided by Events and Causal Factors Analysis.
  • A marking tree can be used to identify the set of reachable states provided that some information is given about the initial status or ‘marking’ of the system. This technique helps to produce what can be thought of as a form of state transition diagram. Several tools are currently available to support this analysis. However, this style of analysis can impose considerable overheads when considering complex systems such as those involved in our case study.
  • Next-generation control rooms, and even near-term advanced control rooms, will incorporate changes that will push the paradigm of current reactor plant operations. PBMR is shown because it is probably the closest near-term example of a truly advanced control room. The VHTR illustrates one concept that ties hydrogen production to advanced reactors; it is not the only such example. These changes include: the migration/upgrade from hybrid control systems to purely digital controls; the incorporation of hydrogen production with, or instead of, electricity production; and increased levels of automation to permit the control of multiple reactors by one crew. This paper and presentation are the result of a workshop held in April of this year that looked at human performance issues for advanced control rooms.
  • It was necessary for the purposes of our approach to have an overall perspective of how different factors and conditions, such as pressure, temperature and different system states, evolved over time toward this accident. Events and Causal Factors Analysis (ECFA) does exactly that; in a chronologically sequential representation of events, ECFA can provide the analyst with an overall picture of how different factors relate to one another and how they contributed to an accident. In brief, events are represented as rectangles and conditions as ovals. Events are connected with solid arrows, while conditions are connected with dashed arrows. If an event or condition is an assumption, it is represented with a dashed rectangle or oval respectively. Finally, based on chronological sequence, events are connected moving from left to right, keeping the main events on a primary horizontal line and contributing or secondary events and conditions above or below it. Due to space constraints, we cannot present the ECFA diagram, however it can be found at
  • A seal on the north grinder overheated. The kiln control operator and supervisor decided to switch waste fuel delivery systems from north to south. The worker switched delivery systems; however, fuel did not flow to the plant kilns as planned. The personnel believed the problem was due to air trapped in the south fuel pipes. They therefore bled the valves of the south system while the motors were running. In the meantime, because low pressure was being sensed in the fuel lines, the automatic step increase program was increasing the speed of the motors on the south pumps in an attempt to raise pressure in the fuel line. These various factors combined to create a ‘fuel hammer effect’ in the pipe feeding the south pump. The hammer effect is caused by rebound waves created in a pipe full of liquid when a valve is closed too quickly. The waves of pressure converged on the south grinder.

    1. 1. Testing Interactive Software: A Challenge for Usability and Reliability Special Interest Group – CHI 2006 – Montréal – 22nd April 2006 Philippe Palanque LIIHS-IRIT, University Toulouse 3, 31062 Toulouse, France [email_address] Regina Bernhaupt ICT&S-Center, Universität Salzburg 5020 Salzburg, Austria Ronald Boring Idaho National Laboratory Idaho Falls 83415, Idaho, USA [email_address] Chris Johnson Dept. of Computing Science, University of Glasgow, Glasgow, G12 8QQ, Scotland [email_address] Sandra Basnyat LIIHS-IRIT, University Toulouse 3, 31062 Toulouse, France [email_address]
    2. 2. Outline of the SIG <ul><li>Short introduction about the SIG (10 min) </li></ul><ul><li>Short presentations (20 min) </li></ul><ul><ul><li>Software engineering testing for reliability (Philippe) </li></ul></ul><ul><ul><li>Human reliability for interactive systems testing (Ron) </li></ul></ul><ul><ul><li>Incident and accident analysis and reporting for testing (Sandra) </li></ul></ul><ul><ul><li>HCI testing for usability (Regina) </li></ul></ul><ul><li>Gathering feedback from audience (10 min) </li></ul><ul><li>Presentation of some case studies (20 min) </li></ul><ul><li>Listing of issues and solutions for interactive systems testing (20 min) </li></ul><ul><li>Discussion and summary (10 min) </li></ul>
    3. 3. Introduction <ul><li>What are interactive applications </li></ul><ul><li>What is interactive applications testing </li></ul><ul><ul><li>Coverage testing </li></ul></ul><ul><ul><li>Non regression testing </li></ul></ul><ul><li>Usability versus reliability </li></ul><ul><ul><li>What about usability testing of a non reliable interactive application </li></ul></ul><ul><ul><li>What about reliable applications with poor usability </li></ul></ul>
    4. 4. Interactive Systems
    5. 5. A paradigm switch <ul><li>Control flow is in the hands of the user </li></ul><ul><li>Interactive application idle waiting for input from the users </li></ul><ul><li>Code is sliced </li></ul><ul><li>Execution influenced by internal and external states </li></ul><ul><li>Nothing new but … </li></ul>
    6. 6. Classical Behavior (flowchart: a read input / process input loop with an exit test, shown in two variants)
    7. 7. Event-based Functioning (diagram: at startup the application registers event handlers with the window manager; at runtime the window manager waits for the next event, dispatches it from the event queue to the registered handler, and, once an acknowledgement is received, waits for the next event)
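The event-based functioning on this slide can be sketched as a minimal dispatcher that registers handlers at startup and pulls events off a queue at runtime. This is a simplified stand-in for a real window manager; all class, event, and handler names are illustrative, not from the SIG materials.

```python
# Minimal sketch of event-based functioning: handlers are registered
# once at startup, then a dispatcher (standing in for the window
# manager) pulls events off a queue and calls the matching handler.
from collections import deque

class Dispatcher:
    def __init__(self):
        self.handlers = {}    # event type -> handler (EH registration)
        self.queue = deque()  # event queue

    def register(self, event_type, handler):
        self.handlers[event_type] = handler

    def post(self, event_type, payload=None):
        self.queue.append((event_type, payload))

    def run(self):
        # At runtime: get next event, dispatch, repeat until 'quit'.
        log = []
        while self.queue:
            event_type, payload = self.queue.popleft()
            if event_type == "quit":
                break
            log.append(self.handlers[event_type](payload))
        return log

d = Dispatcher()
d.register("click", lambda p: f"clicked {p}")
d.register("key", lambda p: f"key {p}")
d.post("click", "OK")
d.post("key", "Esc")
d.post("quit")
print(d.run())  # ['clicked OK', 'key Esc']
```

Note how control flow is inverted relative to the classical loop on the previous slide: the application is idle until the dispatcher hands it an event.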
    8. 8. Safety Critical Interactive Systems <ul><li>Safety Critical Systems </li></ul><ul><ul><li>Software Engineers </li></ul></ul><ul><ul><li>System centered </li></ul></ul><ul><ul><li>Reliability </li></ul></ul><ul><ul><li>Safety requirements (certification) </li></ul></ul><ul><ul><li>Formal specification </li></ul></ul><ul><ul><li>Verification / Proof </li></ul></ul><ul><ul><li>Waterfall model / structured </li></ul></ul><ul><ul><li>Archaic interaction techniques </li></ul></ul><ul><li>Interactive Systems </li></ul><ul><ul><li>Usability experts </li></ul></ul><ul><ul><li>User centered </li></ul></ul><ul><ul><li>Usability </li></ul></ul><ul><ul><li>Human factors </li></ul></ul><ul><ul><li>Task analysis & modeling </li></ul></ul><ul><ul><li>Evaluation </li></ul></ul><ul><ul><li>Iterative process / Prototyping </li></ul></ul><ul><ul><li>Novel Interaction techniques </li></ul></ul>Reliability & Usability
    9. 9. Some Well-known Examples (1/2)
    10. 10. Some Well-known Examples (2/2)
    11. 11. The Shift from Reliability to Fault-Tolerance <ul><li>Failures will occur </li></ul><ul><li>Mitigate failures </li></ul><ul><li>Reduce the impact of a failure </li></ul><ul><li>A small demo … </li></ul>
    12. 12. Informal Description of a Civil Cockpit application <ul><li>The working mode </li></ul><ul><li>The tilt selection mode: AUTO or MANUAL (AUTO by default). The CTRL push-button allows swapping between the two modes </li></ul><ul><li>The stabilization mode: ON or OFF. The CTRL push-button allows swapping between the two modes. Access to the button is forbidden when in AUTO tilt selection mode </li></ul><ul><li>The tilt angle: a numeric edit box permits selecting its value within the range [-15°; 15°] </li></ul><ul><li>Modifications are forbidden when in AUTO tilt selection mode </li></ul>
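The informal description above can be captured as a small mode machine that enforces the stated constraints. This is a sketch only: the slide gives no code, so the class name, method names, and the stabilization default are illustrative assumptions.

```python
# Sketch of the cockpit control panel described on the slide:
# tilt selection mode (AUTO/MANUAL, AUTO by default), stabilization
# (ON/OFF), and a tilt angle restricted to [-15, 15] degrees.
# Stabilization toggling and angle edits are forbidden in AUTO mode.
class TiltPanel:
    def __init__(self):
        self.tilt_mode = "AUTO"     # default per the slide
        self.stabilization = "ON"   # default assumed, not stated
        self.angle = 0.0

    def toggle_tilt_mode(self):
        # The CTRL push-button swaps between the two modes.
        self.tilt_mode = "MANUAL" if self.tilt_mode == "AUTO" else "AUTO"

    def toggle_stabilization(self):
        if self.tilt_mode == "AUTO":
            raise PermissionError("stabilization button locked in AUTO mode")
        self.stabilization = "OFF" if self.stabilization == "ON" else "ON"

    def set_angle(self, degrees):
        if self.tilt_mode == "AUTO":
            raise PermissionError("angle edits forbidden in AUTO mode")
        if not -15.0 <= degrees <= 15.0:
            raise ValueError("angle must be within [-15, 15]")
        self.angle = degrees
```

Encoding the forbidden-state rules as raised exceptions is exactly the kind of behavior a formal model (or a test suite) of this application would have to cover.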
    13. 13. Various perspectives of this Special Interest Group <ul><li>Software engineering testing for reliability </li></ul><ul><li>Human reliability testing </li></ul><ul><li>Incident and accident analysis and reporting for testing </li></ul><ul><li>HCI testing for usability </li></ul>
    14. 14. What do we mean by human error? (two pictured examples: one whose consequence is inconvenience, one whose consequence is danger)
    15. 15. Conceptualizing error <ul><li>Humans are natural “error emitters” </li></ul><ul><ul><li>On average we make around 5-6 errors every hour </li></ul></ul><ul><ul><li>Under stress and fatigue that rate can increase dramatically </li></ul></ul><ul><li>Most errors are inconsequential or mitigated </li></ul><ul><ul><li>Many mistakes have no consequences or impact </li></ul></ul><ul><ul><li>Where there may be consequences, defenses and recovery mechanisms often prevent serious accidents </li></ul></ul>
    16. 16. Human Reliability Analysis (HRA) <ul><li>Classic Definition </li></ul><ul><ul><li>The use of systems engineering and human factors methods in order to render a complete description of the human contribution to risk and to identify ways to reduce that risk </li></ul></ul><ul><li>What’s Missing </li></ul><ul><ul><li>HRA can be used to predict human performance issues and to identify human contributions to incidents before they occur </li></ul></ul><ul><ul><li>Can be used to design safe and reliable systems </li></ul></ul>
    17. 17. Performance Shaping Factors (PSFs) <ul><li>Are environmental, personal, or task-oriented factors that influence the probability of human error </li></ul><ul><li>Are an integral part of error modeling and characterization </li></ul><ul><li>Are evaluated and used during quantification to obtain a human error rate applicable to a particular set of circumstances </li></ul><ul><ul><ul><li>Specifically, the basic human error probabilities obtained for generic circumstances are modified (adjusted) per the specific situation </li></ul></ul></ul>
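The adjustment step in the last bullet, a basic human error probability for generic circumstances modified per the specific situation, can be sketched numerically. The nominal HEP and the multiplier values below are illustrative only and are not taken from SPAR-H or any other HRA method.

```python
# Sketch of PSF-based adjustment: a nominal human error probability
# (HEP) for generic circumstances is scaled by the product of
# situation-specific PSF multipliers, capped so it stays a
# valid probability. All numeric values are illustrative.
from math import prod

def adjusted_hep(nominal_hep, psf_multipliers, cap=1.0):
    """Adjust a nominal HEP by the product of PSF multipliers."""
    return min(nominal_hep * prod(psf_multipliers), cap)

# e.g. nominal HEP 0.001 under high stress (x2) and a poor
# human-system interface (x10): 0.001 * 20 = 0.02
result = adjusted_hep(0.001, [2, 10])
```

The cap matters in practice: with enough adverse PSFs the raw product can exceed 1, which would no longer be a probability.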
    18. 18. Example: SPAR-H PSFs
    19. 19. Maximizing Human Reliability <ul><li>Increasingly, human reliability needs to go beyond being a diagnostic tool to become a prescriptive tool </li></ul><ul><li>NRC and nuclear industry are looking at new designs for control rooms and want plants designed with human reliability in mind, not simply verified after the design is completed </li></ul><ul><li>NASA has issued strict Human-Rating Requirements (NPR 8705.2) that all space systems designed to come in contact with humans must demonstrate that they impose minimal risk, they are safe for humans, and they maximize human reliability in the operation of that system </li></ul><ul><li>How do we make reliable human systems? </li></ul><ul><li>Design and Test (“classic” human factors) </li></ul><ul><li>Model (human reliability analysis) </li></ul>
    20. 20. Best Achievable Practices for HR <ul><li>The Human Reliability Design Triptych </li></ul>
    21. 21. Concluding Thoughts <ul><li>Human error is ubiquitous </li></ul><ul><li>Pressing need to design ways to prevent human error </li></ul><ul><li>Impetus comes from safety-critical systems </li></ul><ul><ul><li>Lessons learned from safety-critical systems potentially apply across the board, even including designing consumer software that is usable </li></ul></ul><ul><li>Designing for human reliability requires merger of two fields </li></ul><ul><ul><li>Human factors/HCI for design and testing </li></ul></ul><ul><ul><li>Human reliability for modeling </li></ul></ul>
    22. 22. Incidents and Accidents as a Support for Testing <ul><li>Aim, contribute to a design method for safer safety-critical interactive systems </li></ul><ul><li>Inform a formal system model </li></ul><ul><li>Ultimate goals </li></ul><ul><ul><li>Embedding reliability, usability, efficiency and error tolerance within the end product </li></ul></ul><ul><ul><li>While ensuring consistency between models </li></ul></ul>
    23. 23. The Approach (1/2) <ul><li>Address the issue of system redesign after the occurrence of an incident or accident </li></ul><ul><li>2 Techniques </li></ul><ul><ul><li>Events and Causal Factors Analysis </li></ul></ul><ul><ul><li>Marking Graphs extracted from a system model </li></ul></ul><ul><li>2 Purposes </li></ul><ul><ul><li>Ensure current system model accurately models the sequence of events that led to the accident </li></ul></ul><ul><ul><li>Reveal further scenarios that could eventually lead to similar adverse outcomes </li></ul></ul>
    24. 24. The Approach (2/2) (diagram labels: Incident & accident investigation part; System design part; Part of the whole process; Accident Report; Safety-Case Analysis; Model the System; ECF Analysis; Re-model the System; Formal ICO System Model Including Erroneous Events; Marking Graph Analysis; Re-Design System Model to make Accident Tolerant; Extraction of Relevant Scenarios)
    25. 25. ECFA Chart of the Accident
    26. 26. Marking Trees & Graphs <ul><li>Marking Tree – identify the entire set of reachable states </li></ul><ul><ul><li>Is a form of state transition diagram </li></ul></ul><ul><ul><li>Analysis support tools available </li></ul></ul><ul><ul><li>However, can impose considerable overheads when considering complex systems such as those in our case study </li></ul></ul>
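The marking-graph construction behind this slide is a breadth-first exploration of reachable markings from an initial marking. The sketch below uses a deliberately tiny Petri net (one transition moving a token between two places) as an illustrative example; the slides' ICO models are far larger, which is exactly where the overheads mentioned above arise.

```python
# Sketch of marking-graph construction for a simple Petri net:
# breadth-first exploration of the markings reachable from an
# initial marking. Transitions are (consume, produce) dicts
# keyed by place name.
from collections import deque

def marking_graph(initial, transitions):
    """Return the set of reachable markings (as sorted tuples)
    and the arcs of the marking graph."""
    def enabled(marking, consume):
        return all(marking.get(p, 0) >= n for p, n in consume.items())

    def fire(marking, consume, produce):
        m = dict(marking)
        for p, n in consume.items():
            m[p] -= n
        for p, n in produce.items():
            m[p] = m.get(p, 0) + n
        return m

    key = lambda m: tuple(sorted(m.items()))
    seen, arcs = {key(initial)}, []
    frontier = deque([initial])
    while frontier:
        m = frontier.popleft()
        for consume, produce in transitions:
            if enabled(m, consume):
                m2 = fire(m, consume, produce)
                arcs.append((key(m), key(m2)))
                if key(m2) not in seen:
                    seen.add(key(m2))
                    frontier.append(m2)
    return seen, arcs

# One transition p1 -> p2, starting with two tokens in p1:
states, arcs = marking_graph({"p1": 2, "p2": 0},
                             [({"p1": 1}, {"p2": 1})])
print(len(states))  # 3 reachable markings: (2,0), (1,1), (0,2)
```

The state-explosion problem is visible even in this frame: the number of markings grows combinatorially with tokens and places, which is why tool support and reduction techniques matter for realistic models.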
    27. 27. The Approach Not Simplified
    28. 28. Usability Evaluation Methods (UEM) <ul><li>UEMs conducted by experts </li></ul><ul><ul><li>Usability Inspection Methods, Guideline Reviews, … </li></ul></ul><ul><ul><li>Any type of interactive systems </li></ul></ul><ul><li>UEMs involving the user </li></ul><ul><ul><li>Empirical evaluation, Observations, … </li></ul></ul><ul><ul><li>Any type of interactive systems (from low-fi prototypes to deployed applications) </li></ul></ul>
    29. 29. Usability Evaluation Methods (UEM) <ul><li>Computer supported UEMs </li></ul><ul><ul><li>Automatic testing based on guidelines, … </li></ul></ul><ul><ul><li>Task models-based evaluations, metrics-based evaluation, … </li></ul></ul><ul><ul><li>Applications with standardized interaction techniques (Web, WIMP) </li></ul></ul>
    30. 30. Issues of Reliability and Usability <ul><li>Testing the usability of a non reliable system? </li></ul><ul><li>Constructing reliable systems without concerning usability? </li></ul><ul><li>Possible ways to enhance, extend, enlarge UEMs to address these needs? </li></ul>
    31. 31. Gathering feedback from the audience through case studies <ul><li>Do we need to integrate methods OR develop new methods? </li></ul><ul><ul><li>In favor of integration </li></ul></ul><ul><ul><ul><li>Joint meetings (including software developers) through brainstorming + rapid prototyping (more problems of non usable reliable systems) </li></ul></ul></ul><ul><ul><li>Problems </li></ul></ul><ul><ul><ul><li>Some issues are also related to system reliability (ATMs); problem of testing a prototype versus testing the system </li></ul></ul></ul><ul><li>Issues of development time rather than application type </li></ul><ul><li>Application type has an impact on the processes selected for development </li></ul><ul><li>Don’t know how to build a reliable interactive system … whatever time we have </li></ul><ul><li>How can reliability-oriented methods support usability-oriented methods? </li></ul>
    32. 32. Gathering feedback from the audience through case studies <ul><li>How to design for testability (both the reliability of the software and the usability)? </li></ul><ul><li>Is testing enough or do we need proof? </li></ul><ul><li>Usability testing is at a higher level of abstraction (goal oriented) while software testing is at a lower level (function oriented) </li></ul><ul><li>Is there an issue with interaction techniques (do we need a precise description of interaction techniques, and is it useful for usability testing)? </li></ul><ul><li>Automated testing through user-event simulation (how to understand how the user can react to that?) </li></ul><ul><li>Issue of reliability according to the intention of the user, and not only the reliability of the system per se </li></ul><ul><ul><li>Beyond one instance of use: reproducing the use many times </li></ul></ul>
    33. 33. Gathering feedback from the audience and case studies <ul><li>Control Room (Ron) </li></ul><ul><li>Home/Mobile – testing in non traditional environments (Regina) </li></ul><ul><li>Mining case study (Sandra) </li></ul>
    34. 34. First Case Study: Control Room
    35. 35. Advanced Control Room Design: Transitioning to new domains of Human System Interaction. Problem: Next-generation nuclear power plants with advanced instrumentation and controls (I&C), increased levels of automation, and onboard intelligence, all coupled with large-scale hydrogen production, present unique operational challenges. (Images: PBMR conceptual design; typical design; hybrid controls)
    36. 36. Example <ul><li>Software Interface with: </li></ul><ul><li>Cumbersome dialog box </li></ul><ul><li>No discernible exits </li></ul><ul><li>Good shortcuts </li></ul>
    37. 37. Example (heuristic evaluation matrix; resulting UEP = 0.1) UCC = 0.1 x 2 = 0.2
    38. 38. Second Case Study: Mobile interfaces
    39. 39. Testing Mobile Interfaces <ul><li>Lab or field </li></ul><ul><li>Method selection </li></ul><ul><li>Data gathering/analysis </li></ul><ul><li>Problematic area: testing in non-traditional environments </li></ul>
    40. 40. Non Traditional Environments <ul><li>Combine and balance different UEMs according to usability/reliability issues </li></ul><ul><li>Combine Lab and Field </li></ul><ul><li>Select UEMs according to development phase </li></ul>
    41. 41. Third Case Study: Mining Accident
    42. 42. Reminder
    43. 43. Events & Causal Factors Analysis (ECFA) <ul><li>Provides scenario of events and causal factors that contributed to the accident </li></ul><ul><ul><li>Chronologically sequential representation </li></ul></ul><ul><ul><li>Provides overall picture </li></ul></ul><ul><ul><li>Relation between factors </li></ul></ul><ul><li>Gain overall perspective of </li></ul><ul><ul><li>Causal factors such as conditions (pressure, temperature…), evolution of system states </li></ul></ul>
    44. 44. Analysing the accident <ul><li>Fatal mining accident involving human operators, piping system & control system </li></ul><ul><li>Decided to switch from North to South </li></ul><ul><li>Fuel didn’t arrive to plant kilns </li></ul><ul><li>Bled pipes while motors in operation </li></ul><ul><li>Motor speed auto-increase due to low pressure </li></ul><ul><li>Fuel hammer effect </li></ul><ul><li>Grinder exploded </li></ul>
    45. 45. ECFA Chart of the Accident
    46. 46. Listing of issues and solutions for interactive systems testing
    47. 47. <ul><li>Hybrid methods (Heuristic evaluation refined (prioritisation of Heuristics)) </li></ul><ul><li>Remote usability testing </li></ul><ul><li>Task analysis + system modelling </li></ul><ul><li>Cognitive walkthrough (as is) </li></ul>
    48. 48. Towards Solutions <ul><li>Formal models for supporting usability testing </li></ul><ul><li>Formal models for incidents and accidents analysis </li></ul><ul><li>Usability and human reliability analysis </li></ul>
    49. 49. Usability Heuristics <ul><li>Heuristics are key factors that comprise a usable interface (Nielsen & Molich, 1990) </li></ul><ul><li>Useful in identifying usability problems </li></ul><ul><li>Obvious cost savings for developers </li></ul><ul><li>9 heuristics identified for use in the present study </li></ul><ul><li>In our framework, these usability heuristics are used as “performance shaping factors” to constitute a usability error probability (UEP) </li></ul>
    50. 50. Heuristic Evaluation and HRA (diagrams: “standard” heuristic evaluation; HRA-based heuristic evaluation)
    51. 51. Heuristic Evaluation Matrix <ul><li>Steps </li></ul><ul><li>Determine level of heuristic </li></ul><ul><li>Determine product of heuristic multipliers </li></ul><ul><li>Multiply product by nominal error rate </li></ul>
    52. 52. Consequence Determination <ul><li>Strict consequence assignment in PRA/HRA, part of cut sets approach </li></ul><ul><li>More molar approach taken in the present study </li></ul><ul><li>“Likely effect of usability problem on usage” </li></ul><ul><ul><li>Not literal consequence model </li></ul></ul><ul><li>Results in usability consequence coefficient (UCC) </li></ul><ul><li>Four consequence levels assigned </li></ul><ul><ul><li>high, medium, low, and none </li></ul></ul>
    53. 53. Usability Consequence Matrix <ul><li>Steps </li></ul><ul><li>Determine level of usability consequence </li></ul><ul><li>Multiply UEP by consequence Multiplier </li></ul><ul><li>Usability Consequence Coefficient determines priority of fix </li></ul>
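The steps above, and the UEP construction two slides earlier, can be sketched as a short computation: scale a nominal error rate by the product of heuristic multipliers to get the UEP, then scale the UEP by a consequence multiplier to get the UCC. The specific multiplier values per consequence level are illustrative assumptions, not figures from the study (only the x2 in the worked Example slide is given).

```python
# Sketch of the UEP and UCC computations described on the slides.
# The consequence-level multipliers below are assumed for
# illustration; only the "low" value of 2 matches the slide example.
from math import prod

CONSEQUENCE_MULTIPLIER = {"high": 10, "medium": 5, "low": 2, "none": 1}

def uep(nominal_rate, heuristic_multipliers):
    """Usability error probability: nominal rate scaled by the
    product of the heuristic multipliers."""
    return nominal_rate * prod(heuristic_multipliers)

def ucc(uep_value, consequence_level):
    """Usability consequence coefficient: UEP scaled by the
    consequence multiplier; determines the priority of a fix."""
    return uep_value * CONSEQUENCE_MULTIPLIER[consequence_level]

# Matching the worked example on the Example slide:
print(ucc(uep(0.1, [1, 1, 1]), "low"))  # 0.1 x 2 = 0.2
```

Ranking usability problems by UCC rather than by raw problem counts is what gives the HRA-based evaluation its prioritization of fixes.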
    54. 54. Example <ul><li>Software Interface with: </li></ul><ul><li>Cumbersome dialog box </li></ul><ul><li>No discernible exits </li></ul><ul><li>Good shortcuts </li></ul>
    55. 55. Example (heuristic evaluation matrix; resulting UEP = 0.1) UCC = 0.1 x 2 = 0.2
    56. 56. Listing of issues and solutions for new interaction techniques testing
    57. 57. Roadmap on Testing Interactive Systems (diagram spanning 2006 – TODAY – 2009 – 2020, along four dimensions) <ul><li>Target applications, domains, context: business applications, web applications, mobile phones, command and control systems, gaming, mobile and web systems, automated autonomous real-time systems (VAL, TCAS) … all types of applications </li></ul><ul><li>Notations and tools: UML, E/R, … B (Atelier B), Z, … </li></ul><ul><li>User interface and interaction technique: no interaction technique, WIMP (hierarchical), direct manipulation, multimodal interaction, augmented reality, tangible user interfaces, embodied UI </li></ul><ul><li>Software engineering issues: full concurrency, dynamic instantiation, hardware/software, infinite number of states, tool support, advanced analysis techniques </li></ul>End goals: no more bugs? no more usability problems?
    58. 58. Future Plans and Announcements <ul><li>Future plans </li></ul><ul><ul><li>Web site is set up and will be populated (slides, list of attendees, topics, …) </li></ul></ul><ul><ul><li>Further work </li></ul></ul><ul><ul><ul><li>IFIP WG 13.5 on Human Error, Safety and System Development [email_address] </li></ul></ul></ul><ul><ul><ul><li>NoE ResIST (Resilience for IST) </li></ul></ul></ul><ul><ul><ul><li>Workshop on Testing in Non-Traditional Environments at CHI 2006 </li></ul></ul></ul><ul><ul><ul><li>MAUSE: </li></ul></ul></ul><ul><li>Announcements </li></ul><ul><ul><li>DSVIS 2006, HCI Aero, HESSD next year </li></ul></ul>
    59. 63. Best Achievable Practices for HR <ul><li>The Human Reliability Design Triptych </li></ul>
    60. 64. Best Practices for Design <ul><li>Compliance with applicable standards and best practices documents </li></ul><ul><ul><li>Where applicable, ANSI, ASME, IEEE, ISO, or other discipline-specific standards and best practices should be followed </li></ul></ul><ul><li>Consideration of system usability and human factors </li></ul><ul><ul><li>System should be designed according to usability and human factors standards such as NASA-STD-3000, MIL-STD-1472, or ISO </li></ul></ul><ul><li>Iterative design-test-redesign-retest cycle </li></ul><ul><li>Traceability of design decisions </li></ul><ul><ul><li>Where decisions have been made that could affect the functions of the system, these decisions should be clearly documented </li></ul></ul><ul><li>Verified reliability of design solutions </li></ul><ul><ul><li>Reliability of systems should be documented through vendor data, cross-reference to the operational history of similar existing systems, and/or test results. </li></ul></ul><ul><ul><li>It is especially important to project system reliability throughout the system lifecycle, including considerations for maintenance once the system has been deployed </li></ul></ul><ul><ul><li>It is also important to incorporate the estimated mean time before failure into the estimated life of the system </li></ul></ul>
    61. 65. Best Practices for Testing <ul><li>Controlled studies that avoid confounds or experimental artifacts </li></ul><ul><ul><li>Testing may include hardware reliability testing, human-system interaction usability evaluation, and software debugging </li></ul></ul><ul><li>Use of maximally realistic and representative scenarios, users, and/or conditions </li></ul><ul><ul><li>Testing scenarios and conditions should reflect the range of actions the system will experience in actual use, including possible worst-case situations </li></ul></ul><ul><li>Use of humans-in-the-loop testing </li></ul><ul><ul><li>A system that will be used by humans should always be tested by humans </li></ul></ul><ul><li>Use of valid metrics such as statistically significant results for acceptance criteria </li></ul><ul><ul><li>Where feasible, the metrics should reflect system or user performance across the entire range of expected circumstances </li></ul></ul><ul><ul><li>In many cases, testing will involve use of a statistical sample evaluated against a pre-defined acceptance (e.g., alpha) level for “passing” the test </li></ul></ul><ul><li>Documented test design, hypothesis, manipulations, metrics, and acceptance criteria </li></ul><ul><ul><li>Should include the test design, hypothesis (or hypotheses), manipulations, metrics, and acceptance criteria </li></ul></ul>
    62. 66. Best Practices for Modeling <ul><li>Compliance with applicable standards and best practices documents </li></ul><ul><ul><li>E.g., NASA NPR 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects or NRC NUREG-1792, Good Practices for Implementing Human Reliability Analysis </li></ul></ul><ul><li>Use of established modeling techniques </li></ul><ul><ul><li>It is better to use an existing, vetted method than to make use of novel techniques and methods that have not been established </li></ul></ul><ul><li>Validation of models to available operational data </li></ul><ul><ul><li>To ensure a realistic modeling representation, models must be baselined to data obtained from empirical testing or actual operational data </li></ul></ul><ul><ul><li>Such validation increases the veracity of model extrapolations to novel domains </li></ul></ul><ul><li>Completeness of modeling scenarios at the correct level of granularity </li></ul><ul><ul><li>A thorough task analysis, a review of relevant past operating experience, and a review by subject matter experts help to ensure the completeness of the model </li></ul></ul><ul><ul><li>The appropriate level of task decomposition or granularity should be determined according to the modeling method’s requirement, the fidelity required to model success and failure outcomes, and specific requirements of the system that is being designed </li></ul></ul><ul><li>Realistic model end states </li></ul><ul><ul><li>End states should reflect reasonable and realistic outcomes across the range of operating scenarios </li></ul></ul>