2008 - Building Software: An Artful Science [ppt]

1. Building Software: An Artful Science
   Michael Hogarth, MD

2. Software development is risky
  - IBM Consulting Group survey:
    - 55% of the software developed cost more than projected
    - 68% took longer to complete than predicted
    - 88% had to be substantially redesigned
  - Standish Group study of 8,380 software projects (1996):
    - 31% of software projects were canceled before they were completed
    - 53% of those completed cost an average of 189% of their original estimates
    - only 42% of completed projects had their original set of proposed features and functions
    - 9% were completed on time and on budget
  "To err is human; to really foul things up requires a computer."

3. Standish Group report 2006
  - 19% of projects were outright failures
  - 35% could be categorized as successes (better than 1996, but not great)
  - 46% of projects were "challenged" (had cost overruns, delays, or both)

4. McDonald's gets McFried
  - McDonald's "Innovate" project
    - $500 million spent for nothing...
    - Objective: "McDonald's planned to spend $1 billion over five years to tie all its operations in to a real-time digital network. Eventually, executives in company headquarters would have been able to see how soda dispensers and frying machines in every store were performing."
    - Why was it scrubbed? "Information systems don't scrub toilets and they don't fry potatoes."
  Barrett, 2003. http://www.baselinemag.com/c/a/Projects-Supply-Chain/McDonalds-McBusted/

5. FBI's "Virtual Case File"
  - 2003 - Virtual Case File - a networked system for tracking criminal cases
  - SAIC spent months writing over 730,000 lines of computer code
  - Found to have hundreds of software problems during testing
  - The $170 million project was cancelled -- SAIC reaped more than $100 million
  - Problems
    - delayed by over a year; in 2004 the system delivered a tenth of the intended functionality and was thus largely unusable after $170 million spent
    - SAIC delivered what the FBI requested, but the requests were flawed, poorly planned, and not tied to scheduled deliverables
  - Now what?
    - Lockheed Martin given a contract for $305 million tied to benchmarks
  http://www.washingtonpost.com/wp-dyn/content/article/2006/08/17/AR2006081701485_pf.html

6. Causes of the VCF failure
  - Changing requirements (conceived before 9/11; after 9/11 the requirements were altered significantly)
  - 14 different managers over the project's lifetime (2 years)
  - Poor oversight by the primary 'owner' of the project (the FBI), which did not oversee construction closely
  - Did not pay attention to new, better commercial products -- kept its head in the sand because the system "had to be built fast"
  - Hardware was purchased first and sat waiting on software (a common problem) -- if the software is delayed, the hardware quickly becomes legacy
  http://www.inf.ed.ac.uk/teaching/courses/seoc2/2004_2005/slides/failures.pdf

7. Washington State Licensing Dept
  - 1990 - Washington State License Application Mitigation Project
  - $41.8 million over 5 years to automate the state's vehicle registration and license renewal process
  - 1993 - after $51 million, the original design and requirements were expected to be obsolete by the time the system was finally built
  - 1997 - the Washington legislature pulled the plug -- $40 million wasted
  - Causes
    - overly ambitious
    - lack of early deliverables
    - development split between in-house staff and a contractor

8. J Sainsbury IT failure
  - UK food retailer J Sainsbury invested in an automated supply-chain management system
  - The system did not perform the functions as needed
  - As a result, merchandise was stuck in company warehouses and not getting to the stores
  - The company added 3,000 additional clerks to stock the shelves manually
  - They killed the project after spending $526 million...
  "To err is human; to really foul up requires a root password." -- anonymous

9. Other IT nightmares
  - 1999 - the $125 million NASA Mars Climate Orbiter is lost in space due to a data conversion error
  - Feb 2003 - the U.S. Treasury Dept. mails 50,000 Social Security checks without beneficiary names; the checks had to be cancelled and reissued
  - 2004-2005 - UK Inland Revenue (the British equivalent of the IRS) software errors contribute to a $3.45 billion tax-credit overpayment
  - May 2005 - Toyota has to install a software fix on 20,000 hybrid Prius vehicles due to problems with invalid engine warning lights; it is estimated that the automobile industry spends $2-3 billion per year fixing software problems
  - Sept 2006 - a U.S. government student loan service software error makes public the personal data of 21,000 borrowers on its web site
  - 2008 - at the new Terminal 5 at Heathrow Airport, a new automated baggage routing system leads to over 20,000 bags being put in temporary storage

10. Does it really matter?

11. Software bugs can kill...
  http://www.wired.com/software/coolapps/news/2005/11/69355

12. When users inadvertently cause disaster
  http://www.wired.com/software/coolapps/news/2005/11/69355?currentPage=2

13. How does this happen?
  - Many runaway projects are 'overly ambitious' -- a major issue (senior management has unrealistic expectations of what can be done)
  - Most projects failed because of multiple problems/issues, not one
  - Most problems/issues were management related
  - In spite of obvious signs of a runaway software project (72% of project members are aware), only 19% of senior management is aware
  - Risk management, an important part of identifying trouble and managing it, was NOT done in any fashion in 55% of major runaway projects

14. Causes of failure
  - Project objectives not fully specified -- 51%
  - Bad planning and estimating -- 48%
  - Technology new to the organization -- 45%
  - Inadequate/no project management methods -- 42%
  - Insufficient senior staff on the team -- 42%
  - Poor performance by suppliers of software/hardware (contractors) -- 42%
  http://members.cox.net/johnsuzuki/softfail.htm

15. The cost of IT failures
  - 2006 - $1 trillion spent on IT hardware, software, and services worldwide
  - 18% of all IT projects will be abandoned before delivery (18% of $1 trillion = $180 billion?)
  - 53% will be delivered late or have cost overruns
  - 1995 - Standish estimated the U.S. spent $81 billion on cancelled software projects

16. Conclusions
  - IT projects are more likely to be unsuccessful than successful
  - Only 1 in 5 software projects brings full satisfaction (succeeds)
  - The larger the project, the more likely the failure
  http://www.it-cortex.com/Stat_Failure_Rate.htm#The%20Robbins-Gioia%20Survey%20(2001)

17. Software as engineering
  - Software has been viewed more as "art" than engineering
    - this has led to a lack of structured methods and organization for building software systems
  - Why is a software development methodology important?
    - programmers are expensive
    - many software system failures can be traced to poor software development
      - requirements gathering is incomplete or not well organized
      - requirements are not communicated effectively to the software programmers
      - inadequate testing (because testers don't understand the requirements)

18. Software Development Lifecycle
  - Domain Analysis
  - Software Analysis
  - Requirements Analysis
  - Specification Development
  - Programming (software coding)
  - Testing
  - Deployment
  - Documentation
  - Training and Support
  - Maintenance

19. Software Facts and Figures
  - Maintenance consumes 40-80% of software costs over the lifetime of a software system -- the most important part of the lifecycle
  - Error correction accounts for 17% of software maintenance costs
  - Enhancement is responsible for 60% of software maintenance costs -- most of the cost is adding new capability to old software, NOT 'fixing' it
  - Relative time spent on phases of the lifecycle
    - Development -- defining requirements (15%), design (20%), programming (20%), testing and error removal (40%), documentation (5%)
    - Maintenance -- defining the change (15%), documentation review (5%), tracing logic (25%), implementing the change (20%), testing (30%), updating documentation (5%)
  RL Glass. Facts and Fallacies of Software Engineering. 2003

20. Software development models
  - Waterfall model
    - specification --> development --> testing --> deployment
    - although many still use it, it is flawed and at the root of much of the waste in software development today
  - Evolutionary development -- interleaves the activities of specification, development, and validation (testing)

21. Evolutionary development
  - Exploratory development
    - work with customers/users to explore their requirements and deliver a final system; development starts with the parts of the system that are understood, and new features are added in an evolutionary fashion
  - Throw-away prototyping
    - create a prototype (not a formal system) that allows the customers'/users' requirements to be understood, then build "the real thing"
  Sommerville, Software Engineering, 2004

22. Spiral Model
  - a process that goes through all steps of the software development lifecycle repeatedly, with each cycle ending in a prototype for the user to see -- it is just for getting the requirements "right"; the prototypes are discarded after each iteration

23. Challenges with Evolutionary Development
  - The process is not visible to management -- managers often need regular deliverables to measure progress
    - this causes a disconnect: managers want "evidence of progress", yet the evolutionary process is fast and dynamic, making deliverables not cost-effective to produce (they change often)
  - The system can have poor structure
    - continual change can create poor system structure
    - incorporating changes becomes more and more difficult
  Sommerville, Software Engineering, 2004

24. Agile software development
  - Refers to a group of software development methods that promote iterative development, open collaboration, and adaptable processes
  - Key characteristics
    - minimize risk by developing software in multiple repetitions (timeboxes); iterations last 2-4 weeks
    - each iteration passes through a full software development lifecycle: planning, requirements gathering, design, writing unit tests, then coding until the unit tests pass (sketched below), and acceptance testing by end-users
    - emphasizes face-to-face communication over written communication

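A minimal sketch of the "write unit tests, then code until they pass" step, in Python. The dose-parsing function and its behavior are invented for illustration:

```python
import unittest

# Hypothetical function under development: the tests below are written
# first, and coding continues until they pass.
def parse_dose(text):
    """Parse a dose string like '250 mg' into (amount, unit)."""
    amount, unit = text.split()
    return float(amount), unit

class TestParseDose(unittest.TestCase):
    def test_parses_amount_and_unit(self):
        self.assertEqual(parse_dose("250 mg"), (250.0, "mg"))

    def test_rejects_garbage(self):
        # "not-a-dose".split() yields one token, so unpacking raises ValueError
        with self.assertRaises(ValueError):
            parse_dose("not-a-dose")

if __name__ == "__main__":
    unittest.main()
```
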
25. Agile software methods
  - Scrum
  - Crystal Clear
  - Extreme Programming
  - Adaptive Software Development
  - Feature Driven Development
  - Test Driven Development
  - Dynamic Systems Development

26. Scrum
  - A type of Agile methodology
  - Composed of "sprints" that run anywhere from 15-30 days, during which the team creates an increment of potentially shippable software
  - The features that go into a sprint come from the "product backlog", a set of prioritized high-level requirements of work to be done
  - During a backlog meeting, the product owner tells the team which items in the backlog they want completed
  - The team decides how much can be completed in the next sprint (a sketch of this selection follows)
  - Requirements are frozen for a sprint -- no wandering or scope shifting
  http://en.wikipedia.org/wiki/Scrum_(development)

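A minimal sketch of that sprint-planning step, assuming a simple point-based capacity model; the backlog items and estimates are invented:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int   # 1 = highest
    points: int     # team's effort estimate

def plan_sprint(backlog, capacity):
    """Take items in priority order until the team's capacity is used up."""
    sprint, remaining = [], capacity
    for item in sorted(backlog, key=lambda i: i.priority):
        if item.points <= remaining:
            sprint.append(item)
            remaining -= item.points
    return sprint

backlog = [
    BacklogItem("Patient search", 1, 8),
    BacklogItem("Medication list", 2, 5),
    BacklogItem("Audit log export", 3, 13),
]
for item in plan_sprint(backlog, capacity=15):
    print(item.title)   # Patient search, Medication list
```
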
27. Scrum and useable software...
  - A key feature of Scrum is the idea that one creates useable software with each iteration
    - It forces the team to architect "the real thing" from the start -- not a "prototype" developed only for demonstration purposes
      - For example, a system would start by using the planned architecture (a web-based application using Java 2 Enterprise Edition, an Oracle database, etc.)
      - It helps uncover many potential problems with the architecture, particularly one that requires a number of integrated components (drivers that don't work, connections between machines, software compatibility with the operating system, digital certificate compatibility or usability, etc.)
      - It allows users and management to actually use the software as it is being built... invaluable!

28. Scrum team roles
  - Pigs and chickens -- think scrambled eggs and bacon: the chicken is supportive, but the pig is committed
  - Scrum "pigs" are committed to building the software regularly and frequently
    - Scrum Master -- acts as a project manager and removes impediments to the team delivering the sprint goal; not the leader of the team, but a buffer between the team and any chickens or distracting influences
    - Product owner -- the person who has commissioned the project/software, also known as the "sponsor" of the project
  - Scrum "chickens" are everyone else
    - users, stakeholders (customers, vendors), and other managers

29. Adaptive project management
  - Scrum general practices
    - customers become part of the development team (you have to have interested users...)
    - Scrum is meant to deliver working software after each sprint, and users should interact with this software and provide feedback
    - transparency in planning and development -- everyone should know who is accountable for what and by when
    - stakeholder meetings to monitor progress
    - no problems are swept under the carpet -- nobody is penalized for uncovering a problem
  http://en.wikipedia.org/wiki/Scrum_(development)

30. Typical Scrum Artifacts
  - Sprint Burn Down Chart
    - a chart showing the features for that sprint and the daily progress in completing them (see the sketch below)
  - Product Backlog
    - a list of the high-level requirements (in plain 'user speak')
  - Sprint Backlog
    - a list of tasks to be completed during the sprint

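A minimal sketch of the data behind a burn down chart: remaining work (in points) recorded at the end of each day, compared against an ideal straight-line burn. All numbers are invented:

```python
# Remaining points at the end of each sprint day (day 0 = sprint start)
remaining_by_day = [40, 36, 33, 33, 25, 20, 14, 9, 4, 0]

days = len(remaining_by_day)
ideal_per_day = remaining_by_day[0] / (days - 1)

for day, actual in enumerate(remaining_by_day):
    ideal = remaining_by_day[0] - ideal_per_day * day
    print(f"day {day}: actual {actual:3d}  ideal {ideal:5.1f}")
```
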
31. Agile methods and systems
  - Agile works well for small to medium sized projects (around 50,000-100,000 lines of source code)
  - Difficult to implement in large, complex system development with hundreds of developers in multiple teams
  - Requires that each team be given "chunks of work" they can develop
  - Integration is key -- need standard components and standards for coding, interconnecting, and data modeling so each team does not create its own naming conventions and interfaces to its components

32. Quality assurance
  - The MOST IMPORTANT ASPECT of software development
  - Quality assurance does not start with "testing"
  - Quality assurance starts at the requirements gathering stage
    - "software faults" -- when the software does not perform as the user intended
      - bugs
        - requirements are good/accurate, but the programming causes a crash or other unexpected abnormal state
        - requirements were wrong, programming was correct -- still a bug from the user's perspective

33. Some facts about bugs
  - Bugs in the form of poor requirements gathering or poor communication with programmers are by far the most expensive in a software development effort
  - Bugs caught at the requirements or design stage are cheap to fix
  - Bugs caught in the testing phase are expensive to fix
  - Bugs not caught are VERY EXPENSIVE in many ways
    - loss of customer/user trust
    - the need to "fix it quick" lends itself to yet more problems, because everyone is panicking to get it fixed asap

34. Software testing
  - System testing
    - "black box" testing
    - "white box" testing
  - Regression testing

35. Black box testing
  - Treats the software as a black box, without knowledge of its interior workings
  - Focuses simply on testing the functionality according to the requirements
  - The tester inputs data and observes the output from the process (see the sketch below)

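A minimal black-box-style sketch: the test exercises a hypothetical login function purely through its inputs and outputs, with no knowledge of how it is implemented:

```python
import unittest

# Hypothetical function under test; the black-box tester only knows the
# requirement "valid credentials succeed, anything else fails".
def login(username, password):
    return username == "mhogarth" and password == "s3cret"

class TestLoginBlackBox(unittest.TestCase):
    def test_valid_credentials_succeed(self):
        self.assertTrue(login("mhogarth", "s3cret"))

    def test_invalid_password_fails(self):
        self.assertFalse(login("mhogarth", "wrong"))

if __name__ == "__main__":
    unittest.main()
```
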
36. White box testing
  - The tester has knowledge of the internal data structures and algorithms
  - Types of white box testing
    - Code coverage -- the tester creates tests that cause all statements in the program to be executed at least once (see the sketch below)
    - Mutation testing -- the code is modified slightly to emulate typical programmer mistakes (using the wrong operator or variable name), to check whether the test suite catches the change
    - Fault injection -- introduce faults into the system on purpose to test error handling; makes sure the error occurs as expected and the system handles it rather than crashing or producing an incorrect state or response
    - Static testing -- primarily syntax checking and manual reading of the code to find errors (code inspections, walkthroughs, code reviews)

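A minimal code-coverage sketch: the tester has read this (hypothetical) function and writes one test per branch so that every statement executes at least once:

```python
import unittest

# Hypothetical function; its three branches dictate the three tests below.
def classify_bp(systolic):
    if systolic < 90:
        return "low"
    elif systolic < 140:
        return "normal"
    else:
        return "high"

class TestClassifyBPCoverage(unittest.TestCase):
    def test_low_branch(self):
        self.assertEqual(classify_bp(85), "low")

    def test_normal_branch(self):
        self.assertEqual(classify_bp(120), "normal")

    def test_high_branch(self):
        self.assertEqual(classify_bp(160), "high")

if __name__ == "__main__":
    unittest.main()
```
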
37. Test Plan
  - Outlines the way tests will be developed and the naming and classification of the various failures (critical, show stopper, minor, etc.)
  - Outlines the features to be tested, the approach to be used, and suspension criteria (the conditions under which testing is suspended)
  - Describes the test environment, including hardware, networking, databases, software, operating system, etc.
  - Schedule -- lays out a schedule for the testing
  - Acceptance criteria -- an objective quality standard the software must meet to be considered ready for release (maximum defect count and severity levels, minimum test coverage, etc.)
  - Roles and responsibilities -- who does what in the testing process

38. Test cases
  - A description of a specific test or interaction that exercises a single behavior or function in the software
  - Similar to 'use cases' in that they outline a scenario of interaction -- however, one can have many tests for a single use case
    - Example -- login is a use case; you need one test for successful login, one for unsuccessful login, and tests for password expiration, lockout, how many tries before lockout, etc.

39. Components of a test case
  - A name and number for the test case
  - The requirement(s) or feature(s) the test case is exercising
  - Preconditions -- what must be in place for the test to take place
    - example: to test whether one can register a death certificate, one must have a death certificate that is filled out, has passed validations, and has been submitted to the local registrar
  - Steps -- a list of steps describing how to perform the test (log in, select patient A, select medication list, pick amoxicillin, click 'submit to pharmacy', etc.)
  - Expected results -- describe the expected results up front so the tester knows whether the test failed or passed
  (a sketch of a test case record follows)

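A minimal sketch of a test case record holding those components, as a Python dataclass; the identifiers and field values describe a hypothetical login-lockout test:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    number: str                 # name and number
    name: str
    requirement: str            # requirement being exercised
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    number="TC-042",
    name="Account locks after three failed logins",
    requirement="REQ-AUTH-7",
    preconditions=["User account 'jsmith' exists and is active"],
    steps=[
        "Enter username 'jsmith' with a wrong password",
        "Repeat twice more (three failures total)",
        "Attempt to log in with the correct password",
    ],
    expected_result="Login is refused and the account shows as locked",
)
print(tc.number, "-", tc.name)
```
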
40. Regression testing
  - Designed to find 'software regressions' -- previously working functionality that no longer works because of changes made in other parts of the system
  - As software is versioned, this is the most common type of bug or "fault"
  - The list of regression tests grows
    - a test for the functions in all previous versions
    - a test for any previously found bug -- create a test for that scenario (see the sketch below)
  - Manual vs. automated
    - mostly done manually, but can be automated -- we have automated 500 tests

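A minimal regression-test sketch: once a bug is found and fixed, a test that reproduces the original failing scenario stays in the suite so a later release cannot silently reintroduce it. The billing function and bug number are invented:

```python
import unittest

def total_charge(items):
    # A hypothetical v1.2 crashed on an empty order; the fix returns 0.0
    if not items:
        return 0.0
    return round(sum(items), 2)

class TestBillingRegressions(unittest.TestCase):
    def test_bug_517_empty_order(self):
        """v1.2 raised an exception for an order with no line items."""
        self.assertEqual(total_charge([]), 0.0)

    def test_normal_order_still_works(self):
        self.assertEqual(total_charge([19.99, 5.01]), 25.0)

if __name__ == "__main__":
    unittest.main()
```
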
41. Risk is good.... huh?
  - There is no worthwhile project that has no risk -- risk is part of the game
  - Those who run away from risk and focus on what they know never advance the standard, and they leave the field open to their competitors
  - Example: Merrill Lynch at first ignored online trading, allowing other brokerage firms to create a new market (E*Trade, Fidelity, Schwab); Merrill Lynch eventually entered 10 years later
  - Staying still (avoiding risk) means you are moving backwards
    - Bob Charrette's risk escalator -- everyone is on an escalator that is moving against them: you have to walk to stay put and run to get ahead; if you stop, you start moving backwards
  DeMarco and Lister. Waltzing with Bears: Managing Risk on Software Projects. 2003.

42. But don't be blind to risk
  - Big risk takers sometimes have a tendency to emphasize positive thinking while ignoring the consequences of the risks they are taking
  - If there are things that could go wrong, don't be blind to them -- they exist and you need to recognize them
  - If you don't think of it, you can be blind-sided by it
  DeMarco and Lister. Waltzing with Bears: Managing Risk on Software Projects. 2003.

43. Examples of risks
  - BCT.org -- a dependency on externally built and maintained software (caMATCH)
  - BCT.org -- a need to have a hard "launch" date
  - eCareNet -- a dependency on complex software understood only by a small group of "gurus" (the Tolven system)
  - TRANSCEND -- integration of system components that have never been integrated before (this is common -- first-time integration)
  - TRANSCEND -- clinical input to the CRF process has never been done before
  - TRANSCEND -- involves multiple sites not under our control; user input will be difficult to obtain because everyone is busy; training will be difficult because everyone is busy; there are likely detractors already, and we have no voice in their venue
  "Risk management often gives you more reality than you want." -- Mike Evans, Senior VP, ASC Corporation

44. Managing risks
  - What is a risk? -- "a possible future event that will lead to an undesirable outcome"
  - Not all risks are the same
    - they have different probabilities of happening
    - they have different consequences -- high impact, low impact
    - some may or may not have alternative actions to avoid or mitigate the risk if it comes to pass -- "is there a feasible plan B?"
  - "Problem" -- a risk is a problem that has yet to occur; a problem is a risk that has occurred
  - "Risk transition" -- when a risk becomes a problem; the risk is said to have 'materialized'
  - "Transition indicator" -- something that suggests the risk may transition to a problem (example: Russia masses troops on the Georgian border...)
  DeMarco and Lister. Waltzing with Bears: Managing Risk on Software Projects. 2003.

45. Managing risks
  - Mitigation -- steps you take before the transition, or after it, to make corrections (if possible) or to minimize the impact of the now-problem
  - Steps in risk management (a sketch of exposure analysis follows)
    - risk discovery
    - exposure analysis (impact analysis)
    - contingency planning -- creating plan B, plan C, etc. as options to engage if the risk materializes
    - mitigation -- steps taken before transition to make contingency actions possible
    - transition monitoring -- tracking managed risks, looking for transitions and materializations (risk management meetings)
  DeMarco and Lister. Waltzing with Bears: Managing Risk on Software Projects. 2003.

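A minimal sketch of exposure analysis, assuming the common formulation exposure = probability x cost of the undesirable outcome; every risk, probability, and dollar figure here is invented for illustration:

```python
risks = [
    {"name": "Vendor misses delivery", "probability": 0.30, "cost": 200_000},
    {"name": "Key developer leaves",   "probability": 0.10, "cost": 150_000},
    {"name": "Requirements shift",     "probability": 0.60, "cost": 80_000},
]

# Exposure = probability of transition x cost if the risk materializes
for risk in risks:
    risk["exposure"] = risk["probability"] * risk["cost"]

# Highest exposure first: these get contingency plans and transition monitoring
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]:28s} ${risk["exposure"]:>10,.0f}')
```
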
46. Common software project risks
  - Schedule flaw -- almost always due to neglecting or minimizing work that is necessary
  - Scope creep (requirements inflation) or scope shifting (because of market conditions or changes in business requirements) -- inevitable; don't believe you can keep scope 'frozen' for very long
    - recognize it, create a mitigation strategy, recognize the transition, and create a contingency
    - for example, if requirements need to be added or changed, make sure 'management' is aware of the consequences and adjustments are made to capacity, expectations, timeline, and budget
    - it is not bad to change scope -- it is bad to change scope and believe nothing else needs to change
  DeMarco and Lister. Waltzing with Bears: Managing Risk on Software Projects. 2003.

47. "Post mortem" evaluations
  - No project is "100% successful" -- they all have problems; some have fewer than others, some have fatal ones
  - It is critical to evaluate projects after they are completed to characterize common risks/problems and establish methods of mitigation before the next project

48. Capability Maturity Model (CMM)
  - A measure of the 'maturity' of an organization in how it approaches projects
  - Originally developed as a tool for assessing the ability of government contractors' processes to perform a contracted software project (can they do it?)
  - Maturity levels run from 1 to 5; at level 5 the process is optimized by continuous process improvement

49. CMM in detail
  - Level 1 - Ad hoc: processes are undocumented and in a state of dynamic change; everything is 'ad hoc'
  - Level 2 - Repeatable: some processes are repeatable, with possibly consistent results
  - Level 3 - Defined: a set of defined and documented standard processes subject to improvement over time
  - Level 4 - Managed: process metrics are used to control the process; management can identify ways to adjust and adapt it
  - Level 5 - Optimized: process improvement objectives are established (post mortem evaluations...), and process improvements are developed to address common causes of process variation

50. Why medical software is hard...
  Courtesy Dr. Andy Coren, Health Information Technology: A Clinician's View. 2008

51. Healthcare IT failures
  - Hard to discover -- nobody airs dirty laundry
  - West Virginia -- a system had to be removed a week after implementation
  - Mt Sinai -- 6 weeks after implementation, a system was "rolled back" due to staff complaints