Traditional Testing is a Failure


  • The Hungarian émigrés who were instrumental in the Manhattan Project (e.g., Leo Szilard, Edward Teller, János von Neumann, and Jenő Pál Wigner). Enrico Fermi referred to these brilliant Hungarian scientists as “the Martians,” based on speculation that a spaceship from Mars dropped them all off in Budapest in the early 1900s. Szilard's ideas included the linear accelerator, cyclotron, electron microscope, and nuclear chain reaction. Equally important was his insistence that scientists accept moral responsibility for the consequences of their work. In his classic 1929 paper on Maxwell's Demon, Szilard identified the unit or "bit" of information. The World Wide Web that you now travel, and the computers that make it possible, show the importance of his long-unappreciated idea. Edward Teller is a senior research fellow at the Hoover Institution, where he specializes in international and national policies concerning defense and energy. Teller is most widely known for his significant contributions to the first demonstration of thermonuclear energy; in addition he has added to the knowledge of quantum theory, molecular physics, and astrophysics. He served as a member of the General Advisory Committee of the U.S. Atomic Energy Commission from 1956 to 1958 and was chairman of the first Nuclear Reaction Safeguard Committee.
  • This presentation discusses: where V&V fits within the software product development life cycle; which of the people, product and process skills are the most relevant to V&V; compares and contrasts static versus dynamic testing; explains the importance of V&V to the software project; reasons with those who feel that software reviews, inspections and walkthroughs are not economically feasible; describes the salient aspects of static testing (why, what, who, when, and how); explains the purpose of testing; describes the major types of dynamic tests (unit, subsystem/integration, system, alpha, beta, user acceptance, regression); compares and contrasts white box and black box testing; describes usability testing; and explains how a manager knows when it is time to stop testing.
  • The International Marine Contractors Association (IMCA) is the international trade association representing offshore diving, marine and underwater engineering companies. It was formed in April 1995 from the amalgamation of AODC (the International Association of Offshore Diving Contractors) and DPVOA (the Dynamic Positioning Vessel Owners Association). IMCA's Marine Division covers all aspects of vessel operations and marine equipment. Focusing on dynamic positioning, other key areas of interest are position reference equipment, reliability of systems and thruster-assisted systems. As part of the above extensive work program, the division runs a DP operators' logbook scheme and each year publishes a report on dynamically positioned (DP) failure incidents. The division has, of late, expanded its work to include a broader range of specialist vessel operations, in particular focusing on offshore crane and other lifting equipment issues. Since 1990, IMCA reports have demonstrated the risk inherent in DP system software. This figure shows the published information on DP incidents with the percentage caused by software. With an eleven-year average of 20% (1 incident in 5 caused by software), any reasonably competent engineer would view the potential of a software problem as a high risk. [1]
  • This figure shows the percentage of “Loss of Position Class 1” DP problems that were caused by software, using data for the latest 5 years. This is the most serious class of DP incident, one that could result in “serious loss of position with serious actual or potential consequences.” On average, 33% (1 in 3) of the most serious DP incidents were attributable to software problems.

    1. 1. Traditional Software Testing is a Failure! Linda Shafer University of Texas at Austin [email_address] Don Shafer Chief Technology Officer, Athens Group, Inc. [email_address]
    2. 2. Seminar Dedication Enrico Fermi referred to these brilliant Hungarian scientists as “the Martians,” based on speculation that a spaceship from Mars dropped them all off in Budapest in the early 1900’s.
    3. 3. And let us not forget….
       - 1994: John Harsányi (b. 5/29/1920, Budapest, Hungary; d. 2000), "For his pioneering analysis of equilibrium in the theory of non-cooperative games."
       - Shared the prize with A Beautiful Mind's John Nash
    4. 4. Special Thanks to: Rex Black RBCS, Inc. 31520 Beck Road Bulverde, TX 78163 USA Phone: +1 (830) 438-4830 Fax: +1 (830) 438-4831 [email_address] Gary Cobb’s testing presentations for SWPM as excerpted from: SOFTWARE ENGINEERING: a Practitioner’s Approach , by Roger Pressman, McGraw-Hill, Fourth Edition, 1997 Chapter 16: “Software Testing Techniques” Chapter 22: “Object-Oriented Testing” UT SQI SWPM Software Testing Sessions from 1991 to 2002 … for all they provided in past test guidance and direct use of some material from the University of Texas at Austin Software Project Management Certification Program.
    5. 5. Traditional Development Life Cycle Model
       [Diagram: process steps (Requirements Definition, High Level Design, Detail Design, System Construction, Verification & Validation, System Delivery), each followed by a review gate; Prototypes 1-3 feed the early steps, and a Post-Implementation Review closes the cycle. Project management support processes: risk reduction, training, planning, configuration management, estimating, metrics, quality assurance.]
    6. 6. Software Failures – NIMBY!
       - It took the European Space Agency 10 years and $7 billion to produce Ariane 5, a giant rocket capable of hurling a pair of three-ton satellites into orbit with each launch and intended to give Europe overwhelming supremacy in the commercial space business. All it took to explode that rocket less than a minute into its maiden voyage last June, scattering fiery rubble across the mangrove swamps of French Guiana, was a small computer program trying to stuff a 64-bit number into a 16-bit space. (James Gleick, 1996)
       - Columbia and other space shuttles have a history of computer glitches that have been linked to control systems, including left-wing steering controls, but NASA officials say it is too early to determine whether those glitches could have played any role in Saturday's shuttle disaster. … NASA designed the shuttle to be controlled almost totally by onboard computer hardware and software systems. It found that direct manual intervention was impractical for handling the shuttle during ascent, orbit or re-entry because of the required precision of reaction times, systems complexity and size of the vehicle. According to a 1991 report by the General Accounting Office, the investigative arm of Congress, "sequencing of certain shuttle events must occur within milliseconds of the desired times, as operations 10 to 400 milliseconds early or late could cause loss of crew, loss of vehicle, or mission failure."
       - After the hydraulic line on their V-22 Osprey aircraft ruptured, the four Marines on board had 30 seconds to fix the problem or perish. Following established procedures, when the reset button on the Osprey's primary flight control system lit up, one of the pilots — either Lt. Col. Michael Murphy or Lt. Col. Keith Sweaney — pushed it. Nothing happened. But as the button was pushed eight to 10 times in 20 seconds, a software failure caused the tilt-rotor aircraft to swerve out of control, stall and then crash near Camp Lejeune, N.C. … The crash was caused by "a hydraulic system failure compounded by a computer software anomaly," he said. Berndt released the results of a Corps legal investigation into the crash at an April 5 Pentagon press briefing.
    7. 7. Software Failures – Offshore
       - Material Handling
       - Drive Off under Dynamic Positioning
       - "Computer science also differs from physics in that it is not actually a science. It does not study natural objects. Neither is it, as you might think, mathematics; although it does use mathematical reasoning pretty extensively. Rather, computer science is like engineering - it is all about getting something to do something, rather than just dealing with abstractions as in pre-Smith geology."
         - Richard Feynman, Feynman Lectures on Computation, 1996, pg. xiii
    8. 8. Material Handling Under Software Control
    9. 9. Drive Off
       - Drive-off is caused by a fault in the thrusters and/or their control system and a fault in the position reference system. As for the drift-off incidents, the failure frequencies are derived from the DPVOA database, which consists mainly of DP Class 2 vessels. For the drive-off incident, these figures are representative also for DP Class 3 vessels. The major difference between DP Class 2 and 3 is that only the latter is designed to withstand fire and flooding in any one compartment. These two incident types do not affect the drive-off frequency significantly, because drive-off is caused by logical failures in the thrusters, their control system and/or the position reference system. Hence, the DP Class 2 figures are used directly for the DP Class 3 vessel as well.
       - The frequency of faults in the thrusters and/or their control system can be obtained from data in Ref. /A-4/. The control system is here meant to include all relevant software and hardware. The statistics show that software failure is about four times as frequent as hardware failure and slightly more frequent than pure thruster failure.
         - KONGSBERG OFFSHORE A.S., DEMO 2000 – Light Well Intervention, HSE Evaluation, Report No. 00-4002, Revision No. 00
    10. 10. IMCA* Software Incidents as a Percentage of the Total (*International Marine Contractors Association)
       [Chart: annual DP incident totals, 1990–2000 plus the average, split into software-caused and other incidents.]
    11. 11. Class 1: “Serious loss of position with serious actual or potential consequences.”
    12. 13. General Testing Remarks
       - a critical element of software quality assurance
       - represents the ultimate review of specification, design and coding
       - may use up to 40% of the development resources
       - the primary goal of testing is to think of as many ways as possible to bring the newly-developed system down
       - demolishing the software product that has just been built
       - requires learning to live with the humanness of practitioners
       - requires an admission of failure in order to be good at it
    13. 14. Organizing for Software Testing
       - The software developer is always responsible for testing the modules
       - An independent test group removes the conflict of interest inherent in expecting the developer to perform the other tests
       - Customer-witnessed testing
       [Diagram: System Engineering → Software Requirements → Software Design → Code, with Unit Test, Integration Test, Validation Test and System Test mapped against those stages.]
    14. 15. Cost Impact of Software Defects
       - Benefits of formal reviews
         - early discovery of software defects
         - design activities introduce between 50% and 65% of all defects of the development process
         - formal reviews can find up to 75% of the design flaws
       [Chart: relative cost of defect repair by phase — design 1%, coding 8%, testing 16%, release 73% — with model output plotted against staffing months and the PDR, CDR, TRR and FCA/PCA milestones.]
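The economic argument above can be put to numbers with a toy model. The 1×/10×/100× cost-growth factors and the catch rates below are common industry assumptions chosen for illustration, not figures from this talk:

```python
# Back-of-envelope: the cost of fixing a defect grows the later it is found.
# Relative cost factors are illustrative industry assumptions.
COST = {"review": 1, "test": 10, "release": 100}

def total_fix_cost(defects, review_catch_rate, test_catch_rate):
    caught_review = defects * review_catch_rate          # found by formal reviews
    caught_test = (defects - caught_review) * test_catch_rate  # found in testing
    escaped = defects - caught_review - caught_test      # found after release
    return (caught_review * COST["review"]
            + caught_test * COST["test"]
            + escaped * COST["release"])

# 100 defects: with formal reviews (75% caught early) vs. without.
print("with reviews:   ", total_fix_cost(100, 0.75, 0.9))   # 550.0
print("without reviews:", total_fix_cost(100, 0.0, 0.9))    # 1900.0
```

Even with this crude model, skipping reviews more than triples the total repair bill, which is the case the slide is making.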
    15. 16. Adjusting the Software Development Life Cycle
       [Chart: percent effort by phase (REQ, PD, DD, CUT, IT, O&M) for OO versus conventional development, with expected gains in cost, schedule, product quality, ease of management, customer loyalty, confidence among developers and capitalized reuse.]
    16. 17. Questions People Ask About Testing
       - Should testing instill guilt?
       - Is testing really destructive?
       - Why aren’t there methods of eliminating bugs at the time they are being injected?
       - Why isn’t a successful test one that inputs data correctly, gets all the right answers and outputs them correctly?
       - Why do test cases need to be designed?
       - Why can’t you test most programs completely?
       - Why can’t the software developer simply take responsibility for finding/fixing all his/her errors?
       - How many testers does it take to change a light bulb?
    17. 18. Testing Objectives
       - Glenford J. Myers (IBM, 1979)
         1. Testing is a process of executing a program with the intent of finding an error
         2. A good test case is one that has a high probability of finding an as yet undiscovered error
         3. A successful test is one that uncovers an as yet undiscovered error
       - Test Information Flow
       [Diagram: the software configuration and test configuration feed Testing; test results are compared with expected results in Evaluation, yielding errors and error-rate data; errors drive Debug (corrections), and error-rate data drives a Reliability Model (predicted reliability).]
    18. 19. Testing Principles
       - All tests should be traceable to customer requirements
       - Tests should be planned long before testing begins
       - The Pareto principle applies to software testing
         - 80% of all errors will likely be in 20% of the modules
       - Testing should begin “in the small” and progress toward testing “in the large”
       - Exhaustive testing is not possible
       - Testing should be conducted by an independent third party
         - “The test team should always know how many errors still remain in the code - for when the manager comes by.”
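The 80/20 claim is easy to check against your own defect database. A minimal sketch, using made-up per-module defect counts rather than real data:

```python
# Hypothetical defect counts per module (illustrative, not from the talk).
defects = {
    "parser": 42, "scheduler": 35, "ui": 6, "logging": 4,
    "config": 3, "io": 3, "auth": 2, "report": 2, "cache": 2, "util": 1,
}

total = sum(defects.values())
# Rank modules by defect count, descending, and take the top 20%.
ranked = sorted(defects.values(), reverse=True)
top_20pct = ranked[: max(1, len(ranked) // 5)]
share = sum(top_20pct) / total
print(f"Top 20% of modules hold {share:.0%} of defects")  # 77%
```

A skew like this is what tells you where to concentrate test effort.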
    19. 20. About Software Testing
       - Software testing - a multi-step strategy
         - series of test case design methods that help ensure effective error detection
         - testing is not a safety net or a replacement for software quality
           - SQA typically conducts independent audits of a product’s compliance to standards
           - SQA assures that the test team follows the policies & procedures
         - testing is not 100% effective in uncovering defects in software
           - a very good testing effort will ship no more than 2.5 defects/KSLOC (thousand source lines of code)
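The 2.5 defects/KSLOC bar is a simple ratio, sketched here with hypothetical numbers:

```python
def shipped_defect_density(defects_found_post_release: int, ksloc: float) -> float:
    """Defects per thousand source lines of code (KSLOC)."""
    return defects_found_post_release / ksloc

# e.g. 120 field-reported defects in a 60 KSLOC product (illustrative figures):
density = shipped_defect_density(120, 60.0)
print(f"{density:.1f} defects/KSLOC")          # 2.0
print("meets 2.5/KSLOC bar:", density <= 2.5)  # True
```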
    20. 21. Test Planning
       Glenford Myers’ Sandwich Method: Requirements Inspections, Software Architecture Inspections, Design Inspections, Code Inspections, Test Inspections
       Test Specification Outline:
       - I. Scope of testing
       - II. Test Plan
         - A. Test phases and builds
         - B. Schedule
         - C. Overhead software
         - D. Environment and resources
       - III. Test procedure n
         - A. Order of integration
           - 1. Purpose
           - 2. Modules to be tested
         - B. Unit tests for modules in build
           - 1. Description of tests for module m
           - 2. Overhead software description
           - 3. Expected results
         - C. Test environment
           - 1. Special tools or techniques
           - 2. Scaffolding software description
         - D. Test case data
         - E. Expected results for build n
       - IV. Actual test results
       - V. References
       - VI. Appendices
    21. 22. Sample Review Checklist
       - Have major test phases properly been identified and sequenced?
       - Has traceability to validation criteria/requirements been established as part of software requirements analysis?
       - Are major functions demonstrated early?
       - Is the test plan consistent with the overall project plan?
       - Has a test schedule been explicitly defined?
       - Are test resources and tools identified and available?
       - Has a test record-keeping mechanism been established?
       - Have test drivers and stubs been identified, and has work to develop them been scheduled?
       - Has stress testing for software been specified?
    22. 23. Sample Review Checklist
       - Software Test Procedure
         - Have both white box and black box tests been specified?
         - Have all the independent logic paths been tested?
         - Have test cases been identified and listed with their expected results?
         - Is error handling to be tested?
         - Are boundary values to be tested?
         - Are timing and performance to be tested?
         - Has an acceptable variation from the expected results been specified?
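The boundary-value item on the checklist amounts to testing at, just below, and just above each boundary. A minimal sketch; `clamp_percent` is a hypothetical function under test, not something from the talk:

```python
# Hypothetical function under test: clamp a percentage to [0, 100].
def clamp_percent(x):
    if x < 0:
        return 0
    if x > 100:
        return 100
    return x

# Boundary-value cases: on, just inside, and just outside each boundary,
# each paired with its expected result.
cases = [(-1, 0), (0, 0), (1, 1), (99, 99), (100, 100), (101, 100)]
for value, expected in cases:
    assert clamp_percent(value) == expected, (value, expected)
print("all boundary cases pass")
```

Off-by-one defects cluster at exactly these points, which is why the checklist calls them out.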
    23. 24. Bach’s 1994 Testability Checklist
       - Testability - how easily a computer program can be tested - depends on the following:
         - Operability
         - Observability
         - Controllability
         - Decomposability
         - Simplicity
         - Stability
         - Understandability
    24. 25. Some of the Art of Test Planning
       - Test Execution Time Estimation
       - How Long to Get the Bugs Out?
       - Realism, not Optimism
       - Rules of Thumb
       - Lightweight Test Plan
    25. 26. How Can We Predict Test Execution Time?
       - When will you be done executing the tests?
       - Part of the answer is when you’ll have run all the planned tests once
         - Total estimated test time (sum for all planned tests)
         - Total person-hours of tester time available per week
         - Time spent testing by each tester
       - The other part of the answer is when you’ll have found the important bugs and confirmed the fixes
       - Estimate total bugs to find, bug find rate, bug fix rate, and closure period (time from find to close) for bugs
         - Historical data really helps
         - Formal defect removal models are even more accurate
       Copyright © 1996-2002 Rex Black
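The first part of the estimate reduces to one division. A sketch with illustrative figures (400 planned test hours, 3 testers, 30 productive hours each per week are assumptions, not numbers from the talk):

```python
def weeks_to_finish(total_test_hours, testers, productive_hours_per_tester_week):
    """First-pass schedule: time to run every planned test once."""
    return total_test_hours / (testers * productive_hours_per_tester_week)

# 400 hours of planned tests, 3 testers, ~30 productive hours/week each:
print(f"{weeks_to_finish(400, 3, 30):.1f} weeks")  # 4.4 weeks
```

Note that this only answers "when will the tests have run once"; the bug find/fix/closure model on the later slides answers the harder second half.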
    26. 27. How Long to Run the Tests?
       - It depends a lot on how you run tests
         - Scripted vs. exploratory
         - Regression testing strategy (repeat tests or just run once?)
       - What I often do
         - Plan for consistent test cycles (tests run per test release) and passes (running each test once)
         - Realize that buggy deliverables and uninstallable builds slow test execution…and plan accordingly
         - Try to understand the amount of confirmation testing, as a large number of bugs leads to lots of confirmation testing
         - Check the number of cycles against the bug prediction
       - Testers spend less than 100% of their time testing
         - E-mail, meetings, reviewing bugs and tests, etc.
         - I plan six hours of testing in a nine-to-ten-hour day (contractor)
         - Four hours of testing in an eight-hour day is common (employee)
       Copyright © 1996-2002 Rex Black
    27. 28. Copyright © 1996-2002 Rex Black
    28. 29. How Long to Get the Bugs Out?
       - Using historical data and a test cycle model, you can develop a simple model for bugs
       - Predicting the total number of bugs is hard
         - Subject to many factors and nonlinear effects
         - Common techniques: bugs per developer-day, KSLOC, or function point (FP)
         - Project size is usually the easiest factor to estimate
       - Bug injection rates, fix rates, and closure periods are beyond the test team’s control
         - You can predict, but document assumptions
         - The more historical data you have for similar (size, staff, technology, etc.) projects, the more accurate and confident you can be
       Copyright © 1996-2002 Rex Black
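A deliberately simple version of such a model, under loudly stated assumptions: a KSLOC-based total, a constant weekly find rate, and a fixed closure period. All numbers are invented for illustration:

```python
# Toy bug-schedule model. Assumptions: bugs scale linearly with KSLOC,
# bugs are found at a constant weekly rate, and every bug takes a fixed
# closure period from find to confirmed fix.
def bug_schedule(ksloc, bugs_per_ksloc, find_rate_per_week, closure_weeks):
    total_bugs = ksloc * bugs_per_ksloc
    weeks_to_find = total_bugs / find_rate_per_week
    # The last bug found still needs the closure period to be fixed and confirmed.
    return total_bugs, weeks_to_find + closure_weeks

total, weeks = bug_schedule(ksloc=50, bugs_per_ksloc=6,
                            find_rate_per_week=40, closure_weeks=2)
print(f"predict {total:.0f} bugs, ~{weeks:.1f} weeks to find and close them")
```

Real find rates rise and fall over a cycle (hence the nonlinear effects noted above), so treat this as the zeroth-order model you refine with historical data.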
    29. 30. This chart was created in about one hour using historical data from a couple of projects and some simplified models of bug find/fix rates. The more data you have, the more accurate your model can be, and the better you can judge its accuracy. Copyright © 1996-2002 Rex Black
    30. 31. Objective: Realism, Not Optimism
       - Effort, dependencies, resources accurately forecast
       - Equal likelihood for each task to be early as late
       - Best-case and worst-case scenarios known
       - Plan allows for corrections to early estimates
       - Risks identified and mitigated, especially…
         - Gaps in skills
         - Risky technology
         - Logistical issues and other tight critical paths
         - Excessive optimism
       “[Optimism is the] false assumption that… all will go well, i.e., that each task will take only as long as it ‘ought’ to take.” --Fred Brooks, in The Mythical Man-Month, 1975.
       Copyright © 1996-2002 Rex Black
    31. 32. How About Those “Rules of Thumb”?
       - Various rules of thumb exist for effort, team size, etc., e.g., developer-to-tester ratios, Jones’ Estimating Software Costs
       - How accurate are these rules?
         - Jones’ test development rule: ~1 hr/case
         - Across four projects, I averaged 9 hrs./case: almost an order of magnitude difference
         - Imagine if I’d used Jones’ rule on the IA project
         - Even on my own projects, lots of variation
       - However, historical data from previous projects with similar teams, technologies, and techniques is useful for estimation
       - You can adjust data from dissimilar projects if you understand the influential factors
       Person-hours per test case developed varied widely across four projects. Factors included test precision, number of conditions per case, automation, custom tool development, test data needs, and testware re-use.
       Copyright © 1996-2002 Rex Black
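Calibrating your own hours-per-case figure from history, as suggested above, is a couple of sums. The project data below is invented for illustration, not Black's actual numbers:

```python
# Calibrate a test-development rule of thumb from your own history
# rather than trusting an industry figure. Data is illustrative.
past_projects = [  # (test cases developed, person-hours spent)
    (120, 1000), (80, 760), (200, 1900), (50, 440),
]

hours_per_case = (sum(h for _, h in past_projects)
                  / sum(c for c, _ in past_projects))
print(f"historical average: {hours_per_case:.1f} hrs/case")

# Estimate a new project with ~150 planned cases:
print(f"new project estimate: {150 * hours_per_case:.0f} person-hours")
```

Segmenting the history by the factors the slide lists (precision, automation, re-use) gives a tighter figure than one pooled average.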
    32. 33. Refining the Estimate
       - For each key deliverable, look at size and complexity
         - Small, medium, large and simple, moderate, complex
         - Do you have data from past projects that can help you confirm the effort and time to develop deliverables of similar size and complexity?
       - Iterate until it works
         - Must fit the overall project schedule…
         - …but not be disconnected from reality
         - Check “bottom-up” (work-breakdown-structure) estimates against “top-down” rules of thumb, industry benchmarks, and models relating metrics like FP or KSLOC to test size
         - Estimates will change (by as much as 200%) as the project proceeds (knowledge is gained, requirements evolve)
       Copyright © 1996-2002 Rex Black
    33. 34. A Lightweight Test Plan Template
       - Overview
       - Bounds
         - Scope
         - Definitions
         - Setting
       - Quality Risks
       - Schedule of Milestones
       - Transitions
         - Entry Criteria
         - Continuation Criteria
         - Exit Criteria
       - Test Configurations and Environments
       - Test Development
       - Test Execution
         - Key Participants
         - Test Case and Bug Tracking
         - Bug Isolation and Classification
         - Release Management
         - Test Cycles
         - Test Hours
       - Risks and Contingencies
       - Change History
       - Referenced Documents
       - Frequently Asked Questions
       You can find this template, a case study using it, and three other test plan templates at Copyright © 1996-2002 Rex Black
    34. 35. How well did you test??? Questions???
    35. 36. Linda Shafer Bio: Linda Shafer has been working with the software industry since 1965, beginning with NASA in the early days of the space program. Her experience includes roles of programmer, designer, analyst, project leader, manager, and SQA/SQE. She has worked for large and small companies, including IBM, Control Data Corporation, Los Alamos National Laboratory, Computer Task Group, Sterling Information Group, and Motorola. She has also taught for and/or been in IT shops at The University of Houston, The University of Texas at Austin, The College of William and Mary, The Office of the Attorney General (Texas) and Motorola University. Ms. Shafer's publications include 25 refereed articles and three books. She currently works for the Software Quality Institute and co-authored a SQI Software Engineering Series book published by Prentice Hall in 2002: Quality Software Project Management. She is on the International Press Committee of the IEEE and an author in the Software Engineering Series books for the IEEE. Her MBA is from the University of New Mexico.
    36. 37. Don Shafer Bio: Don Shafer is a co-founder, corporate director and Chief Technology Officer of Athens Group, Inc. Incorporated in June 1998, Athens Group is an employee-owned consulting firm, integrating technology strategy and software solutions. Prior to Athens Group, Shafer led groups developing and marketing hardware and software products for Motorola, AMD and Crystal Semiconductor. He was responsible for managing a $129 million-a-year PC product group that produced award-winning audio components. From the development of low-level software drivers in yet-to-be-released Microsoft operating systems to the selection and monitoring of Taiwan semiconductor fabrication facilities, Shafer has led key product and process efforts. In the past three years he has led Athens engineers in developing industry-standard semiconductor fab equipment software interfaces, definition of 300mm equipment integration tools, advanced process control state machine data collectors and embedded system control software agents. His latest patents are on joint work done with Agilent Technologies in state-based machine control. He earned a BS degree from the USAF Academy and an MBA from the University of Denver. Shafer’s work experience includes positions held at Boeing and Los Alamos National Laboratories. He is currently an adjunct professor in graduate software engineering at Southwest Texas. With two other colleagues in 2002, he wrote Quality Software Project Management for Prentice Hall, now used in both industry and academia. Currently he is working on an SCM book for the IEEE Software Engineering Series.