Presentation Transcript

  • CS 160: Software Engineering, April 8 Class Meeting. Department of Computer Science, San Jose State University, Spring 2008. Instructor: Prof. Ron Mak. http://
  • Unofficial Field Trip?
    • Computer History Museum in Mountain View.
      • See
      • You provide your own transportation to and from the museum.
      • Completely voluntary, no credit.
    • Approximately 2 hours during a Saturday afternoon later this month or early next month.
      • Docent-led tour of various computer systems and artifacts from the 1940s through the 1990s.
        • ENIAC, Enigma, SAGE, IBM 360, Cray supercomputers, etc.
      • See, hear, and touch an actual IBM 1401 computer system (c. 1959-1965) that has been restored and is operational:
        • 800 cpm card reader
        • flashing console lights
        • spinning tape drives with vacuum columns
        • 600 lpm line printer
  • Software Reliability
    • Reliable software has a low probability of failure while operating for a specified period of time under specified operating conditions.
      • The specified time period and the specified operating conditions are part of the nonfunctional requirements.
      • For some applications, reliability may be more important than any other functional or nonfunctional requirement.
        • mission-critical applications
        • medical applications
  • Software Reliability (cont’d)
    • Reliable software is the result of good software quality assurance (SQA) throughout an application’s life cycle.
      • Design
        • good requirements elicitation
        • good object-oriented analysis and design
        • good architecture
      • Development
        • good management (e.g., source control, reasonable schedules)
        • good coding practices (e.g., design patterns)
        • good testing practices
      • Deployment
        • good preventive maintenance (e.g., training)
        • good monitoring
        • good failure analysis (when failures do occur)
  • Software Testing
    • Some key questions:
      • What is testing?
      • What is a successful test?
      • Who does testing?
      • When does testing occur?
      • What are the different types of testing?
      • What testing tools are available?
      • How do you know your tests covered everything?
      • When can you stop testing?
  • What is testing?
    • Testing is a systematic procedure to discover faults in software in order to prevent failure.
      • Failure: A deviation of the software’s behavior from its specified behavior (as per its requirements)
        • Can be minor to major (such as a crash)
      • Erroneous state: A state of the running software that will lead to a failure.
        • Example: low on memory
      • Fault: What caused the software to enter an erroneous state.
        • Also known as: defect, bug
        • Example: a memory leak
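    The fault / erroneous state / failure chain above can be seen in a small runnable sketch. Everything in it (the class name, the 1 KB cache entries, the request loop) is an illustrative assumption, built around the memory-leak example from the slide:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class LeakDemo {
        // FAULT (defect, bug): entries are added to the cache but never removed.
        static final List<byte[]> cache = new ArrayList<>();

        static void handleRequest() {
            cache.add(new byte[1024]);  // 1 KB retained per request -- the leak
        }

        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                handleRequest();
            }
            // ERRONEOUS STATE: the program now holds memory it no longer needs.
            System.out.println("leaked entries: " + cache.size());
            // FAILURE comes later, when the erroneous state finally exhausts
            // the heap and the program crashes with an OutOfMemoryError.
        }
    }
    ```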
  • What is a successful test?
    • Testing is the opposite of coding.
      • Coding: Create software and try to get it to work.
      • Testing: Break the software and demonstrate that it doesn’t work.
    • Testing and coding require different mindsets.
      • It can be very difficult for developers to test their own code.
      • If you wrote the code, you psychologically want it to work and not see it break.
      • Since you know how the code should be used, you may not think to try using it in ways other than you intended.
    • A successful test is one that finds bugs.
  • Who does testing?
    • Developers
      • As difficult as it may be, you must test your own code.
      • Test each other’s code (peer testing)
        • test interfaces
    • Users
    • Testers
      • Members of the Testing or Quality Assurance (QA) department.
      • Software engineers who did not write the code.
      • Manual writers and trainers who create examples and demos.
  • When does testing occur?
    • Recall the old Waterfall Model:
      [Diagram: Requirements → Design → Implementation → Testing]
    • In the new Agile methodology, testing is part of each and every iteration.
      • Therefore, testing occurs throughout development, not just at the end.
  • What are the different types of testing?
    • Usability testing
      • Developers and users test the user interface.
    • Unit testing
      • Developers test an individual “unit”.
      • Unit: A small set of related components, such as the components that implement a use case.
    • Integration testing
      • Developers test how their units work together with other units.
    • System testing
      • Test how an entire system works.
      • Includes performance testing and stress testing.
  • Alpha Testing vs. Beta Testing
    • Alpha testing
      • Usability and system testing of a nearly complete application in the development environment.
    • Beta testing
      • Usability and system testing of a complete or nearly complete application in the user’s environment.
    • Today, it is not uncommon to release an application to the public for beta testing.
      • New releases of web-based applications are put “into beta” by software companies.
  • Usability Testing: The German Newton
    • The Apple Newton was an early PDA developed in the early 1990s.
      • Besides the English version, there was a German and a Japanese version.
      • It was too far ahead of its time and was killed by Steve Jobs after he returned to Apple.
    • A key feature of the Newton was handwriting recognition: «Handschrifterkennung»
      • The Newton recognized successive words in a sentence using an algorithm that tracked the movement of the stylus.
    • Cultural differences between Americans and Germans
      • The way the Germans write their letters (e.g., the letter h).
      • The way the Germans write words (e.g., «Geschäftsreise», “business trip”).
      • Philosophy about personal handwriting.
  • Usability Testing: NASA Mars Rover Mission
    • Usability testing for the Collaborative Information Portal (CIP) software system.
    • NASA’s Jet Propulsion Laboratory (JPL) conducted several Operational Readiness Tests (ORTs) before the actual rovers landed.
      • A working rover was inside a large “sandbox” in a separate warehouse building at JPL.
      • Mission control personnel communicated with and operated the sandbox rover as if it were on Mars.
        • A built-in delay simulated the signal travel time between Earth and Mars.
      • Mission scientists accessed and analyzed the data and images downloaded by the sandbox rover.
    • All mission software, including CIP, was intensely tested in this simulated environment.
  • Unit Testing
    • Each unit test focuses on components created by a developer.
      • Should be done by the developer before checking in the code.
      • Easier to find and fix bugs when there are fewer components.
      • Bottom-up testing.
    • Developers create test cases to do unit tests.
  • Unit Testing: Test Cases
    • Test case: A set of input values for the unit and a corresponding set of expected output values.
      • Use case -> test case
      • Do unit testing on the participating objects of the use case.
    • A test case can be run within a testing framework (AKA test bed, test harness) consisting of a test driver and one or more test stubs.
      • Test driver: Simulates the part of the system that calls the unit. Calls the unit and passes in the input values.
      • Test stub: Simulates the components that the unit depends on.
        • When called by the unit, a test stub responds in a “reasonable” manner.
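    The driver and stub roles described above can be sketched in plain Java. Every name below (`TaxRateService`, `PriceCalculator`, the flat 10% tax rate) is an illustrative assumption, not something from the lecture:

    ```java
    // The dependency the unit calls -- in the test it is replaced by a stub.
    interface TaxRateService {
        double rateFor(String state);
    }

    // The unit under test.
    class PriceCalculator {
        private final TaxRateService rates;

        PriceCalculator(TaxRateService rates) { this.rates = rates; }

        double totalPrice(double base, String state) {
            return base * (1.0 + rates.rateFor(state));
        }
    }

    public class PriceCalculatorDriver {
        public static void main(String[] args) {
            // Test stub: when called by the unit, responds in a "reasonable"
            // manner (always 10%) without a real tax-rate service.
            TaxRateService stub = state -> 0.10;

            // Test driver: calls the unit, passes in the input values, and
            // compares the output against the expected value.
            PriceCalculator unit = new PriceCalculator(stub);
            double result = unit.totalPrice(100.0, "CA");
            if (Math.abs(result - 110.0) > 1e-9) {
                throw new AssertionError("expected 110.0, got " + result);
            }
            System.out.println("driver passed: " + result);
        }
    }
    ```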
  • Black Box Testing vs. White Box Testing
    • Black box testing
      • Deals only with the input/output behavior of the unit.
      • The internals of the unit are not considered.
    • White box testing
      • Tests the internal behavior of the unit.
        • execution paths
        • state transitions
  • Unit Testing: Equivalence Testing
    • Minimize the number of test cases by partitioning the possible inputs into equivalence classes.
      • Assumption: For each equivalence class, the unit behaves the same way for any input from the class.
      • Example: A calendar unit that takes a year and a month as input.
        • Year equivalence classes: leap years and non-leap years
        • Month equivalence classes: months with 30 days, months with 31 days, and February (28 or 29 days)
    • Black box test.
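    A minimal sketch of equivalence testing for a calendar unit like the one described. The `daysInMonth` method and the specific representative inputs are assumptions for illustration; the point is one test case per class:

    ```java
    public class EquivalenceDemo {
        // Hypothetical unit under test: days in a given month of a given year.
        static int daysInMonth(int year, int month) {
            switch (month) {
                case 4: case 6: case 9: case 11:
                    return 30;
                case 2:
                    boolean leap = (year % 4 == 0 && year % 100 != 0)
                                   || year % 400 == 0;
                    return leap ? 29 : 28;
                default:
                    return 31;
            }
        }

        static void check(boolean ok, String what) {
            if (!ok) throw new AssertionError(what);
        }

        public static void main(String[] args) {
            // One representative input per equivalence class -- by assumption,
            // the unit behaves the same for any other member of the class.
            check(daysInMonth(2023, 1) == 31, "31-day months");
            check(daysInMonth(2023, 4) == 30, "30-day months");
            check(daysInMonth(2023, 2) == 28, "February, non-leap year");
            check(daysInMonth(2024, 2) == 29, "February, leap year");
            System.out.println("all equivalence-class cases passed");
        }
    }
    ```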
  • Unit Testing: Boundary Testing
    • Test the boundaries (the extremes, or “edges”) of the equivalence classes.
    • Calendar component examples:
      • Month 0 and month 13.
      • Months with the incorrect number of days.
      • Invalid leap years (e.g., not a multiple of 4).
    • Black box test.
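    The boundary cases from the calendar example can be sketched as follows. The validating `daysInMonth` method and its exception behavior are illustrative assumptions:

    ```java
    public class BoundaryDemo {
        // Hypothetical unit under test, now with input validation.
        static int daysInMonth(int year, int month) {
            if (month < 1 || month > 12) {
                throw new IllegalArgumentException("month out of range: " + month);
            }
            switch (month) {
                case 4: case 6: case 9: case 11:
                    return 30;
                case 2:
                    return ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0)
                           ? 29 : 28;
                default:
                    return 31;
            }
        }

        static boolean rejects(int year, int month) {
            try { daysInMonth(year, month); return false; }
            catch (IllegalArgumentException e) { return true; }
        }

        public static void main(String[] args) {
            // Edges of the valid month range:
            if (daysInMonth(2023, 1) != 31) throw new AssertionError("month 1");
            if (daysInMonth(2023, 12) != 31) throw new AssertionError("month 12");
            // Just outside the edges -- month 0 and month 13 must be rejected:
            if (!rejects(2023, 0) || !rejects(2023, 13)) {
                throw new AssertionError("out-of-range months accepted");
            }
            // Edges of the leap-year rule: 1900 is not a leap year, 2000 is.
            if (daysInMonth(1900, 2) != 28) throw new AssertionError("1900");
            if (daysInMonth(2000, 2) != 29) throw new AssertionError("2000");
            System.out.println("boundary cases passed");
        }
    }
    ```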
  • Unit Testing: Monte Carlo Testing
    • Monte Carlo (home of a famous casino in Monaco) refers to the random generation of input values.
      • Used when the input values for a unit are too numerous and cannot be partitioned into equivalence classes.
      • The test driver generates random values from the input domain and passes them to the unit.
      • Any erroneous output narrows your search for the bug.
    • Black box test.
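    A sketch of a Monte Carlo test driver. The unit under test (a hypothetical `clamp` method) and the input domain are assumptions; the fixed random seed keeps any failure reproducible:

    ```java
    import java.util.Random;

    public class MonteCarloDemo {
        // Hypothetical unit under test: clamp a value into [lo, hi].
        static double clamp(double v, double lo, double hi) {
            return Math.max(lo, Math.min(hi, v));
        }

        public static void main(String[] args) {
            Random rng = new Random(42);  // fixed seed -> reproducible failures
            for (int i = 0; i < 10_000; i++) {
                // The driver draws a random value from the input domain...
                double v = rng.nextDouble() * 200 - 100;   // [-100, 100)
                // ...passes it to the unit, and checks the output property.
                double out = clamp(v, -10, 10);
                if (out < -10 || out > 10) {
                    throw new AssertionError("out of range for input " + v);
                }
            }
            System.out.println("10000 random cases passed");
        }
    }
    ```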
  • Unit Testing: Path Testing
    • Analyze the execution paths within the unit.
      • Draw a flow graph.
    • Generate input values to force the unit to follow each and every path at least once.
    • White box test.
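    For example, a unit with two independent decisions has up to four execution paths, so four inputs cover them all. The `classify` method below is a hypothetical unit invented for illustration:

    ```java
    public class PathDemo {
        // Hypothetical unit with two decisions -> four execution paths.
        static String classify(int age, boolean member) {
            String price;
            if (age < 18) price = "child";        // branch A
            else          price = "adult";        // branch B
            if (member)   price += "+discount";   // branch C (else branch D)
            return price;
        }

        public static void main(String[] args) {
            // One input per path, so every path executes at least once:
            System.out.println(classify(10, true));   // A then C
            System.out.println(classify(10, false));  // A then D
            System.out.println(classify(30, true));   // B then C
            System.out.println(classify(30, false));  // B then D
        }
    }
    ```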
  • Unit Testing: State Transition Testing
    • Similar to path testing.
    • Analyze the states that the unit can be in.
      • Draw a UML statechart diagram.
    • Generate input data that forces the unit to transition into each and every state at least once.
    • White box test.
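    A sketch of state transition testing against a tiny turnstile-style state machine (the machine and its events are hypothetical). The test data drives the unit into every state at least once:

    ```java
    public class StateDemo {
        enum State { LOCKED, UNLOCKED }

        // Hypothetical unit under test:
        // LOCKED --coin--> UNLOCKED, UNLOCKED --push--> LOCKED.
        static State next(State s, String event) {
            if (s == State.LOCKED && event.equals("coin"))   return State.UNLOCKED;
            if (s == State.UNLOCKED && event.equals("push")) return State.LOCKED;
            return s;  // every other event leaves the state unchanged
        }

        public static void main(String[] args) {
            State s = State.LOCKED;

            s = next(s, "coin");   // input chosen to reach UNLOCKED
            if (s != State.UNLOCKED) throw new AssertionError("expected UNLOCKED");

            s = next(s, "push");   // input chosen to return to LOCKED
            if (s != State.LOCKED) throw new AssertionError("expected LOCKED");

            System.out.println("every state visited at least once");
        }
    }
    ```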
  • Unit Testing Tool: JUnit
    • JUnit: A unit testing framework for Java components.
      • A member of the xUnit family of testing frameworks for various programming languages.
    • Open source: Download from
    • JUnit demo.
  • JUnit Demo
    • Class to test:
    package cs160.util;

    public class Calculator {

        public double add(double first, double second) {
            return first + second;
        }

        public double subtract(double first, double second) {
            return second - first;  // oops!
        }

        public double multiply(double first, double second) {
            return first * second;
        }
    }
  • JUnit Demo (cont’d)
    • Unit test cases:
    package cs160.test;

    import cs160.util.Calculator;
    import junit.framework.TestCase;

    public class CalculatorTester extends TestCase {

        public void testAdd() {
            Calculator calculator = new Calculator();
            double result = calculator.add(40, 30);
            assertEquals(70.0, result, 0.0);
        }

        public void testSubtract() {
            Calculator calculator = new Calculator();
            double result = calculator.subtract(40, 30);
            assertEquals(10.0, result, 0.0);
        }
    }
  • Unit Testing Tool: Clover
    • Clover: A unit testing tool that tells you what parts of your code you have actually executed (“covered”).
      • Unfortunately, not open source.
      • Download a 30-day trial from
    • Clover demo.
  • Integration Testing
    • After your code has passed all the unit tests, you must do integration testing to see how your unit works with other units.
      • The other units may be written by you or by other developers.
    • Create an Ant script that builds the part of the application that contains:
      • Your unit
      • The units that call your unit
      • The units that your unit calls
  • Regression Tests
    • Sad fact of life: When you fix one bug, you might very well introduce one or more new bugs.
      • Tests that have passed previously may now show failures – the system has regressed.
    • Regression tests: Re-run earlier integration tests to make sure they still pass.
      • Do this every time you make a change.
      • Create Ant scripts to run the tests.
      • Schedule regression tests to run periodically automatically, e.g., overnight every night.
  • System Testing
    • Test the entire application within its operating environment.
      • Installation testing
      • Functional testing
      • Performance testing
    • Part of beta testing.
    • Acceptance testing.
      • Signoff by stakeholders, clients, and customers.
  • When can you stop testing?
    • “Testing can prove the presence of bugs, but never their absence.” (Edsger W. Dijkstra)
    • Stop testing when:
      • All the regression tests pass.
      • Testing finds only “acceptable” bugs.
        • put on the Known Bugs list
        • have workarounds
      • You’ve run out of time.
  • Next class meeting…
    • Logging and monitoring
    • Stress testing
    • Test-Driven Development (TDD)
    • Failure analysis
    • Creating test plans