Testing Smalltalk Applications
Presentation Transcript

Testing Smalltalk Applications
Michael Silverstein
SilverMark, Inc.
www.silvermark.com
msilverstein@silvermark.com
919 363-3946

Contents
• Introduction
• Why Test?
• Economics of Testing
• When and Where to Apply Testing Effort
• Testing in the Development Life Cycle
• Testing Strategies
• Test Case Design
• Testing Products/Tools
• Summary
• References and Further Reading

About SilverMark, Inc.
• SilverMark, Inc. was formed in 1996 by former members of IBM's VisualAge Smalltalk development team
• SilverMark provides test and development tools and services for object-oriented systems
• Current products:
  – Test Mentor - automated testing tool for VisualAge and VisualWorks Smalltalk
  – Connection Detangler - advanced instrumentation, navigation, and debugging for VisualAge Smalltalk Connections
• Services:
  – Test automation training and consulting
  – Automated test development outsourcing
  – Smalltalk development consulting

Goals of Session
• Understand the role of testing
• Understand the economics of testing and test automation
• Understand where testing fits within the development life cycle
• Understand techniques for designing and creating test cases
• Gain an awareness of available tools

"Success is 99 percent failure" - Soichiro Honda, founder, Honda Motor

Target Audience
• Smalltalk developers
• Application testers
• Project leaders/managers
• Anyone whose performance is measured in any way by the quality of the code delivered

Why Test?

Potential for Errors in Smalltalk Applications
• Polymorphism and dynamic binding expand the possible interactions between objects
• Weak typing permits sloppy use of interfaces
• A subclass may violate subtleties in the superclass protocol or state model
• A subclass may neglect to override a superclass method (#copy)
• Forgotten initialization code
  – #new, or lazy initialization

...Potential for Errors in Smalltalk Applications
• Instances fail to be released (memory leak)
• Errors introduced during packaging
• Intrinsic library subtleties (#add:, #do:)
• Iterative, incremental processes imply frequent revisions and low stability

...Potential for Errors in Smalltalk Applications
• Collaboration and distribution of behavior increase dependencies between objects
  – Coupling between objects implies that a fix for one problem may cause a new problem, sometimes in an unanticipated area. This most often happens when fixes are applied late in the development cycle, when there is little time to think about the broader implications of changes or to perform extensive regression testing.

...Potential for Errors in Smalltalk Applications
• Vendor-specific pitfalls
  – Event not signaled or double-signaled (VA)
  – Incorrect connection ordering (VA)
  – Feature name changed or attribute deleted (VA)
  – Implicit use of Undeclared (VW)

Problem statement
• Even with the best people working with the best development tools and languages (like Smalltalk), it is impossible to produce defect-free code the first time.
• Most enterprises rely on software for nearly every aspect of their critical operations.
• The only way to ensure that software defects do not put your business at risk is to test the software before it is deployed.

Would you use…?
• A bank or insurance company that uses untested software?
• An airline that uses untested flight software?
• A missile guidance system with untested software?
• A heart-lung machine with untested software?

Is the software you deliver any less critical to your business or your customers' business?

Do you want your customers spending time…?
– discovering a defect
  • determining if the defect is a user error or a true defect
– reporting the defect, speaking with customer support
– working less effectively, or not at all, due to loss of utility
– working around the defect
– installing updates
– telling coworkers/friends/family about problems in your product
– looking for alternate vendors

Some people never learn
"Seven out of ten new software systems fail in some way upon deployment", according to the Standish Group International, a consulting firm in Dennis, Mass.
(Forbes Magazine, "Shake those bugs out", May 18, 1998)

Economics of Testing

Cost of Software Defects
[Chart: defect cost rises sharply from development to distribution]

The Cost of Late Testing
• Each defect has cost:
  – Cost to report the defect
  – Cost to fix the defect
  – Cost to distribute the fix
• The cost of each defect increases:
  – the later it appears in the development cycle
  – the greater the number of deployed copies

Defect cost example*

Phase              # Defects   Repair hours   Avg. time/defect
Unit test              31          249                8
Integration test      160          797                5
System test           122         1313               11
Beta test              22          440               20
Post-release           12          206               17

*From [5.5]

Manual Testing
How long do you think it would take to use all of the important parts of your product just once?
Incorporation of automated testing has reduced testing time from 30-35% of a development life cycle to 23-28%*
*From metrics gathered in [5.5]

Manual Testing
• Advantages:
  – Low up-front labor cost
  – No tools required
• Disadvantages:
  – High repeat labor cost
    • Execution
    • Results logging and analysis
  – Lower overall test coverage
  – Repeatability difficult
    • Quality of testing decreases over time
  – Skills transfer difficult for low-level tests

Automated Testing
• Static
  – Tools analyze code, searching for inconsistencies and violations of standards
• Dynamic
  – Automated test cases exercise the system to force defects to manifest themselves as invalid/error states

The Cost of Testing
All tests are work products that have cost:

    test cost = test scripting time
              + test development time
              + (number of executions * (test execution time + results logging and analysis time))

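As a worked instance of this formula (all numbers hypothetical): a test that takes 2 hours to script and 8 hours to develop, and that runs 20 times at 0.1 hours per execution plus 0.1 hours of logging and analysis, costs 14 hours in total:

    | scripting development executions execution analysis |
    scripting := 2. development := 8.
    executions := 20. execution := 0.1. analysis := 0.1.
    scripting + development + (executions * (execution + analysis))
        "14 hours; each further run adds only 0.2 hours"
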
Manual vs. Automated Testing
[Chart: up-front test development cost vs. repeat cost]
• Manual testing
  – Low up-front cost
  – High repeat cost
• Automated testing
  – Move testing cost to the front, where it is not repeated
  – Use tools to automate test creation, lowering cost
  – Increased ROI
• Test early and test often to catch defects when they occur and are least costly to fix.

Minimizing Testing Cost Through Reuse
• The testing tool/methodology should support creation and execution of small, reusable test components
• This makes tests easy to maintain as the system under test evolves
  – Create small, specific tests once, then reuse them
  – Unplug the old test, plug in the new one

The Automated Testing Big Picture
• Lower defect cost through earlier discovery
• Spend time extending test coverage rather than repeating tedious manual tests
• ROI increases as test executions increase
• Reduce the skill level required to run tests
• Fixes are easier to verify because the scenarios used to detect the problems are automated
• Streamline multi-platform testing
• Gather metrics automatically

When and Where to Apply Testing Effort

Finding Balance
The art in testing is finding a balance between discovering defects and minimizing cost:
– Risk management
– Broad vs. narrow tests
– Depth of testing
– Manual vs. automated tests
– Frequency of testing

Goals of Testing
• Find defects - any test that never finds a defect is a waste of time
  – It must be looking for defects in the wrong places
  – Test design is critical to effective testing
• Tip: assume there are defects, and focus on uncovering them rather than proving the program correct

Test Effectiveness
• The ultimate measure of the effectiveness of testing is whether defects get fixed
• Links in the testing chain:
  – Find defects
  – Communicate defects
  – Fix defects
  – Verify fixes

Depth of Testing
• No testing
• Ad hoc - hit and miss, as time allows
• Sanity check - the bare minimum
• Everything - neither practical nor possible
• Risk-weighted testing - focus on areas that are most critical to customers and have a high likelihood of defects. This is where your risk is greatest

When to Test
• Extreme Programming mantra: "Continuous integration, relentless testing"
  – Create tests before code; the code is done when the tests pass
• Testing additions/changes to legacy systems with no tests
  – Test the new/changed areas
• Questions to ask when adding or changing a service:
  – When is the service ready to be tested?
  – What dependencies on other services does the service have?
    • When are the required services available?
  – Assuming required services are not immediately available, what is the risk of waiting to test until they are available vs. stubbing them?

Where to Test - Looking for Clues
• Most defects are unimaginative
• Use various sources to find clues to where the defects may be hiding. Clues come from:
  – Specification
  – Design
  – Code
  – Problem history
  – Tester experience
  – Complexity metrics

Identifying Likely Areas of Defects
– Complex code
– High degree of concurrency
  • Shared objects
  • Resilience to interruption
– Many interdependencies/high coupling
– Varying behavior in the presence of many states
– Novelty of algorithms
– System/hardware interfaces
– Code maturity
– Novel use of an existing framework
– Developer experience

Testing in the Development Life Cycle

Testing Approach - Black Box
• Functional focus
• Specification driven; examines the system in terms of the services it provides via its interface
• Tests often based on use cases
  – Stimulus and response
• Each use case naturally requires the service that it tests to be fully implemented
• Works well for UI tests, but is applicable to any subsystem with a public interface

Testing Approach - White/Glass Box
• Structural focus - unit and cluster testing
• Based on knowledge of the program's design/implementation, with the goal of exercising paths and states within the object
• Performed in parallel with code development
  – Can be started the first day of development

Testing is a Continuous Process
• Most Smalltalk programmers already do this with DoIts in the Transcript or workspaces
• Typically throwaway code
  – Better to put it into an automated test under version control that can be loaded and executed by anyone (see the sketch below)

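For example, a throwaway workspace DoIt like this one can be captured as a class-side test method versioned in ENVY. The OrderedQueue class and its protocol are hypothetical; the pattern of promoting the DoIt into a method is the point:

    "Throwaway workspace DoIt:"
    | q |
    q := OrderedQueue new.
    q add: 1; add: 2.
    Transcript showCr: q removeFirst printString. "expect 1"

    "The same check as a versioned test anyone can load and run:"
    OrderedQueue class>>#testFifoOrder
        | q |
        q := self new.
        q add: 1; add: 2.
        q removeFirst = 1 ifFalse: [self error: 'FIFO order violated']
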
Testing During Design
• Include test cases as part of the design
• Scenarios:
  – Precondition
  – Postcondition
  – Inputs (minimal set, boundaries)
  – Expected outputs
  – State transitions
  – Constraints
  – Look for failures of omission
• Review the test design along with the system design

Design for Testability
• To test a system, you must be able to control its input (controllability) and observe its output or state (observability)
• A highly fault-tolerant system inhibits testability by reducing the ability to observe defects, forcing you to test at a very low level
• Highly concurrent systems inhibit testability by reducing controllability through severe timing constraints
• Classes may have dependencies on supporting classes, which limits their testability while those supporting classes are incomplete
  – Use wavefront integration contracts to manage dependencies

Observability
• The ability to access all objects under test, and their states
• Design decisions can strongly affect subsequent ease of testing
• Example:
  – Two tightly-coupled collaborating classes that together participate in an asynchronous transaction
  – Problem: how to observe when the transaction has completed?

Observability Example - Asynchronous Transactions
[Cartoon: polling ("Is it here yet? Is it here yet? Is it here yet?") vs. notification ("DING!")]

Observability Example - Asynchronous Transactions
[Diagram: test code applies a stimulus to the client, the client sends a request to the server, the server processes it, and the response arrives asynchronously]
How does the test code know when the asynchronous transaction completes?
• Poll application state
• Hook in to a generic event notification mechanism; this requires a conscious design decision

Observability Example - Overzealous Exception Handling
• There is a temptation to run large blocks of code under the aegis of an exception handler. This protects the system from unanticipated exceptions, but it also obscures defects, in the form of exceptions, during testing.
• This puts more responsibility on observing the outputs and state
  – Defects become evident indirectly, making debugging difficult
  – Defects may not be caught in testing if tests are incomplete

...Observability Example - Overzealous Exception Handling

    [ something ] when: ExAll do: [ :sig | sig exitWith: nil ]

Replace with code that promotes observation of the exception during development and testing:

    [ something ] when: ExAll do: [ :sig |
        System isRuntime
            ifTrue: [ sig exitWith: nil ]
            ifFalse: [ sig handlesByDefault ]]

...Observability
• Smalltalk provides mechanisms that promote observability
  – With ENVY, class extensions remove the need to expose private data for the benefit of testing (see the sketch below)
    • Better than #instVarAt:
  – The windowing system often gives addressability to windows
    • In VA Smalltalk: CwShell allShells
    • In VW Smalltalk: ScheduledControllers scheduledControllers
  – Behavior>>#allInstances (not recommended)

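A minimal sketch of the class-extension idea (the Account class and its balanceCache variable are hypothetical): the accessor lives in the test application as an ENVY class extension, so tests reach private state without #instVarAt:, and the method is never packaged with the product:

    Account>>#testingBalanceCache
        "Private - defined as a class extension in the test application, for tests only"
        ^balanceCache
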
Controllability
• Strongly coupled collaborating classes are more difficult to test by themselves because they require their collaborators
  – Double-dispatching
  – 'Private' protocols between two classes
  – Where possible, design generic interfaces
• Revisit the design - perhaps encapsulation should be restored through refactoring

Testing During Code Development - Goals
• Verify correct initial instance state
• Verify class/instance state transitions
• Verify correct operation at the class/instance interface level
• Verify correct cluster operation
• Force class states and sequences of stimuli that would otherwise be difficult to reach within a broader context

Testing During Coding
• Use ENVY-based code reviews
• Incrementally create and execute unit and cluster tests along with the code under test
• Code isn't complete and available to be integrated with the build until it has passed all tests
• Broad tests make incomplete or unstable code difficult to test, because a single failure can create a bottleneck to further testing
  – The challenge is to find a balance between specific, high-granularity tests and broad, low-granularity tests. Specific tests provide more control but require more development effort.

ENVY-based Code Reviews
1) The class owner versions classes and applications as "[MMDDYY] To Be Reviewed"
2) Reviewers read, understand, and modify the code while the class owner continues development
3) Existing tests help confirm that both the reviewer and the class owner understand the intent of the code, because they serve as usage examples
4) Opportunity to verify that models are accurately mapped to the implementation
5) An excellent way for mentors to impart experience
6) The class owner browses differences to find modifications and suggestions
7) The class owner discusses suggestions with reviewers, either formally or informally
8) The class owner integrates changes, as appropriate

Integration Testing - Goals
• Ensure that the build is stable enough to be distributed to developers and system testers
• Perform class and cluster testing within the context of broader system states
• Verify interactions between subsystems

Regression Testing
• Ensure the system has not regressed between builds
• Test fixes
  – Does the fix solve the problem for all cases?
    • It may fix the problem for some but not all inputs
  – Does it break something else?
    • Unanticipated coupling, especially if the person who fixes the problem is not the original developer
• Regression tests accumulate due to:
  – tests for new function
  – tests for specific defects discovered

...Regression Testing
• Refactoring
  – Software entropy: structure deteriorates as function and fixes are added, often leading to rework to reinforce the underpinnings
  – Sometimes a fix causes more changes than one might expect. Refactoring to enable a fix often introduces lightly-thought-out architectural changes late in the development cycle.
• Optimizations - cleaner, faster code, but possibly incorrect under some conditions

System testing
• Goal: verify that the system fully expresses the requirements
• Functional testing from outside the system through its primary port boundary
  – Usually through the user interface
  – Best accomplished via capture/replay testing tools
• Tests based on use cases
• Performed by persons who may be unfamiliar with Smalltalk

...System Testing
• The ideal is to execute one complete suite of tests for each build and then have time to add more tests, but without automation this may not be practical
• System tester skills:
  – Understand the requirements and operation of the system under test
  – Understand how to read a walkback stack, to be able to discriminate between similar manifestations of the same problem
  – Understand the overall architecture

Testing Flow During Development
[Diagram: Unit Test, then Integration Test, then System Test]
• Unit test
  – Developers test at the class and cluster level
  – A class edition is not available for release until it has passed its tests (as agreed upon in design)
• Integration test
  – Test when released components are integrated (before or after)
  – Tests are harvested from the unit and system levels
• System test
  – Domain experts test in the packaged image

Testing Before Release
• Goal: stabilize existing function
• The threshold for releasing code becomes exponentially higher as the delivery milestone is approached
• Run all tests. Look for new or overlooked problems
• Review deferred problems
• Test the process
[Timeline: general development, then code lockdown, then release]

Testing After Deployment
"Software products are never released - they escape!" [11]

Testing After Deployment
• Keep looking for problems for a while, especially if the schedule was tight
  – Better to have a fix available for customers when they discover the problem
• Make sure the support group is aware of any new problems
  – Preparation for the next or maintenance release
  – Clean up test cases for the next round

Testing After Deployment - Runtime Diagnostics
• Tests delivered as part of the packaged image
• Triggered by:
  – A command-line option or other 'back door'
  – Continuous operation
  – Error/exception
• Driving forces:
  – Invariants
  – Assertions
• Active: execute tests that exercise states and transitions
• Passive: verify expected states and values

Setting up Automatic Diagnostics
• In VAST, add a new leaf to the EpRuntimeStartup hierarchy
• Override #reportError:resumable:startBP: with a version that kicks off diagnostics (nonrecursively)
• Return your subclass in the #startUpClassName method of the packaging spec

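A rough sketch of how these pieces might fit together. Only EpRuntimeStartup, #reportError:resumable:startBP:, and #startUpClassName are named on the slide; the subclass, the RunningDiagnostics guard variable, and the MyDiagnostics entry point are hypothetical:

    EpRuntimeStartup subclass: #DiagnosticStartup

    DiagnosticStartup>>#reportError: aString resumable: aBoolean startBP: startBP
        "Kick off diagnostics once, guarding against recursion if the
         diagnostics themselves raise an error, then fall back to the default report"
        RunningDiagnostics == true ifFalse: [
            RunningDiagnostics := true.
            [MyDiagnostics run] ensure: [RunningDiagnostics := false]].
        ^super reportError: aString resumable: aBoolean startBP: startBP

    "In the packaging spec, point the packager at the new startup class
     (return form assumed):"
    startUpClassName
        ^'DiagnosticStartup'
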
Runtime Diagnostics - Goals
• Provide clues for debugging deployed defects
• Catch defects in vivo
• Early warning of system degradation
  – Unreleased objects
  – Rounding/tolerance errors

Development Practices
• Always load config maps into a clean image
  – Defects may be obscured by artifacts of development
  – Possible dependency on global state
  – Tests #loaded code
  – Tests the subapplication lineup
  – Tests application prerequisites
  – A centralized build/integration test promotes this

Development Practices - Packaging
• Usually saved until the end - a bad idea
• Unimplemented selector problems
• Application prerequisite problems
• Class variable initialization problems
• Problems from using development-image-specific protocols

...Development Practices - Packaging
• Packaging optimizations reduce footprint. Begin testing them early so problems may be found early
• Packaging specs
  – Specific rules for excluding or including instances, classes, methods, instance variables, and pool values, and for initializing class variables, globals, etc. in the packaged image
  – Very error-prone

Packager Diagnostics
• There are lots of 'clever' tricks that will trip up the runtime packager
  – Smalltalk at:
  – #perform:
• The packager reference browser shows:
  – Methods not implemented in the packaged image
  – Globals that do not exist in the packaged image

Defect Management Process
• Locating defects is half the battle. The payoff is in getting those defects fixed
• Components:
  – Reporting/communicating
  – Problem ownership
  – Problem database
  – Fix integration and deployment

Defect Management - Inhibitors
• Poor problem descriptions
  – Too little detail
  – Too many unrelated details
  – Problem not repeatable
  – Multiple problems in the same report
  – Same problem reported multiple times
  – Emotionally charged reports
  – Loss of defect reporter credibility

Problem Reporting Information
• Goal: communicate the problem effectively and give the developer a head start on determining the cause, without spending too much time doing something less efficiently than a developer might
  – The tester has the advantage of having the problem right in front of them
  – The developer has the advantage of understanding the code
• Code level (config map or application version)
• Operating system, if multi-platform and the problem is not likely to be found on all platforms
• Stack, if there was an exception

...Problem Reporting Information
• Steps to recreate the problem
  – Precondition
  – Stimulus
  – Expected vs. actual response
• Repeatable: yes/no
  – If not repeatable, give the best details possible about the circumstances surrounding the occurrences
  – If the developer can't repeat a timing-related problem, the description may help locate the weakness
• Owner - the person responsible for fixing the problem
• Other - problem number, date, time, etc.

Test Metrics
• Goal: measure the effectiveness of testing and the likelihood of unfound defects
• Some useful metrics:
  – Number of methods
  – Test coverage
  – Tests executed
    • Pass/fail
    • Stability
  – Complexity metrics
[Charts: % coverage (approaching 100% with all tests) vs. program size; methods, # tests, and defects found per test plotted over test cycles]

Coverage Metrics - Method vs. Path
• Method coverage
  – A tool for showing where more testing is needed, rather than where testing is complete (glass half empty vs. half full)
• Path coverage
  – Does 100% path coverage mean you're done?
  – Consider:

        ((a > 1) & (b = 0)) ifTrue: [ x := x / a ].
        ((a = 2) | (x > 1)) ifTrue: [ x := x + 1 ].

  – For values of a=2, b=0, and x=3, path coverage is 100%
  – But...
    • If '&' should be '|', the result would be the same
    • If 'x > 1' should be 'x > 0', the result would be the same
    • Logic errors may remain undetected even with 100% code path coverage

...Coverage Metrics
• Don't be fooled into a false sense of security by high values
• Coverage metrics say nothing about state
  – To truly reach 100% test coverage you need 100% path coverage over 100% of the relevant precondition states
[Diagram: call paths among methods A, B, C, and D]

Testing Strategies

Model vs. View Testing
• Goal: find a balance between low-level unit testing (high granularity) and high-level view testing (low granularity)
• Forces:
  – Models are usually available for testing before views
  – Earlier testing catches problems sooner
  – It is usually easier/faster to create view tests (with the right UI record/playback tool)
  – View tests usually cover more code per unit of time invested in creating tests
  – Not testing views ignores an entire subsystem

Model/View Testing Rules of Thumb
• If the view will appear significantly later than the model, test the model. Note: you still need to test the model-view connection if there is risk in it
• If other components in addition to a single view depend on a model, test the model
• If a high degree of control is required, test from the model
• If the model and view are highly dependent, test from the view; they can be considered the same subsystem
• If time is of the essence (end of project), test from the view
• For deep object verification, implement the verification at the model level. Note: this may also be used to support view testing

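A minimal sketch of model-level deep verification (the Person class and its attributes are hypothetical, echoing the #randomNew example later in the deck): the model compares itself attribute by attribute, and view tests can reuse the same method after driving the UI:

    Person>>#isEquivalentTo: aPerson
        "Deep verification for testing - compare domain attributes, not identity.
         Assumes StreetAddress implements #isEquivalentTo: the same way."
        ^name = aPerson name
            and: [birthDate = aPerson birthDate
                and: [address isEquivalentTo: aPerson address]]
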
Testing Client/Server Applications
• Testing the server
  – Test in the server image
    • Test the server's public interface
    • Replace the transport layer with test cases that drive transactions and verify their responses
      – Good if you can assume the transport layer is already well tested
  – Test in the client image
    • Test from client-side use cases that are known to use server use cases
    • Test with simulated transactions from server use cases
• Testing the client
  – Business as usual, but you may need to coordinate asynchronous server transactions

Object Testing Approaches
• Types of objects
  – Passive - no change in state, or little change in behavior based on state
    • 'Struct' objects
      – Little to test except for their contents - low priority
    • Objects that delegate
      – Test to ensure that all delegation paths perform correctly
    • Transformational objects
      – Test with equivalence values and their expected responses
  – Active - state-dependent behavior
    • Finite state machines
    • State-dependent transformational objects
      – Test with equivalence values and their responses under states that are not equivalent
  – Composite
    • Cluster testing approach
    • Test from the object interface
    • Ensure correct message propagation interactions

Testing Finite State Machines
• Goals:
  – Find missing states
  – Find extra states
  – Find missing or wrong actions
  – Find invalid transitions
  – Ensure the developer's assumptions about equivalent states are true
    • States are equivalent if every sequence of inputs starting from one state produces exactly the same sequence of outputs when started from the other state
• Narrow events down to a sample from each equivalence set to drive input events
• Test all transitions under equivalence sets

Testing Independent Finite State Machines
• Goal: ensure that transitions and actions are independent of the state of other FSMs
  – Problems are not usually located during class-level testing, because tests are performed on individual instances
  – Problems are often timing-related
  – Test by exercising each transition in the presence of each other FSM state
    • The variations may get overwhelming

Example FSM
[State transition diagram: Data Requestor. From Ready, "send request / increment ID" leads to Receiving (do/wait); from Receiving, "reply received / wake up" leads to Received (do/check ID); from Received, "[ID matches] / notify requestor" and "[ID mismatch] / raise exception"]

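A sketch of a transition test for this FSM. The DataRequestor protocol used here (#state, #sendRequest, #replyReceived:, #currentId) is hypothetical, as is the assumption that a matching reply returns the machine to Ready; the point is to drive one event from each equivalence set and verify the resulting state:

    | requestor |
    requestor := DataRequestor new.
    requestor state == #ready
        ifFalse: [self error: 'wrong initial state'].
    requestor sendRequest.
    requestor state == #receiving
        ifFalse: [self error: 'send request did not transition to Receiving'].
    requestor replyReceived: requestor currentId. "reply with a matching ID"
    requestor state == #ready
        ifFalse: [self error: 'matching reply did not notify and return to Ready']
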
Testing Asynchronous Server Transactions
• Problem: need to wait for the response to a stimulus
• Poll for the response
  – Check the state of the system for the response
    • Results must be readily accessible
  – Time out if the response is not fast enough
• Advantage: easy to set up
• Disadvantages:
  – Requires visibility of the end state
  – The reference may be indirect
  – Polling loads the processor

Polling for an Asynchronous Response

    checkCondition: conditionBlock interval: intervalInMilliseconds timeout: timeoutInMilliseconds
        "Simple wait for a condition to be met over a specified time, checking at a given interval.
         conditionBlock: a zero-argument block that tests the condition and returns a Boolean.
         intervalInMilliseconds: approximate interval to wait between executions of conditionBlock.
         timeoutInMilliseconds: approximate total time to wait for the condition to be met."
        | startTime conditionMet nextCheck |
        startTime := Time millisecondClockValue.
        conditionMet := conditionBlock value.
        [conditionMet or: [(Time millisecondClockValue - startTime) > timeoutInMilliseconds]]
            whileFalse: [
                nextCheck := Time millisecondClockValue + intervalInMilliseconds.
                [Time millisecondClockValue < nextCheck]
                    whileTrue: [CwAppContext default readAndDispatch; sleep].
                conditionMet := conditionBlock value = true].
        ^conditionMet

...Testing Asynchronous Server Transactions
• Hook in as a notification dependent
  – Block on a semaphore with a timeout
  – Advantages:
    • Easier to collect timing
    • Probably easier to access the returned result
  – Disadvantages:
    • Requires adding/removing the dependency
    • Must guard against deadlock

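A minimal sketch of the notification-based approach. The #when:do:/#removeActionsFor: event registration selectors and the #transactionCompleted event are hypothetical, as is #applyStimulusTo:; a forked watchdog process signals the same semaphore on timeout, so the wait cannot deadlock on a reply that never comes:

    | done completed |
    done := Semaphore new.
    completed := false.
    "Hook in as a dependent before applying the stimulus"
    server when: #transactionCompleted do: [completed := true. done signal].
    self applyStimulusTo: server.
    "Watchdog: wake the waiter even if no notification ever arrives"
    [(Delay forSeconds: 30) wait. done signal] fork.
    done wait.
    "Remove the dependency so it does not leak into later tests"
    server removeActionsFor: #transactionCompleted.
    ^completed
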
Testing Concurrent Processes
• Degrees of concurrency - consider whether a process is likely to yield
  – A fork does not necessarily imply a process interruption, but you have to think about when it will be dispatched
  – UI interactions
  – Asynchronous external calls
• Behavior is very timing-dependent and subject to race conditions
• The act of measuring alters the timing of what is being measured (Heisenberg's uncertainty principle)
• Recommended approach:
  – Desk check
    • Find many problems by playing 'what if?'
    • Find 'danger' areas and scenarios
  – Try the independent-FSM approach
  – Vary the load

Testing Exceptions
• Are the correct exceptions raised under the expected circumstances?
• Are all possible exceptions handled?
  – Beware of errors of omission
• Be especially wary of exceptions from other frameworks
• This is where knowledge of Smalltalk helps system testers
  – For example, knowledge of possible file system exceptions

Testing Exceptions

    execute: aBlock expectingException: anExceptionalEvent
        "Execute @aBlock and return whether @anExceptionalEvent was raised"
        | exceptionRaised |
        exceptionRaised := false.
        aBlock
            when: anExceptionalEvent
            do: [:sig |
                exceptionRaised := true.
                sig exitWith: nil].
        ^exceptionRaised

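Used from a test, this might read as follows (assuming VA Smalltalk, where ExError is the general error event and Object>>#error: raises it):

    (self execute: [self error: 'forced failure'] expectingException: ExError)
        ifFalse: [self error: 'expected exception was not raised']
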
Test Case Design

Test Case Requirements
• Create and hold the object(s) under test
• Maintain test state
• Apply stimuli to the object(s) under test
• Validate and log responses
• Sequence tests
• Trap and log exceptions
• Present results at the desired granularity

Self Tests
• Usually coded as class methods that define tests, with supporting methods on the instance side (see the sketch below)
  – Within ENVY extensions
• Advantages:
  – Tests have direct access to the internal state of the object under test
  – Tests are local to the class under test, so they are easy to find
  – Tests benefit from the inheritance structure of the class under test
• Disadvantages:
  – Cannot change the shape of the class under test to maintain test state
  – Difficult to give a class broader test behavior
  – Supporting test methods may overwhelm the methods within the class
  – Does not scale to cluster testing

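A sketch of the self-test style (the Queue class and all selectors are hypothetical): the class-side method drives a scenario, while a supporting instance-side method checks internal state directly:

    Queue class>>#testAddRemove
        | q |
        q := self new.
        q add: 42.
        q verifyStateForTest.
        q removeFirst = 42 ifFalse: [self error: 'wrong element removed']

    Queue>>#verifyStateForTest
        "Supporting test method on the instance side - direct access to private state"
        (self size between: 0 and: self maxSize)
            ifFalse: [self error: 'size out of bounds']
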
Test Case Reuse with 'Uses' vs. 'Extends' Use Cases
• Uses - a mechanism for partitioning use cases into reusable scenarios
• Extends - a mechanism for indicating variations on normal behavior
[Use case diagram: Operator performs "Enter Customer" and "Query Customer", each of which <<uses>> "Log On"; "Invalid Log On" <<extends>> "Log On"]
[Class diagram: Abstract Test Case with subclasses Enter Customer, Query Customer, Log On, and Invalid Log On]

Test Data
• Derive a minimal set of test data to use in tests
  – Identify variations
  – Identify constraints between variations
  – Create a test oracle to serve up the test data
• Can also provide mappings between input, stimulus, and output

Test Data
• Test data repository - the gold standard for test input and output, from:
  – A test database
  – A test oracle object
    • Values encoded in code under ENVY version control
    • APIs to return test data
  – UI capture
    • Values encoded with UI playback code under ENVY version control (assuming Smalltalk code generation)
  – Random test data
• A tool that could deliver expected outputs based on input values is impractical, because it would replace the code under test

Test Data Oracle
• Provides access to test data (see the sketch below)
  – Database, random generation, from methods, etc.
• Presents APIs to iterate over test data
  – For example, #allPersonsAndPoliciesDo:
  – Extract data item(s) and plug them into passed blocks
• May be used to tie together input :: stimulus :: expected response
[Diagram: the test oracle feeds input objects, test cases, and expected output objects to the test execution engine]

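A sketch of what might sit behind the #allPersonsAndPoliciesDo: API named above. The oracle's storage as a collection of associations, and the rater/expected-quote selectors in the usage example, are all assumptions:

    TestOracle>>#allPersonsAndPoliciesDo: aTwoArgumentBlock
        "Iterate over the stored person/policy pairs, plugging each into the passed block"
        self personPolicyPairs do: [:each |
            aTwoArgumentBlock value: each key value: each value]

    "A test ties input :: stimulus :: expected response together through the oracle:"
    oracle allPersonsAndPoliciesDo: [:person :policy |
        self verify: (rater quoteFor: person policy: policy)
            equals: (oracle expectedQuoteFor: person policy: policy)]
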
Tests Based on Stimulus Variations
• One test for each stimulus variation
• Each scenario represents the actions and verifications to perform on the given test data
• Expects the input and expected output as parameters

Test for all Variations
• Uses the oracle to iterate over the test data and the variations to perform
• Uses the test identifier given by the oracle as the name of the stimulus variation test to run

Equivalence Partitioning
• Problem: how to determine the minimum set of input values to use
• Solution: divide the input domain into sets of equivalent values from which test cases can be derived
• Equivalence values test the same thing
  – Variations within the same continuum
  – Similar input, actions, and output
  – If one catches a defect, the others should as well
• Benefits:
  – Prevents tests from being overwhelmed by variations
  – Ensures that the developer's assumptions about equivalency are correct

Equivalence Partitioning
• Equivalence sets
  – Smallest Float to largest Float
  – Lowest valid value to highest valid value

...Equivalence Partitioning
[UML state diagram: "Name reader - read and store space-delimited names". States: Searching for name; Reading name; Error (do/report error). Transitions: alpha char read / clear temp; alpha char read / concatenate to temp; separator char read / store temp; invalid char read leads to Error from either state]
• All alpha chars are equivalent [a..z, A..Z]
• All separators are equivalent [space, tab, cr, lf]
• Everything else is invalid

Boundary Values
• Complement equivalence values by concentrating on values at the edges
• max, max - 1, max + 1
• min, min + 1, min - 1
[Number line: min-1, min, min+1 ... max-1, max, max+1, with the equivalence values between min and max]

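A sketch combining boundary and equivalence values (the AgeValidator class and its 0..130 range are hypothetical): one representative from inside the equivalence set plus all six boundary values:

    | validator inputs expected |
    validator := AgeValidator min: 0 max: 130.
    inputs := #(-1 0 1 65 129 130 131). "min-1, min, min+1, representative, max-1, max, max+1"
    expected := #(false true true true true true false).
    inputs with: expected do: [:input :shouldBeValid |
        (validator isValid: input) = shouldBeValid
            ifFalse: [self error: 'boundary failure at ', input printString]]
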
Variation Groupings
• Data variations that are only valid in the presence of certain other variations
• Usually determined by business logic

Reference Object Capture
• Automate object test creation by capturing comparison reference objects:

    result := out convert: input.
    self referenceAt: 'conversion' compare: result

    referenceAt: name compare: anObject
        "Answer true and record anObject as the reference on first use;
         on later runs, compare against the recorded reference"
        ^(self expectedValuesAt: name ifAbsent: [
            self expectedValuesAt: name put: anObject.
            ^true ]) = anObject

Random Test Data
• Use random test data as a mechanism for generating a gold-standard sample set, not as a mechanism for varying test input from run to run
• Give classes responsibility for returning random instances of themselves:
  – #randomNew
  – #randomNew: someConstraint
• Either store random values persistently or always initialize the random generator with the same seed:

VisualWorks:

    Random fromGenerator: 1 seededWith: 1234

VisualAge (e.g., in a class-side instantiation method):

    | randomStream |
    randomStream := super new.
    randomStream
        seed2: 1234 asFloat;
        basicNext;
        shuffleArray: (Array new: randomStream shuffleSize).
    1 to: randomStream shuffleSize do: [:index |
        randomStream shuffleArray at: index put: randomStream basicNext].
    randomStream seed1: randomStream seed2.
    ^randomStream

Random Test Data - Example

    Person>>#randomNew
        ^self new
            name: PersonName randomNew;
            birthDate: BirthDate randomNew;
            address: StreetAddress randomNew;
            height: (SmallInteger randomNewMin: 36 max: 200);
            yourself

Generating Test Data Methods from Random Values

    generate: quantity randomPeopleIntoClass: aClass application: anApp
        "Generate random people into a test data oracle method"
        | sourceStream people |
        people := OrderedCollection new.
        quantity timesRepeat: [people add: Person randomNew].
        sourceStream := WriteStream on: (String new: 1024).
        sourceStream nextPutAll: 'randomPeople'; cr; nextPut: $^ .
        people storeOn: sourceStream.
        sourceStream nextPut: $. .
        ^(aClass
            compile: sourceStream contents
            notifying: Transcript
            ifNewAddTo: anApp
            categorizeIn: #('Test Data')) notNil

Class Invariant
• A set of assertions that express general consistency constraints that apply to every class instance as a whole, regardless of stimulus
• Tests that stimulate an object assume the invariant holds as both a precondition and a postcondition
• The invariant for a class must be a superset of the invariant for its superclass
  – Invariants down the hierarchy are ANDed together

Class Invariant - Examples

    Queue>>#invariant
        "Ensure that size is within bounds"
        (self size < 0 or: [self size > self maxSize])
            ifTrue: [self error: 'Invariant violation']

    SavingsAccount>>#invariant
        "Ensure the savings account conforms to its invariants"
        ^(self customer invariant and: [(self balance >= 0) and: [ … ]])
            ifTrue: [true]
            ifFalse: [self error: 'Invariant violation']

Test Structuring
• Structure-motivated parallel hierarchy of classes
  – Best when the test is focused on scenarios for a single class (unit tests)
  – The goal is to mimic the hierarchy of the classes under test, following the same inheritance pattern of specialization and extension in the test cases as in the classes under test
• Function-motivated hierarchy of test classes
  – Use this when a test encompasses many objects and subclassing to follow the class-under-test hierarchy is not meaningful
  – The structure of the hierarchy reflects a functional perspective
• Reuse-motivated composition through componentry
  – Largely used to create broader sequences of more specific tests
  – Usually applied to integration tests

Parallel Test Architecture
• A hierarchy of test classes (fixtures) that roughly parallels that of the classes under test
[Diagram: Class A with subclasses Class B and Class C; in parallel, Class A Test Case with subclasses Class B Test Case and Class C Test Case]

...Parallel Test Architecture
• Advantages:
  – Not tied to the class under test (separation of responsibilities)
    • Can scale to clusters
  – Not completely tied to the hierarchy under test
  – Test classes are reusable components
  – Not constrained to the exact hierarchy
• Disadvantages:
  – Still need to extend the class under test to access private state
  – Need to be more mindful of structural changes in the class under test
  – The number of classes in the image increases

Example Functional Test Class Hierarchy
• Problem: how to test functional variations that have common and specific components
• Motivating example: an auto policy system with specific logic/behavior/interface for each of 50 US states
• Solution: create a test hierarchy rooted in an abstract class that defines tests that are implemented at increasing levels of specificity down the hierarchy
[Class diagram: "Abstract Auto Policy tests" defines the scenarios Create new policy, Add new vehicle, and Remove vehicle as self subclassResponsibility; "Common Auto Policy tests" implements them for behavior common to all US states; state-specific subclasses such as "Alaska Auto Policy tests" and "Alabama Auto Policy tests" execute the super implementation and then their own]

Cluster Tests
• The second level of integration. Like class testing, but focused on testing correct interactions among instances of the classes within the cluster
• Test from the cluster's public interface - treat it as a single service provider that may require interactions across multiple classes' public interfaces
• Use cases for the cluster are a subset of those of the classes within the cluster
• The parallel test architecture is still applicable, but more from the standpoint of exploiting reuse within test cases
• Create and hold the relevant instances from the test case instance. Apply stimuli to the cluster instances and check the response and postcondition state

...Cluster Tests
[Diagram: a test case exercising a cluster of collaborating objects through the cluster interface]

Testing Products/Tools

Whether to Test from Inside or Outside of the Image?
• There are quite a few testing tools on the market that operate on applications developed for various operating systems
• In independent tests, none of these tools have proven themselves strongly applicable to testing Smalltalk applications, because:
  – They cannot see domain (model) objects
  – They do not recognize emulated widgets
  – Their ability to gather metrics is limited
  – Their scripting languages are far from Smalltalk
    • They cannot take advantage of the power of objects

SmallCycle - Unity Software Systems
• Available from http://www.unity-software.com
• Problem tracking, workflow, and patch delivery system
• Packaged as a small stub in the deployed image
• Captures problems or requests for enhancements and forwards them to the development team via 3rd-party store-and-forward technology, such as Lotus Notes
• Problems are assigned to developers as tasks
• Fixes are delivered in the form of patches to the runtime image

SmallCycle Object Model
[Diagram: SmallCycle object model]

...SmallCycle Object Model
• Products contain 0 or more product releases.
• Product releases describe a particular code base that has been shipped into the field. Each product release contains a list of versioned VisualAge applications. Product releases are the target against which tasks are defined. Product releases can also have patches applied to them. Product releases can contain components, which are simply other product releases.
• Patches represent sets of code changes that are deployed into the field to correct problems with a particular product release. A patch is always relative to a product release. A patch contains one or more fixes.
• Each fix represents the correction of a problem associated with a particular product release. Each fix is associated with a task and corrects the problem(s) described in the task. Each fix also contains knowledge of the code that was changed relative to the product release against which the fix's associated task was reported.
• A task is a work transaction item associated with a product release. Tasks carry a variety of information and retain the history of changes made to them.
• Tasks are the main focus of the SmallCycle workflow, and also of SmallCycle users who are not application developers.

Static Testing Tools
• Advantages:
  – Require very little initial setup effort
  – An easy way to find some defects, or to point to where defects may be hiding
• Disadvantages:
  – Cannot find dynamic, behavioral problems
  – No concept of state
  – Results require interpretation

Useful Static Tests
• Missing #wasRemovedCode
• Duplicate pool dictionaries
  – Already declared by a superclass
• Extends a base class
  – Probably OK if the selector is prefixed
• Missing dependent method
  – e.g. #= without #hash (see the sketch below)
• Class does not override #subclassResponsibility
• Subclassing a base class
  – Potential for overriding private protocol

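The "#= without #hash" check catches a classic omission. A minimal sketch with a hypothetical Money value class: the static tool expects to see the pair together, because objects that are #= must answer equal #hash values to behave in Sets and Dictionaries:

    "Money>>#="
    = anObject
        ^anObject class == self class
            and: [anObject amount = amount and: [anObject currency = currency]]

    "Money>>#hash - the dependent method the static test looks for"
    hash
        ^amount hash bitXor: currency hash
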
...Useful Static Tests
• Code or temporaries that are:
  – unused
  – read before written
  – written but not read
  – not optimized
• Identical to an inherited method
• Missing #yourself in assignment expressions
• Missing primitive fail code
• Method not implemented in superclass
  – Message sent to superclass explicitly
• References outside the prerequisite chain
• References a development class

...Useful Static Tests
• References own class
  – Limits subclasses' ability to refine behavior
• Reimplements a system method
  – Like #basicAt:, #basicNew
• Sends a system method
• Message sent but not implemented
  – Within visible classes
  – Can use packager directives
• Specialized method does not call superclass
• Method overrides a superclass method that sends #shouldNotImplement

...Useful Static Tests
• Too many consecutive messages
  – Can indicate poor encapsulation
  – This type of structure might limit dynamic testability
• Unsent method
  – Obsolete
  – Prerequisite problems (dependents may be missing)
  – Missing behavior
• Missing arguments
  – Obsolete
  – Missing behavior

Code Complexity Metrics
• Class coupling
  – Measured by implementors of sent messages
• Class response
  – Measures complexity via the potential number of messages sent in response to a message send
• Cyclomatic complexity
  – Measures complexity via the potential for branching
• Depth of hierarchy
• Public/private ratio
  – A low number may indicate a complex class
  – A high number may indicate a high service rate
• Refined (specialized) methods
• Specialization index

Code Complexity Metrics (Lorenz Complexity)
• Complexity measurement based on weighted attributes of each method:
  – Number of platform function calls (default weight = 5)
  – Number of assignments (default weight = 5)
  – Number of binary expressions (default weight = 2)
  – Number of keyword expressions (default weight = 3)
  – Number of nested expressions (default weight = 0.5)
  – Number of primitive calls (default weight = 7)
  – Number of temporary variables (default weight = 0.5)
  – Number of unary expressions (default weight = 1)
  – Default acceptable values: [0...65]

Envy/QA (OTI)
• Suite of tools:
  – Code Critic
  – Code Metrics
  – Code Coverage
  – Code Publisher
  – Code Formatter

Envy/QA - Code Critic
• Includes many of the above static tests
• "Developers can use Code Critic to:
  – standardize coding styles and improve the consistency of code among team members
  – quickly identify and correct common bugs
  – focus on potential areas for detailed code inspections
• Project managers can use Code Critic to:
  – identify design or implementation dependencies
  – obtain a quick summary of the state of a component
• Release engineers can use Code Critic to:
  – gauge the overall quality of a component
  – identify potential packaging problems" [Envy/QA manual]

...Envy/QA - Code Critic
[Screenshot: Code Critic]

Envy/QA - Code Metrics
• Includes the above complexity metrics
• "Developers can use Code Metrics to:
  – isolate areas of the system that are highly coupled
  – identify and correct common problems
  – focus on potential areas for detailed code inspections
  – estimate the complexity of a component
• Project managers can use Code Metrics to:
  – determine whether the project is following the estimated effort
  – improve project estimation skills
  – check whether guidelines are followed consistently
• Release engineers can use Code Metrics to:
  – estimate component footprint
  – identify areas that need more testing" [Envy/QA manual]

...Envy/QA - Code Metrics
[Screenshot: Code Metrics]

...Envy/QA - Code Metrics
Includes the above metrics, as well as various other quantitative metrics: classes, extended classes, applications, subapplications, prerequisites, dependents, instance methods, class methods, component size, etc.

SQAT (MicroDoc)
• Many of the same features as Envy/QA Code Critic, but at a lower price than the full Envy/QA
• Some differences:
  – SQAT adds tests for VisualAge parts and connections
  – SQAT adds tests for direct references to classes, strings, and globals
  – Whereas Envy/QA warns when a specializing method does not call the superclass implementation, SQAT warns in both cases - when it does and when it does not, e.g.:

        initialize
            super initialize.
            <special stuff>

SQAT
[Screenshot: SQAT]

SQAT - VisualAge Tests
• Broken code hook
  – Selector not implemented
• Inconsistent action selector
  – Missing selector or wrong number of attributes
• Invalid subpart
  – Deleted or renamed
• Invisible attribute type
  – Attribute part class not in the prerequisite chain
• Missing attribute accessors
  – Get or set selector missing
• High number of connections
• High number of subparts

Refactoring Browser - SmallLint
• Free tool from Ralph Johnson and John Brandt
• Includes SmallLint, which embodies many of the above static Smalltalk tests and adds the following checks (among others):
  – Comparison precedence ordering
    • a | b = c or a | (b = c)
  – Uses True/False instead of true/false
  – Instance variable overridden by a temp variable

...Refactoring Browser - SmallLint
– Collection modified while iterating over it
  • Check for #remove: within #do:
– Possible three-element point
  • x@y + q@r
– Returns Boolean and non-Boolean
  • Check for a return from the result of ifTrue: or ifFalse:
– Method uses the result of an #add: message
– Non-block arguments passed to ifTrue:/ifFalse:

...Refactoring Browser - SmallLint
[Screenshot: SmallLint]

Dynamic Testing Tools
• Advantages:
  – Test systems as they execute
  – Can uncover state- and timing-dependent defects
• Disadvantages:
  – Test creation requires some effort
    • The effort varies depending on the tool and the type of testing
  – Test cases need to change with the code under test

Dynamic Testing Tools for Smalltalk
• The ideal:
  – Automates test creation
  – Scalable to large systems
  – Test cases are reusable components
  – Integrated with the development environment
  – Tests model and view objects
  – Tests exploit the same language and architectural features as the applications under test
  – Tests can be used by developers and testers with equal ease
  – Tests can be created and executed in the development or packaged runtime environments

Dynamic Testing Tools
• There are quite a few testing tools on the market that operate on applications developed for various operating systems
• In independent tests, none of the tools that operate from outside the Smalltalk image have proven themselves strongly applicable to testing Smalltalk applications, because:
  – They do not test domain objects
  – They do not recognize emulated widgets
  – Their ability to gather metrics is limited
  – Their scripting languages are far from Smalltalk
    • They cannot take advantage of the power of objects

S-Unit (Kent Beck)
• Described in "Simple Smalltalk Testing with Patterns" by Kent Beck
• Free download at ftp://ic.net/users/jeffries/TestingFramework/
• Basic object testing framework for developers (no UI)
• Patterns:
  – Fixture - a place to hold the instance of the class under test
  – Test case - embodies the stimuli to apply to the class under test
  – Check - verify the response of a test case (postcondition)
    • #should:, #shouldnt:, #should:raise:
  – Test suite - an aggregate of test cases
• An excellent next step for developers who are only testing within workspaces
• Gross test execution time is measured automatically

...S-Unit
[Class diagram: TestSuite (name, testCases; #named:, #addTestCase:, #run) aggregates many TestCases (selector; #setUp, #tearDown, #should:, #shouldnt:, #should:raise:, #run, #selector:); TestResult (startTime, stopTime, testName, failures, errors, testCases); MyTestCase (#setUp, #tearDown, #someTest)]

    ((TestSuite named: 'My Test')
        addTestCase: (MyTestCase selector: #someTest)) run

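A minimal concrete test case in this style (the Queue under test and its protocol are hypothetical; the framework selectors are those shown above):

    TestCase subclass: #QueueTestCase
        instanceVariableNames: 'queue'
        classVariableNames: ''
        poolDictionaries: ''

    QueueTestCase>>#setUp
        queue := Queue new

    QueueTestCase>>#testFifo
        queue add: 1; add: 2.
        self should: [queue removeFirst = 1].
        self shouldnt: [queue isEmpty].
        self should: [queue removeFirst = 2]

    "Run it:"
    ((TestSuite named: 'Queue tests')
        addTestCase: (QueueTestCase selector: #testFifo)) run
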
OTF - An Object Testing Framework
• Available from MCG Software at http://www.mcgsoft.com/
• Emphasis on testing the public interface of objects and the "sequential nature of testing"
• Vendor portable
• Some APIs for common UI components
  – Return widget, close window, click, double-click
• Proprietary scripting language with callouts to Smalltalk
• GUIs to create and execute tests and to view results as text
• OTF Plus+ adds planning and reporting
OTF - Components
(component diagram reprinted by permission, courtesy of MCG Software)
OTF - Terminology
• Portfolios and Suites are structural objects for the aggregation of Tests. Suites are the unit of reuse between test developers and test users. Each Suite should test one class or component and should be 'owned' by a single individual. Suites are designed to be shared between Portfolios and between test users and developers.
• Test Steps perform the lowest level of testing: sending a message to an object, often the Object Under Test (OUT), and using a Probe to compare the Result against a Target.
• Control Steps perform test flow-control functions such as conditional test execution and looping. Control flow can be moved between Tests and between Suites.
• Probes provide the mechanism to determine whether a Test Step passes or fails. The supplied Probes include identity, equality, kind of, print, nil, true, false, user decides, debug, exception, and timer.
  – New probes may be created by subclassing OTProbe and overriding a few methods
OTF - Sample Test
1. OrderedCollection new ...is kind of... OrderedCollection
2. PREV ...size... 0
3. PREV isEmpty ...is true
4. OrderedCollection new: 20 ...is kind of... OrderedCollection
5. TESTER toOUT
6. OUT size ...=... 0
7. OUT capacity ...=... 20
8. string1 := 'This is a String'
9. OUT add: string1 ...==... string1
10. OUT size ...=... 1
11. OUT at: 1 ...is kind of... String
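Read as plain Smalltalk, the steps above amount to roughly the following, where OUT is OTF's object under test and PREV is the result of the previous step; the translation is approximate and for orientation only, and each checking expression evaluates to true.

    | prev out string1 |
    prev := OrderedCollection new.                 "step 1"
    (prev isKindOf: OrderedCollection).            "step 1: true"
    prev size = 0.                                 "step 2: true"
    prev isEmpty.                                  "step 3: true"
    out := OrderedCollection new: 20.              "steps 4-5: becomes OUT"
    out size = 0.                                  "step 6: true"
    "step 7 compares capacity with 20; a #capacity selector is dialect-specific"
    string1 := 'This is a String'.                 "step 8"
    (out add: string1) == string1.                 "step 9: #add: answers its argument"
    out size = 1.                                  "step 10: true"
    (out at: 1) isKindOf: String                   "step 11: true"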
OTF - Portfolio/Suite Editor
(screenshot)

OTF - Test Editor
(screenshot)

OTF - Step Edit
(screenshot)
SilverMark's Test Mentor
• Available from http://www.silvermark.com
• For VisualAge Smalltalk (coming soon to VisualWorks)
• UI record and playback on all widgets
• Generates tests as Smalltalk code under ENVY version control
• Generates model tests from popular design tools
  – IBM's UML Designer (Rational Rose coming soon)
• Records and plays back Web Connection transactions
• Results stored in ENVY
• GUI-driven operation
• Method coverage metrics
• Fine-grained performance measurement
• Statistical results analysis
• Results persistence
Test Mentor - Terminology
• Tests are composed of a hierarchy of three elements:
  – Suite - a class that embodies a set of related usage scenarios of some element of the system under test. A suite is composed of scenarios
    • Scenario - a particular usage of, or path through, some element of the system under test. A scenario is composed of steps
      – Step - an action of arbitrary granularity upon some element of the system under test. A step may be implemented in a number of different ways depending on its required action
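Test Mentor's internals are proprietary, so the following is not its actual API; it is only a hypothetical composite-pattern sketch, with all class and variable names invented, of the suite/scenario/step hierarchy described above.

    MySuite>>run
        "A suite is composed of scenarios; run each and collect results."
        ^scenarios collect: [:scenario | scenario run]

    MyScenario>>run
        "A scenario is one usage path, composed of steps."
        ^steps collect: [:step | step runIn: self]

    MyScriptStep>>runIn: aScenario
        "A step performs one action of arbitrary granularity - here,
         evaluating a stored Smalltalk block against the scenario."
        ^block value: aScenario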
Test Mentor - Step types
• Class Method - execute a class self-test
• Instance Method - execute a method in the scenario's instance
• Collection of steps - execute a sequence of steps
• Conditional collection of steps - execute a sequence of steps on a condition
• File iteration - iterate a sequence of steps over a test data file
• Manual Intervention - prompt for manual execution and feedback
• Script - execute a Smalltalk test script
• Suite - execute a specified suite *
• Scenario - execute a specified scenario *
• User Interface - execute a UI playback step
• User Interface Recording - execute a sequence of UI playback steps
• User Interface Verification - check expected UI state
• Unspecified - placeholder
• Workspace - load, compile, and execute a workspace
• Web Connection transaction - play back and verify a recorded Web Connection transaction

* Starred items reference reusable components
Test Mentor - Editor
(screenshot)

Test Mentor - Results
(screenshot)
SilverMark's Test Mentor - QA Edition
• Similar to SilverMark's Test Mentor, but targeted toward QA specialists who work only in the packaged runtime image
  – Does not require a development image or a VisualAge license
  – Does not require knowledge of Smalltalk
• Simple scripting language
• Can share tests created with Test Mentor
  – Testers can incorporate low-level Test Mentor domain object suites without knowledge of the test implementation
  – Developers can run Test Mentor - QA Edition tests in the development image to reproduce reported problems
Test Mentor Integration
(diagram: Test Mentor in the VA Smalltalk development image shares test cases with Test Mentor QA Edition, which runs against the packaged image; related components shown include SmallCycle, Web Connection, ENVY/QA, UML Designer, and Rational Rose*)

* Coming soon
Summary
• The risks of little or no testing are high - especially for business-critical applications
• The benefits of testing outweigh the costs, especially when the costs are mitigated through foresight, planning, and applying the right tools
Cautionary Notes
• Testing is extremely easy to do poorly
• Make sure your testing efforts are applied to the areas where they will have the most payoff:
  – "…we had a test suite that could, with high reliability, discover nothing important about the software we were testing. … Automation is a great idea. To make it a good investment, as well, the secret is to think about testing first and automation second." [1]
• Be especially cautious when introducing a new process or formalism
  – Give testing the intellectual rigor (planning) that it requires
  – Set realistic goals
  – Manage expectations
References and Further Reading
[1] James Bach. Test Automation Snake Oil. http://www.stlabs.com/testnet/docs/snakeoil.htm, 1996.
[2] Kent Beck. Simple Smalltalk Testing with Patterns. ftp://ic.net/users/jeffries/TestingFramework/
[3] Boris Beizer. Software System Testing and Quality Assurance. Van Nostrand Reinhold, 1984.
[4] Boris Beizer. comp.software.testing Frequently Asked Questions. http://www.faqs.org/faqs/software-eng/testing-faq/
[5] Robert V. Binder. Design for Testability in Object-Oriented Systems. CACM, 37(9):87-101, September 1994.
[5.5] Nicole Bianco. A Business Case for QA and Testing. Software Development: 42-46, February 1999.
[6] CNET. 10 Great Bugs of History. http://www.cnet.com/Content/Features/Dlife/Bugs/ss05.html, 1997.
[7] Donald G. Firesmith. Testing in Object-Oriented versus Procedural Environments. Systems Development Management, Supplement 34-70-50, Auerbach Publications, New York, August/September 1995, 20 pages.
[8] Donald G. Firesmith. Pattern Language for Testing Object-Oriented Software. Object Magazine, 5(8):32-38, SIGS Publications, New York, January 1996.
[9] Bill Hetzel. Software Testing. QED Information Sciences, 1984.
[10] Paul C. Jorgensen and Carl Erickson. Object-Oriented Integration Testing. CACM, 37(9):30-38, September 1994.
[11] Cem Kaner, Jack Falk, and Hung Quoc Nguyen. Testing Computer Software. International Thomson Computer Press, 1993.
[12] Brian Marick. The Craft of Software Testing. Prentice Hall, 1995.
[13] John D. McGregor. Testing Object-Oriented Systems. Tutorial notes, Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), October 1994.
[14] John D. McGregor and Timothy Korson. Integrating Object-Oriented Testing and Development Processes. CACM, 37(9):59-77, September 1994.
[15] John D. McGregor and Douglas M. Dyer. Selecting Functional Test Cases for a Class. In Proceedings of the Eighth Annual Pacific Northwest Software Quality Conference, Pacific Agenda, Portland, Oregon, October 1993, pages 109-121.
[16] John D. McGregor and A. Kare. PACT: An Architecture for Object-Oriented Component Testing. In Proceedings of the Ninth International Software Quality Week, SR Institute, San Francisco, California, 22 May 1996.
[17] Gail C. Murphy, Paul Townsend, and Pok Sze Wong. Experiences with Cluster and Class Testing. CACM, 37(9):39-47, September 1994.
[18] Glenford Myers. The Art of Software Testing. John Wiley & Sons, 1979.
[19] Jeffery Payne, Roger Alexander, and Charles Hutchinson. Design-for-Testability for Object-Oriented Software. Object Magazine: 34-43, July 1997.
[20] Robert M. Poston. Automated Testing from Object Models. CACM, 37(9):48-58, September 1994.
[21] Suzanne Skublics, Edward J. Klimas, and David A. Thomas. Smalltalk with Style. Prentice Hall, 1996.
[22] Shel Siegel. Object-Oriented Software Testing: A Hierarchical Approach. John Wiley & Sons, 1996.
[23] S. Sridhar. Implementing Peer Code Reviews in Smalltalk. The Smalltalk Report, 1(9):3, July/August 1992.
[24] Deb Stacey. Software Testing Techniques. University of Guelph 27-320 Software Engineering lecture notes. http://hebb.cis.uoguelph.ca/~deb/27320/testing/testing.html, 1995.
Internet Resources
• comp.lang.smalltalk
• comp.software.testing
• comp.software.testing FAQ:
  – http://www.rstcorp.com/c.s.t.faq.html
  – http://www.faqs.org/faqs/software-eng/testing-faq/
  – ftp://rtfm.mit.edu/pub/usenet/news.answers/software-eng/testing-faq
• Extreme Programming roadmap:
  – http://c2.com/cgi/wiki?ExtremeProgrammingRoadmap
Vendor Contacts

Product       Vendor                   URL
ENVY/QA       OTI                      http://www.oti.com
OTF           MCG Software             http://www.mcgsoft.com
SQAT          MicroDoc                 http://www.microdoc.de
S-Unit        Kent Beck                ftp://ic.net/users/jeffries/TestingFramework/
Test Mentor   SilverMark               http://www.silvermark.com
SmallCycle    Unity Software Systems   http://www.unity-software.com