Testing and Mocking Object - The Art of Mocking.
The Art Of Mocking with EasyMock in Java.

  • Equivalence Partitioning: In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, while still covering maximum requirements.
  • Boundary Analysis: More application errors occur at the boundaries of the input domain. The 'boundary value analysis' testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.
  • Error Guessing: Error guessing comes with experience with the technology and the project; it is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: either when reading the functional documents or when you are testing and find an error that you have not documented.
  • Statement Coverage: Code coverage measured by executing the tests under a coverage tool (tools like Emma or Clover). This metric reports whether each executable statement is encountered.
  • Decision Coverage: This metric reports whether boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this metric includes coverage of switch-statement cases, exception handlers, and interrupt handlers.
  • Condition Coverage: Condition coverage reports the true or false outcome of each boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other. This metric is similar to decision coverage but has better sensitivity to the control flow.
  • Decision/Condition Coverage: A hybrid metric composed of the union of condition coverage and decision coverage. It is a structural coverage criterion requiring that each condition within a decision is shown by execution to independently and correctly affect the outcome of the decision.
  • Multiple Condition Coverage: Multiple condition coverage reports whether every possible combination of boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition. (A sketch contrasting these criteria follows.)
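  • A minimal sketch of what the structural coverage criteria demand for a single compound decision (the method and its names are hypothetical, added for illustration; not from the slides):

        // Hypothetical example: one decision built from two conditions.
        boolean canShip(boolean inStock, boolean paid) {
            if (inStock && paid) {   // the decision; inStock and paid are the conditions
                return true;
            }
            return false;
        }
        // Statement coverage: every statement executes, e.g. (true, true) plus any failing input.
        // Decision coverage: the if evaluates to both true and false,
        //   e.g. (true, true) and (false, true).
        // Condition coverage: each of inStock and paid takes both values,
        //   e.g. (true, false) and (false, true) -- the decision is false both times,
        //   so condition coverage alone does not guarantee decision coverage.
        // Multiple condition coverage: all four (inStock, paid) combinations.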
  • Why engage in Object Mocking?
    • The real object has behaviour that is hard to cause or is non-deterministic. For example, you need to make sure the object you are testing handles an exception thrown from a method invoked on another object. Sometimes it is impossible to force the exception to be thrown from the real object, but an exception can easily be thrown from a mock object.
    • The real object is difficult to set up. Often you must manage the state of an object before using it in a test. This can be difficult, if not impossible, to accomplish. Mock objects can make it much easier to set up state, especially when testing objects that need to work with a database or network.
    • The real object has (or is) a user interface. Interaction with a GUI can be especially troublesome without resorting to using something like the java.awt.Robot class. Interacting with a mock object, however, can be much simpler.
    • The test needs to query the object, but queries are not available. For example, you need to determine whether a "callback" method was invoked. Since the mock object verifies that each method was invoked as specified, we know that the "callback" was invoked.
    • If you are engaging in Test-First development, the real object may not exist. Using a mock object can aid in interface discovery; a mock object serves as a hypothesis of how an object should act.
  • // Dependency Inversion Principle - Bad example
    class Worker {
        public void work() {
            // ....working
        }
    }

    class Manager {
        Worker m_worker;

        public void setWorker(Worker w) {
            m_worker = w;
        }

        public void manage() {
            m_worker.work();
        }
    }

    class SuperWorker {
        public void work() {
            // .... working much more
        }
    }

    // Dependency Inversion Principle - Good example
    interface IWorker {
        public void work();
    }

    class Worker implements IWorker {
        public void work() {
            // ....working
        }
    }

    class SuperWorker implements IWorker {
        public void work() {
            // .... working much more
        }
    }

    class Manager {
        IWorker m_worker;

        public void setWorker(IWorker w) {
            m_worker = w;
        }

        public void manage() {
            m_worker.work();
        }
    }
  • So, do not call new A().b.method1() in your class X; delegate the responsibility of calling b to A itself. new A().callMethod1OfB() is enough, and inside callMethod1OfB() use the B object and call method1(), as sketched below.
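  • A minimal sketch of that delegation (the class names A and B follow the note above; the rest is illustrative, not from the slides):

        class B {
            void method1() { /* ... */ }
        }

        class A {
            private final B b = new B();

            // Delegate, so callers never navigate a.b.method1() themselves.
            void callMethod1OfB() {
                b.method1();
            }
        }

        class X {
            void use() {
                new A().callMethod1OfB(); // X talks only to its friend A
            }
        }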
  • The Single Responsibility Principle: In object-oriented programming, the single responsibility principle states that every object should have a single responsibility, and that all its services should be narrowly aligned with that responsibility.
  • The Open/Closed Principle: In object-oriented programming, the open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification"; that is, such an entity can allow its behaviour to be modified without altering its source code.
  • The Liskov Substitution Principle: Liskov's notion of "subtype" is based on the notion of substitutability; that is, if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program.
  • The Dependency Inversion Principle (DIP), sometimes regarded as a synonym for inversion of control: high-level modules should not depend upon low-level modules; both should depend upon abstractions. Abstractions should not depend upon details; details should depend upon abstractions.
  • The Interface Segregation Principle: The dependency of one class on another should depend on the smallest possible interface.

Testing and Mocking Object - The Art of Mocking. Presentation Transcript

  • Testing Overview and Mocking Objects Deepak Singhvi Feb, 04 2009
  • Goal of Presentation
    • Show how “Mocking Objects” can greatly improve unit testing by facilitating tests that are easier to write and less dependent upon objects outside the test domain
    • The ultimate goal - Quality
  • Agenda
    • Software Quality
    • Testability and the problems in it
    • V model
    • Terminologies
      • Error, Fault, Failure
    • Test Methodologies
      • Functional
      • Behavioral
    • Type of testing
      • UT, ST, IT, Alpha, Beta, etc
    • Mocking Objects - Why
    • Mocking using EasyMock and examples
    • Q & A
  • Software Quality (diagram): QUALITY rests on formal technical reviews, software engineering methods, standards and procedures, SQA, measurements, and testing.
  • Testability (IEEE 610.12): (1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. ...
  • Testability
    • Testability has two facets:
      • controllability and
      • observability.
    • To test a component we must be able to:
      • control the input (and internal state) and
      • observe its outputs .
    • However, there are many obstacles to controllability and observability:
      • A component under test is almost always embedded in another system.
      • A component is embedded in another system, and we want to test that enclosing system.
  • Why do testing?
    • Tests Keep you out of the (time hungry) debugger!
    • Tests Reduce Bugs in New Features
    • Tests Reduce Bugs in Existing Features
    • Tests Reduce the Cost of Change
    • Tests Improve Design
    • Tests Allow Refactoring
    • Tests Constrain Features
    • Testing Is Fun
    • Testing Forces You to Slow Down and Think
    • Testing Makes Development Faster
    • Tests Reduce Fear
  • Who Tests the Software?
    • The developer: understands the system, but will test "gently" and is driven by "delivery".
    • The independent tester: must learn about the system, but will attempt to break it and is driven by quality.
  • The V-model of development
  • Terminology
    • Error
      • Represents mistakes made by people
    • Fault
      • Is result of error. May be categorized as
        • Fault of Commission – we enter something into representation that is incorrect
        • Fault of Omission – Designer can make error of omission, the resulting fault is that something is missing that should have been present in the representation
    • Failure
      • Occurs when fault executes.
    • Incident
      • Behavior of fault. An incident is the symptom(s) associated with a failure that alerts user to the occurrence of a failure
    • Test case
      • Associated with program behavior. It carries set of input and list of expected output
    • Verification
      • Process of determining whether output of one phase of development conforms to its previous phase.
    • Validation
      • Process of determining whether a fully developed system conforms to its SRS document
  • A Testing Life Cycle (diagram): errors introduced during requirement specification, design, and coding become faults; during testing a fault executes and produces an incident, after which the fault is classified, isolated, and resolved with a fix.
  • Classification of Test
    • There are two levels of classification
      • One distinguishes at granularity level
        • Unit level
        • System level
        • Integration level
      • Other classification is based on methodologies
        • Black box (Functional) Testing
        • White box (Structural) Testing
        • Black-box and white-box are test design methods.
        • Black-box test design treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
        • White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box.
  • Relationship – program behaviors (diagram): specified (expected) behavior and programmed (observed) behavior overlap in the correct portion; behavior that is specified but not programmed is a fault of omission, and behavior that is programmed but not specified is a fault of commission.
  • Test methodologies
    • Functional (Black box) inspects specified behavior
      • Equivalence partitioning
      • Boundary analysis
      • Error guessing
    • Structural (White box) inspects programmed behavior
      • Statement coverage
      • Decision coverage
      • Condition coverage
      • Decision/condition coverage
      • Multiple condition coverage
  • When to use what
    • Few set of guidelines available
    • A logical approach could be
      • Prepare functional test cases as part of the specification. However, they can be used only after the unit and/or system is available.
      • Preparation of Structural test cases could be part of implementation/code phase.
      • Unit, Integration and System testing are performed in order.
  • More types of Testing
    • Unit Testing: The most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
    • Integration Testing: Testing of combined parts of an application to determine whether they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
    • Functional Testing: Black-box testing geared to the functional requirements of an application; this type of testing should be done by testers.
    • System Testing: Black-box testing based on the overall requirements specification; covers all combined parts of a system.
    • Sanity Testing: Typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
  • Still more types of testing
    • Performance Testing: This term is often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or test plans.
    • Usability Testing: Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
    • Installation/Uninstallation Testing: Testing of full, partial, or upgrade install/uninstall processes.
    • Security Testing: Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
    • Compatibility Testing: Testing how well software performs in a particular hardware/software/operating-system/network environment.
    • Ad-hoc Testing: Testing the application in a random manner.
    • Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
    • Beta Testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • Testing (diagram): static testing (walkthroughs, code reviews, inspections) vs. dynamic testing (white box, black box, unit testing).
  • Black Box (diagram): system testing, integration testing, UAT, alpha testing, beta testing, functional testing, usability testing, non-functional testing, performance testing.
  • Functional Testing (diagram): system testing, smoke testing, regression testing, sanity testing, ad-hoc testing, retesting, gorilla testing, negative testing.
  • Non-Functional Testing (diagram): system testing, load testing, volume testing, stress testing, performance testing.
  • Testing Steps
  • Art of Mocking Objects
  • What are the Problems of Software Testing?
    • Time is limited
    • Applications are complex
    • Requirements are fluid
  • Few Definitions
    • Test Fixture/ Test Class – A class with some unit tests in it.
    • Test (or Test Method) – a test implemented in a Test Fixture
    • Test Suite – A set of tests grouped together
    • Test Harness/ Runner – The tool that actually executes the tests.
  • Mock Objects
    • The objective of unit testing is to exercise just one method at a time.
    • But what happens when the method depends on other hard-to-control elements, such as:
      • network or
      • database
    • Method controllability is threatened by these hard-to-control elements.
  • Why engage in Object Mocking?
    • The real object has behavior that is hard to cause or is non-deterministic
    • The real object is difficult to set up
    • The real object is slow
    • The real object has (or is ) a UI
    • Test needs to query the object but queries are not available
    • Real objects do not exist
  • Mock Objects - Definition
    • Using mock objects we can get around these problems.
    • A mock object is a test pattern which provides a fake implementation of objects that are hard to control.
    • Its purpose is to simulate the real objects strictly for testing.
    • Mock objects have the same interface as the real objects.
  • The importance of IoC
    • Inversion of Control (aka DIP – Dependency Inversion Principle)
      • A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
      • B. Abstractions should not depend on details. Details should depend on abstractions.
    OOD principle
  • Inversion Of Control
    • Inversion of Control, Dependency Injection, the Hollywood Principle, etc.
    • Instead of instantiating concrete class references in your class, depend on an abstraction and allow your concrete dependencies to be given to you.
    OOD principle
  • Concrete Class Dependency
  • Allow dependency to be passed in
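    • The code shown on these two slides did not survive in the transcript; a minimal sketch of the contrast, with hypothetical names (OrderService, Mailer, etc. are illustrative):

          // Concrete class dependency: the collaborator is fixed and cannot be mocked.
          class OrderService {
              private final SmtpMailer mailer = new SmtpMailer(); // concrete dependency

              void place(Order order) {
                  mailer.send("order placed");
              }
          }

          // Allow the dependency to be passed in: a mock can be substituted in tests.
          interface Mailer {
              void send(String message);
          }

          class TestableOrderService {
              private final Mailer mailer; // abstraction, injected

              TestableOrderService(Mailer mailer) {
                  this.mailer = mailer;
              }

              void place(Order order) {
                  mailer.send("order placed");
              }
          }

          class Order { /* ... */ }
          class SmtpMailer { void send(String message) { /* real SMTP */ } }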
  • Some mock principles to follow
    • Mocking Interfaces and Classes outside Your Code
      • In general – do not mock code you do not own. For example – do not mock Active Directory or LDAP (Lightweight Directory Access Protocol); instead, create your own interface to wrap the interaction with the external API classes, like:
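      • A hedged sketch of such a wrapper (the interface and names are hypothetical, not a real LDAP API):

            // Own the abstraction; only this adapter touches the external classes.
            interface DirectoryService {
                boolean userExists(String userName);
            }

            class LdapDirectoryService implements DirectoryService {
                public boolean userExists(String userName) {
                    // delegate to the real LDAP/Active Directory calls here
                    return false; // placeholder
                }
            }
            // Tests then mock DirectoryService instead of the LDAP library itself.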
  • Rules of thumb
    • If you cannot test code – change it so you can.
    • Test first!
    • Only ONE concrete class per test – MOCK THE REST.
  • How many mocks????
    • How Many Mock Objects in any Given Test?
    • There should never be more than 2-3 mock objects involved in any single unit test.  I wouldn’t make a hard and fast rule on the limit, but anything more than 2 or 3 should probably make you question the design.  The class being tested may have too many responsibilities or the method might be too large.  Look for ways to move some of the responsibilities to a new class or method.  Excessive mock calls can often be a sign of poor encapsulation. 
    • Only Mock your Nearest Neighbor
    • Ideally you only want to mock the dependencies of the class being tested, not the dependencies of the dependencies. From painful experience, deviating from this practice creates unit test code that is very tightly coupled to the internal implementation of a class's dependencies.
  • The law of Demeter (LoD)
    • The Law of Demeter (LoD) is a simple style rule for designing object-oriented systems. "Only talk to your friends" is the motto.
    • Each unit should only use a limited set of other units: only units “closely” related to the current unit.
    • “Each unit should only talk to its friends.” “Don’t talk to strangers.”
    • Main Motivation: Control information overload. We can only keep a limited set of items in short-term memory.
    • Too many mocks, and mocking past your immediate neighbors, are symptoms of violating this principle.
  • Law of Demeter FRIENDS
  • “closely related”
  • Violations: dataflow diagram (object m reaches through its neighbors A and B to call p() and q() on the objects they return).
  • OO following of LoD: diagram (m instead calls foo2() and bar2() on its direct neighbors, which invoke p() and q() internally).
  • Testing is easy in isolation
  • Testing is harder with dependencies …
  • … so remove the dependencies (for developer testing)
  • Examples
    • To get a Mock Object, we need to
    • 1) Create a Mock Object for the interface we would like to simulate,
    • 2) Record the expected behavior,
    • 3) And switch the Mock Object to replay state.
    • EasyMock uses a record/replay metaphor for setting expectations. For each object you wish to mock, you create a mock object. To indicate an expectation, you call the method on the mock with the arguments you expect. Once you've finished setting expectations you call replay, at which point the mock finishes recording and is ready to respond to the primary object. Once done, you call verify.
    • testRemoveNonExistingDocument :
      • After activation in step 3, mock is a Mock Object for the Collaborator interface that expects no calls. This means that if we change our ClassUnderTest to call any of the interface's methods, the Mock Object will throw an AssertionError:
    • Verifying Behavior :There is one error that we have not handled so far: If we specify behavior, we would like to verify that it is actually used. The current test would pass if no method on the Mock Object is called. To verify that the specified behavior has been used, we have to call verify(mock):
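    • A minimal sketch of the full cycle, assuming the ClassUnderTest/Collaborator example from the EasyMock documentation (addListener, addDocument, and documentAdded come from that example):

          import static org.easymock.EasyMock.*;
          import org.junit.Test;

          public class DocumentTest {
              @Test
              public void addDocumentNotifiesCollaborator() {
                  // 1) create a Mock Object for the Collaborator interface
                  Collaborator mock = createMock(Collaborator.class);
                  ClassUnderTest classUnderTest = new ClassUnderTest();
                  classUnderTest.addListener(mock);

                  // 2) record the expected behavior
                  mock.documentAdded("New Document");

                  // 3) switch the mock to replay state, exercise the code, then verify
                  replay(mock);
                  classUnderTest.addDocument("New Document", new byte[0]);
                  verify(mock); // fails if documentAdded was never called
              }
          }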
  • Expecting an Explicit Number of Calls
    • Up to now, our test has only considered a single method call. The next test should check whether the addition of an already existing document leads to a call to mock.documentChanged() with the appropriate argument. To be sure, we check this three times
    • To avoid the repetition of mock.documentChanged("Document"), EasyMock provides a shortcut. We may specify the call count with the method times(int times) on the object returned by expectLastCall(). The code then looks like: expectLastCall().times(3);
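    • A sketch, again assuming the documentation's ClassUnderTest, where adding an already existing title triggers documentChanged on every listener:

          mock.documentAdded("Document");      // the first addition
          mock.documentChanged("Document");
          expectLastCall().times(3);           // three subsequent changes
          replay(mock);

          classUnderTest.addDocument("Document", new byte[0]);
          classUnderTest.addDocument("Document", new byte[0]);
          classUnderTest.addDocument("Document", new byte[0]);
          classUnderTest.addDocument("Document", new byte[0]);
          verify(mock);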
  • Specifying Return Values
    • For specifying return values, we wrap the expected call in expect(T value) and specify the return value with the method andReturn(Object returnValue) on the object returned by expect(T value).
    • As an example, we check the workflow for document removal. If ClassUnderTest gets a call for document removal, it asks all collaborators for their vote for removal with calls to byte voteForRemoval(String title) value. Positive return values are a vote for removal. If the sum of all values is positive, the document is removed and documentRemoved(String title) is called on all collaborators:
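    • A sketch of that workflow, assuming (as in the EasyMock documentation example) that removeDocument returns whether the removal vote passed:

          mock.documentAdded("Document");
          expect(mock.voteForRemoval("Document")).andReturn((byte) 42); // positive vote
          mock.documentRemoved("Document");
          replay(mock);

          classUnderTest.addDocument("Document", new byte[0]);
          assertTrue(classUnderTest.removeDocument("Document")); // org.junit.Assert.assertTrue
          verify(mock);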
  • Relaxing Call Counts
    • To relax the expected call counts, there are additional methods that may be used instead of times(int count):
    • times(int min, int max)
      • to expect between min and max calls,
    • atLeastOnce()
      • to expect at least one call, and
    • anyTimes()
      • to expect an unrestricted number of calls.
    • If no call count is specified, one call is expected. If we would like to state this explicitly, once() or times(1) may be used.
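    • For example, reusing the documentChanged expectation from above:

          mock.documentChanged("Document");
          expectLastCall().atLeastOnce();      // one or more calls
          // or: expectLastCall().times(1, 3); // between one and three calls
          // or: expectLastCall().anyTimes();  // any number of calls, including zero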
  • Flexible Expectations with Argument Matchers
    • If you would like to use matchers in a call, you have to specify matchers for all arguments of the method call.
    • There are a couple of predefined argument matchers available.
    • eq(X value)
      • Matches if the actual value equals the expected value. Available for all primitive types and for objects.
    • anyBoolean(), anyByte(), anyChar(), anyDouble(), anyFloat(), anyInt(), anyLong(), anyObject(), anyShort()
      • Matches any value. Available for all primitive types and for objects.
    • eq(X value, X delta)
      • Matches if the actual value is equal to the given value allowing the given delta. Available for float and double.
    • aryEq(X value)
      • Matches if the actual value is equal to the given value according to Arrays.equals(). Available for primitive and object arrays.
    • isNull()
      • Matches if the actual value is null. Available for objects.
    • notNull()
      • Matches if the actual value is not null. Available for objects.
    • same(X value)
      • Matches if the actual value is the same as the given value. Available for objects.
  • Matchers contd.
    • isA(Class clazz)
      • Matches if the actual value is an instance of the given class, or an instance of a class that extends or implements the given class. Null always returns false. Available for objects.
    • lt(X value), leq(X value), geq(X value), gt(X value)
      • Matches if the actual value is less/less or equal/greater or equal/greater than the given value. Available for all numeric primitive types and Comparable.
    • startsWith(String prefix), contains(String substring), endsWith(String suffix)
      • Matches if the actual value starts with/contains/ends with the given value. Available for Strings.
    • matches(String regex), find(String regex)
      • Matches if the actual value/a substring of the actual value matches the given regular expression. Available for Strings.
    • and(X first, X second)
      • Matches if the matchers used in first and second both match. Available for all primitive types and for objects.
    • or(X first, X second)
      • Matches if one of the matchers used in first and second match. Available for all primitive types and for objects.
    • not(X value)
      • Matches if the matcher used in value does not match.
    • cmpEq(X value)
      • Matches if the actual value is equal according to Comparable.compareTo(X o). Available for all numeric primitive types and Comparable.
    • cmp(X value, Comparator<X> comparator, LogicalOperator operator)
      • Matches if comparator.compare(actual, value) operator 0 where the operator is <,<=,>,>= or ==. Available for objects.
    • capture(Capture<T> capture)
      • Matches any value but captures it in the Capture parameter for later access. You can do and(someMatcher(...), capture(c)) to capture a parameter from a specific call to the method.
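    • A short sketch of matchers in use (reusing the Collaborator methods from earlier; once one argument of a call uses a matcher, all arguments of that call must):

          mock.documentChanged(startsWith("Doc")); // any title beginning with "Doc"
          expectLastCall().anyTimes();
          expect(mock.voteForRemoval(contains("ocu"))).andReturn((byte) 42);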
  • Limitations of Mocks
    • Can only mock interfaces and virtual members (generally)
  • Why not JMock
    • Too stringy – IntelliJ IDEA and Eclipse cannot refactor method names that are embedded in strings.
  • Why not NMock
    • Same thing – stringy!!!
  • Mocks to use
    • Rhino.Mocks (.net)
      • http://www.ayende.com/projects/rhino-mocks.aspx
    • EasyMock (java)
      • http://www.easymock.org/
      • NOT STRINGY!!!
  • Thank You [email_address]
  • Object oriented design principle
    • There are five principles of class design.
    • * (SRP) The SingleResponsibilityPrinciple
    • * (OCP) The OpenClosedPrinciple
    • * (LSP) The LiskovSubstitutionPrinciple
    • * (DIP) The DependencyInversionPrinciple
    • * (ISP) The InterfaceSegregationPrinciple
    • There are three principles of package cohesion
    • * (REP) The ReuseReleaseEquivalencePrinciple
    • * (CCP) The CommonClosurePrinciple
    • * (CRP) The CommonReusePrinciple
    • There are three principles of package coupling
    • * (ADP) The AcyclicDependenciesPrinciple
    • * (SDP) The StableDependenciesPrinciple
    • * (SAP) The StableAbstractionsPrinciple