Fusion Testing - Maximizing Software Test Execution



Fusion Testing is a hybrid test methodology created to help test teams execute more efficiently, maximizing the hours teams spend actually exercising the code under test. The methodology has been refined through Agile implementation.

  • To provide information to the business about the risk of releasing a product to our customers. With this mission we can conclude that utilizing many different methods of testing will provide more of the data needed to make decisions. Our common goal should be empowering engineers to spend their time finding all the data possible and worrying less about artifacts to present that data. We need engineers finding more data, through increased test execution!
  • Focus: start your day with 15 minutes of thought. What are your goals for the day or test focus? Write them down.
    • Create a goal for the day
    • Map out how to reach that goal through test execution
    • Determine the structure of testing needed: detailed, automated, exploratory
    • Identify the environment or resources that are needed
    • Discuss your plan with other team members
    • Write down your items to focus your day
    • Be prepared to adjust your goals based on your exploration
  • Use: how is the system under test going to be used by your customers (all customers)?
    • Create test personas of your customers to help you emulate product usage
    • Work with your Product Managers, BAs, and other team members to gather how the system is expected to be used
    • Identify non-common usage scenarios and analyze the risk to operations
    • Review message boards to see what your customers are saying about the product and its usage
    • Talk to your support personnel and review the usage tickets that don't typically make it to engineering teams; organize a session with support to listen to the calls
    • Take all of this knowledge and apply it within the test checklists and other test structure
  • Scope: what scope of testing do you need to accomplish – detailed structured testing or exploratory models? How detailed do your testing artifacts need to be? Understand performance needs and the scale of the testing needed.
    • Mapquest driving directions (structured testing): detailed, step by step; once you reach a problem it is hard to route around it; limits the number of paths to your destination
    • Treasure map (exploratory testing): looser guidelines; provides a goal and guidance on how to reach it; not all variables are the same, leading to different paths
    • Guidance system (direct to automation): automate the exact steps; you need both the details and the goal; variables need to be defined and specifically tested
  • Initiate: plan less and execute more. Get your hands into the product and initiate different sequences to find more data.
    • Only plan at a daily level and then start testing
    • Drive testing in different ways based on the information you gathered from your users
    • Work with the other people on your team to cover more options
    • Execute unexpected tests
    • Write automation to help in your testing
    • Talk to engineers about testing principles and initiate up-front quality: discuss with the developer to learn the limits of the code, shoulder surf, and bring up tests that they might keep in mind while developing
    • Learn more about unit testing, testing methodologies, automation and products
  • Organize: have a plan, and be ready to deviate from it based on the reaction of the system.
    • Take 10 to 15 minutes to organize your day
    • Define your test needs and prepare them to maximize your exploration
    • Keep your work space organized to improve efficiency
    • Prepare for your exploration by noting possible diversions: interrupt scenarios, failures, different paths to the same goal
    • Plan whether you are going to time-box your efforts, and figure out how you will deal with interruptions in the day
    • Identify any automation that can be executed to change the system state
    • Understand your work schedule for the day, including meetings, breaks, social time, lunch and test focus
    • Prepare a method to isolate and immerse yourself in your work, and let your peers and manager know that you are doing so
  • Note: it is important to be able to recreate your steps during your test execution.
    • Use a notepad to take notes while testing, whether structured or exploratory; if possible, use a keystroke-recording tool
    • Note the key paths of your test exploration; when you find something unexpected, retrace your notes to reproduce the problem
    • Instrument your automation to make it easy to reproduce steps, manually or automated
    • Use your daily notes to update test checklists
    • Share your notes with other testers or developers to communicate your findings
    • Use notes to identify different paths, whether an experiment succeeded or failed
    • Notes can be translated into defect reports after reproducing the problem and narrowing down the steps to failure
  • Examples for each:
    • Goals – look to the testing types (regression, performance, stability, functional, …) and decide what areas need to be covered for the project
    • Methods – determine the percentages of automation, structured tests and exploratory time needed
    • Test Lists – see my blog for specific examples of test lists and the power that they can have
    • Automate – automation is one of the most misunderstood tools in the test arsenal; we often spend too much time automating and maintaining automation for the wrong things. One of my automation engineers once spent two weeks writing a set of automated tools for a piece of functionality that would only be around for one month. Focus instead on performance and benchmark tests, but don't automate just for the sake of automating: if tests will only be run once and are not valuable afterwards, don't spend time automating them
    • Document – our goal is to increase test execution, so document plans, test results and metrics, but do not keep low-level details that require copious amounts of maintenance
  • Example from MX Logic on the timeline and how I guided the team to fully using Fusion. Five phases:
    • 1st – introduced structured testing
    • 2nd – introduced exploratory testing (75% structured, 25% exploratory, 10 to 25% automation coverage)
    • 3rd – introduced automated testing (50% exploratory, 50% structured, 50% automation coverage)
    • 4th – combined structured, exploratory and automation; introduced Fusion to the offshore team (75% exploratory, 25% structured, stopped maintaining a test management tool, first introduction of Agile, 50% automation coverage)
    • 5th – full Fusion implementation (test lists implemented, metrics fully supported, 90% exploratory, 10% structured, direct to automation, offshore team doing exploratory and direct to automation, test tools covering traceability)
    Currently in the 5th year of implementation:
    • Defect Removal Efficiency increased from 30% to 95% over those 5 years
    • The test team grew from 2 testers to 18, while the number of products supported grew from 1 to 6
    Guide your team through change:
    • Organizations often need to flip teams and processes to get the full benefit
    • It is easier to remain entrenched in proven philosophies
    • Relying on tried and tested artifacts can limit improvement
    Change message creation:
    • Articulate why changing is necessary
    • Growing complexity of testing requires more test execution
    • Reduce inefficiencies to improve data collection
    Mentor your teams to help them understand Fusion:
    • The key to implementation is communicating
    • Challenge the teams to increase test execution time
    • Continuously inspect and adapt to improve implementation
    Train your engineering teams in different principles:
    • Static, structured, exploratory and automation test techniques
    Metrics need to be implemented:
    • Identify the metrics that can skew quality data (test case coverage, pass/fail ratios)
    • Track the amount of time your team spends on test artifact maintenance
    • Track the number of issues your customers are finding after release
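Defect Removal Efficiency, cited above, is conventionally computed as the share of all known defects that were removed before release; a minimal sketch, with illustrative counts rather than the real MX Logic numbers:

```python
# Minimal sketch of Defect Removal Efficiency (DRE); the defect counts
# used below are illustrative, not taken from the talk.
def defect_removal_efficiency(internal_defects: int, customer_defects: int) -> float:
    """Percentage of all known defects that were removed before release."""
    total = internal_defects + customer_defects
    if total == 0:
        return 100.0  # nothing found anywhere: treat as fully efficient
    return 100.0 * internal_defects / total

# A 30% DRE means 30 of every 100 known defects were caught internally:
print(defect_removal_efficiency(30, 70))  # → 30.0
print(defect_removal_efficiency(95, 5))   # → 95.0
```

Moving from 30% to 95% DRE means the proportion of defects escaping to customers fell from 70% to 5%.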
  • Use one of our UATs as an example – the Tent and Party with the fun plan: over 50 people did testing, finding 5 showstoppers, 10 critical issues, and 10 usability issues
  • Add a funny story that relates metrics and life, maybe about creating this presentation. No matter what methodology you implement in testing, the interest will always lie in the data that can be presented to make business decisions. How can you accurately assess quality when the combinations of tests exceed the particles of the universe? Answer: you can't. What you can do is present what you have tested and what you did not test, and support the risk assessment used to order what was tested and what was not. Show the following on your report:
    • # of test ideas executed, based on risk assessment
    • # of test cases executed, with pass/fail ratios and a qualifier on the extent of coverage those tests represent
    • # of automated tests with pass/fail ratio, including how often those tests were run, with their diminishing-returns ratio
    • Performance/benchmark comparisons by build/iteration/release
    • Engineering team quality satisfaction rating – an important measure is how the development and test teams feel about the quality and stability of the product; their gut feelings are often more indicative of risk and quality than any other measure
    • Open bug count, not as an overall percentage of bugs but as a data point on potential issues that your customers may face and report
    For detailed instructions on these metrics see
    Find measures that represent quality and provide the most data. Focus on customer impact with your metrics and look to improve those statistics:
    • # of support issues raised per release
    • # of patches by severity per release
    • % of issues found by customers
    • Automation metrics: % pass/fail, performance metrics, diminishing returns, new coverage
  • In most instances, traceability matrices take up the majority of document maintenance time. So how can you avoid this? I am a supporter of traceability when it is scoped appropriately, but it is very time consuming to maintain a detailed traceability matrix that maps every requirement to every test case through to every defect report. Instead, look for opportunities to reduce the detail level:
    • Feature versus functionality – traceability often goes beyond the feature level to the functionality of a feature, so that any minor change to the functionality impacts the traceability. Define traceability at the feature level and keep test list items and automation at the functionality level.
    • Test case level traceability – instead of trying to detail every test case, ensure that there is a test checklist for each feature and map your traceability at the checklist level. This way you do not need to update your traceability matrix with each change to a test case, and since there are fewer detailed test cases to map to, you are mapping more to a test idea than to an actual test.
    • Defects – traceability of every defect is a manually intensive undertaking. Instead of updating a matrix with each bug, I design a bug system that makes it easy to track issues at the feature level. This way you can report traceability at a defect-grouping level rather than a single-issue level; for instance, feature X has had a total of 20 defects logged, with 2 severity-3 bugs currently open. Remember that defect metrics are very misleading as an overall measurement of quality; they are just data points of what has been found, not what has not been found.
    • Automation – again, the devil is in the details with traceability for automation. We absolutely need to know what is automated, what tests that automation executes, and what the expectations are for the automation outcome. However, mapping each automated test to each requirement, user story, user acceptance test, etc. can be a very large time expenditure that reduces the team's efficiency at exploring and finding data. Look for opportunities to tie your automation traceability into your test case traceability at the feature or test-checklist level rather than the individual test case level.
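The feature-level scoping described above can be sketched as one lightweight record per feature, with checklists, automation suites, and a defect summary instead of per-defect links. Every name and field below is illustrative, not from the talk:

```python
# Hypothetical sketch of feature-level traceability: trace to checklists,
# suites, and a grouped defect summary rather than to individual test cases
# or individual bugs. All identifiers here are made up for illustration.
traceability = {
    "feature-x": {
        "requirements": ["REQ-101", "REQ-102"],         # traced at feature level
        "test_checklists": ["checklist-login.md"],       # test ideas, not test cases
        "automation_suites": ["smoke_login"],            # suites, not single tests
        "defects": {"total": 20, "open_by_severity": {3: 2}},  # grouped, not per-bug
    },
}

def feature_summary(name: str) -> str:
    """One-line, feature-level traceability report for a feature."""
    f = traceability[name]
    open_bugs = sum(f["defects"]["open_by_severity"].values())
    return (f"{name}: {len(f['requirements'])} requirements, "
            f"{f['defects']['total']} defects logged, {open_bugs} open")

print(feature_summary("feature-x"))
# → feature-x: 2 requirements, 20 defects logged, 2 open
```

Because the matrix only changes when a feature, checklist, or suite is added, day-to-day test case edits cost no maintenance.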
  • Time to release went from 18 months to 6 months to 3 months, and from a single product release to 6 products releasing each quarter
  • Talk about how we changed the mentality of testing in Vietnam, and how the teams there are becoming agile by using Fusion.


  • 1. Fusion Testing: Maximizing Test Execution. By: James Tischart
  • 2. About Me
    • Director - Product Delivery SaaS at McAfee
    • 15+ years experience in testing and engineering
    • Multiple certifications in Agile and Testing
    • Passionate about Testing as an engineering discipline, a science and an art
    • Continue to challenge the status quo, test new approaches and always strive to improve
    • Support the legitimacy of the testing professional in the broader engineering world
  • 3. What is Fusion Testing?
    • An occurrence that involves the production of a union
      • Organization of structured testing
      • Freedom of exploratory testing
      • Rigors of automated testing
      • Combined into one test methodology
    • To achieve
      • Maximize code execution
      • Increased test coverage
      • Reduced test artifact and documentation
      • Higher quality for users
      • Improved data for organization
  • 4. Fusion
    • Focus – start your day with 15 minutes of thought
    • Usage – how will your users work with the system
    • Scope – decide on the scope of everything
    • Initiate – just go and explore
    • Organize – create a plan & be ready to deviate from it
    • Note – keep track of your exploration to retrace steps
  • 5. Fusion Testing Guidance
    • Maximize test execution with Fusion by:
    • Identify Goals for Testing
    • Choose the right mix of methodologies
    • Utilize Test Lists to guide exploration
    • Automate the Right Things
    • Document at the Right Detail
  • 6. Fusion Implementation
    • Guide your team through the change
    • Create a Change Message
    • Mentor your team members
    • Train the team in testing and engineering
    • Create Metrics to measure success & failure
  • 7. Fusion & Planning
    • Release Test Plan
      • Set the goals for the overall release
      • Plan the test environment
      • Organize the metrics and performance
      • Plan a budget, resources and a time frame
      • Plan your automation and tool set
      • Plan at a high level
    • Iterative Test Plan
      • Very effective in Agile implementations
      • Break up the testing into iterations
      • Low level details for the time frame
        • Keep to two weeks or less
        • Detail the testing to accomplish
        • Detail the performance tests
        • List what will not be tested
        • Identify risks to the plan
        • Plan what Regression Testing to run
        • Identify what test-lists need to be written
    • Daily Plan
      • Individually created each morning
      • Provides guidance and goals for the day
      • Details the execution plan
    • Test Lists
      • A grouping of test ideas
      • No specifics in the test list
      • Priority, variations & guide for exploration
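A test list as described above could be sketched as a small structure of prioritized test ideas; the field names and ideas below are my own illustration, not prescribed by the deck:

```python
# A test list is a prioritized grouping of test *ideas*, not step-by-step
# test cases. Everything here (fields, ideas, variations) is illustrative.
login_test_list = {
    "feature": "login",
    "priority": 1,
    "ideas": [
        "expired password paths",
        "concurrent sessions from two devices",
        "locale-specific usernames",
    ],
    "variations": ["browser", "mobile app", "API"],
}

# During a session, a tester picks ideas and explores; the list guides
# the testing rather than scripting it.
for idea in login_test_list["ideas"]:
    print(f"explore: {idea} (priority {login_test_list['priority']})")
```

Keeping the entries at the idea level is what lets the same list drive many different exploratory sessions without maintenance.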
  • 8. The Power of Many
    • Utilize different users of your system to bring fresh perspectives
    • Support personnel
    • Sales staff
    • Developers
    • Training
    • User groups
    • Product Management
    • Help guide the testing by providing checklists, environments and goals
    • Don’t give exact details or specific steps; this minimizes innovation
    • Make the event fun and you will have many people continuing to help
    • Have testing experts available to help the volunteers with problems
  • 9. Identify Defect Traits
    • For new products use historical traits from similar projects
    • Analyze Severity 1 Customer reported incidents
    • Determine the trends to those defects in coding & testing
    • Constrain the review to a calendar year of defects
    • Next analyze Sev 1 internally found and fixed issues and determine the traits
      • Inform the developers & testers of the traits
    • Create test lists or test ideas to cover these traits
    • Repeat the process with each severity level you use
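The trait analysis above amounts to counting trait tags per severity and source; a minimal sketch with hypothetical defect records (all fields and tags are illustrative):

```python
from collections import Counter

# Hypothetical defect records: severity, where the defect was found, and a
# "trait" tag describing the coding/testing gap that let it through.
defects = [
    {"severity": 1, "source": "customer", "trait": "missing input validation"},
    {"severity": 1, "source": "customer", "trait": "race condition"},
    {"severity": 1, "source": "internal", "trait": "missing input validation"},
    {"severity": 2, "source": "internal", "trait": "config drift"},
]

def trait_trends(defects, severity, source):
    """Count traits among defects of one severity and source, most common first."""
    return Counter(d["trait"] for d in defects
                   if d["severity"] == severity and d["source"] == source)

# Severity-1 customer-reported incidents, as the slide recommends analyzing first:
print(trait_trends(defects, 1, "customer").most_common())
```

Repeating the count per severity level, as the last bullet suggests, is just further calls with `severity=2`, `severity=3`, and so on.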
  • 10. Fusion & Automation
    • Determine your layers for automation:
    • Server
      • Databases, server processes
      • Machine interaction points
      • Process interaction with other layers
    • Middleware/API layers
    • User Interface
    • Optimize your test artifacts
      • Don’t duplicate test-cases
      • Use automation for traceability.
  • 11. Test Results & Metrics There will always be interest in the test data no matter the test structure. Q: How can you accurately assess quality when the testing combinations exceed the particles of the universe? A: You can’t! We need to present what was tested, what has not been tested, and support the assessment used to make this prioritization. Since you can’t test everything, here are some ideas of results to report:
    • # of Test Ideas Executed based on Priority
    • # of Test Cases passed/failed versus total coverage that the tests represent
    • # of Automated Tests Passed/Failed by # of executed times
    • Performance/Benchmark comparisons by build/iteration/release
    • Team Quality Satisfaction Rating – get the gut feeling of the team
    • Open defects to highlight potential issues that your customers may find
    • Go to for more details on these metrics
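Two of the reported numbers above, the pass/fail ratio and a diminishing-returns ratio for automation, could be computed as in this sketch (function names and figures are illustrative, not from the deck):

```python
# Illustrative metric helpers: pass/fail ratio for a test run, and a simple
# "diminishing returns" ratio: new defects found per execution of a suite,
# which trends toward zero as an automated suite ages.
def pass_fail_ratio(passed: int, failed: int) -> float:
    """Fraction of executed tests that passed."""
    executed = passed + failed
    return passed / executed if executed else 0.0

def diminishing_returns(defects_per_run: list[int]) -> float:
    """Average new defects found per suite execution."""
    runs = len(defects_per_run)
    return sum(defects_per_run) / runs if runs else 0.0

print(pass_fail_ratio(190, 10))              # → 0.95
print(diminishing_returns([8, 3, 1, 0, 0]))  # → 2.4
```

As the slide cautions, report these alongside a qualifier on coverage: a 95% pass rate over a narrow suite says little by itself.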
  • 12. Traceability Matrices
    Positives:
    • Can trace tests to requirement coverage
    • Displays what has been executed
    • Shows relationship between tests and features
    • Provides defect traceability to features
    Negatives:
    • Time consuming to create and maintain
    • Often out of date and misleading
    • Duplicates information from test cases and requirements
    • Dedication to frequent updates needed
    Do:
    • Tie your test lists to requirements or stories
    • Constrain your traceability to the feature level
    • Build traceability into the automated tests
    • Map automation to test lists or test specs
    Don’t:
    • Trace to the test case or test idea
    • Try to trace to the functional level
    • Duplicate information in many documents
    • Over complicate maintenance of multiple sources
  • 13. Implementation Challenges
  • 14. Challenge: Management
    • Identify the Challenges:
    • Reliance on historical metrics
    • Understand the current processes and practices
    • Decision-making timelines
    • Need traceability to feel confident of data
    • Respond to the Challenges:
    • Provide better metrics
    • Show how new process improves quality
    • Prove how decisions can be made faster
    • Review traceability needs and support them
  • 15. Challenge: PMO
    • Identify the Challenges:
    • Reliance on historical metrics
    • Struggle with the details of testing
    • Need predictability for costs and schedules
    • Require improved time to market
    • Respond to the Challenges:
    • Debunk misleading metrics
    • Show how more testing provides better data
    • Prove reduced costs with efficiency gains
    • Track lost opportunity cost on test artifacts
  • 16. Challenge: Engineering
    • Identify the Challenges:
    • Takes time away from coding
    • Testing can be tedious
    • Not their specialization
    • Rely on a serial approach to testing
    • Respond to the Challenges:
    • Test first design increases new coding time
    • Automated & exploratory tests are less tedious
    • Understanding testing improves code writing
    • Fewer defects will be logged with up-front tests
  • 17. Challenge: Regulation
    • Identify the Challenges:
    • Documentation Requirements
    • Formal or Standards Approval
    • Full traceability
    • Rigorous Automation & detailed test results
    • Respond to the Challenges:
    • Lean towards more structured testing
    • Use exploratory but document results
    • Plan for and work in shorter iterations
    • Automate more, document more
  • 18. Challenge: Outsourcing
    • Identify the Challenges:
    • Require more instruction and detail
    • Lack expertise in the product and markets
    • Multi-cultural communication
    • Formal process & document to support work
    • Respond to the Challenges:
    • Phase in Fusion and train on exploratory
    • Provide more details in documentation
    • Implement multiple communication paths
    • Support and train new processes
  • 19. Ensuring Success
    • Implement in stages
    • Measure the impact to customers (before & after)
    • Collect metrics on test execution time for a project before implementation
    • Introduce Fusion to cover about 25% of testing
    • Increase Fusion usage to 50% to further improve execution time
    • Continue to phase in until 75 to 90% of testing is Fusion
    • Monitor
    • Increases or decreases in execution time
    • Review documentation size, looking closely at the test artifacts
    • Measure the amount of direct automation
    • Review with the team the amount of time spent on maintenance
    • Continuously try to improve exploration time and increase efficiency
    • Adjust based on the needs of the project
    • One project may require more precise documentation
    • Another project may require less structure
    • Identify these requirements at the start of projects and adjust
    • Reduce the amount of time spent on artifact management
    • Inform people of the time impacts of data collection
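The before-and-after measurement this slide recommends can be sketched as the share of team hours spent executing tests versus maintaining artifacts; the hour figures below are invented for illustration:

```python
# Sketch of the before/after comparison: what fraction of total test effort
# goes to actually executing tests rather than maintaining artifacts.
# The hour figures are made up for illustration.
def execution_share(exec_hours: float, artifact_hours: float) -> float:
    """Fraction of total test effort spent executing tests."""
    total = exec_hours + artifact_hours
    return exec_hours / total if total else 0.0

before = execution_share(120, 80)  # baseline project, pre-Fusion
after = execution_share(170, 30)   # same team after phasing in Fusion
print(f"before={before:.2f} after={after:.2f} gain={after - before:+.2f}")
# → before=0.60 after=0.85 gain=+0.25
```

Re-running the comparison at each rollout stage (25%, 50%, 75-90% Fusion) shows whether execution time is actually increasing as intended.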
  • 20. Five Keys to Fusion
    • Detail – the right level of detail at the right time
    • Planning – consistently plan, execute and adjust
    • Automate - spend time on specifics in automation
    • Report – measure what truly represents the team and customer
    • POM – get more people involved to expand the breadth of test
  • 21. Conclusion