In this webinar, To Automate or Not to Automate: 5 Things to Consider When Building Your Test Automation Strategy, Maciej Gryka, Sr. Data Scientist at Rainforest QA, explains how to build a balanced test automation strategy that delivers stellar customer experiences.
Listen to the full webinar by following this link: https://bit.ly/2HsgITi
1. To Automate or Not to Automate
5 Things to Consider when Building Your Test Automation Strategy
Dr Maciej Gryka
2. Agenda
1. Introduction
2. Creating a testing strategy
3. The knowns (unit testing, CI/CD, usability, pentesting)
4. The fuzzies: functional testing
5. Guidance on how things break
6. Final Takeaways
3. About Me
Maciej Gryka
Senior Manager
Rainforest QA
Maciej analyzes data and leads the science team at RainforestQA.
He enjoys solving data-related problems while defining roadmaps,
leading teams and projects, and planning releases. He’s currently
developing machine learning algorithms and helping define
Rainforest’s automation roadmap.
Specialities: R&D, Machine Learning
4. Quality Assurance, automation and dogma
- Quality Assurance is a means to an end.
- Very few best practices work equally well for every team.
- Developing a QA strategy for your specific case is key.
- Automation might fit into this picture, but there are important caveats.
5. Creating a QA Strategy
- Do you even need one?
- What is and isn’t a good strategy?
- Think about the trade-offs.
- To incorporate automation into your
strategy, you need to know what to expect
down the line.
6. Some things have simple answers
- You should have code reviews.
- You should have decent unit test coverage.
- You should use CI and execute your unit tests before
every deployment.
- You should not ship code without tests.
- Automating accessibility/usability/penetration testing is
probably not a great idea.
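The "decent unit test coverage" point can be made concrete with Python's built-in unittest module. The `slugify` helper below is hypothetical, purely to give the test something to check; in CI, a suite like this would run before every deployment.

```python
import unittest


def slugify(title):
    """Hypothetical production helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    def test_mixed_case(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase(self):
        self.assertEqual(slugify("already lower"), "already-lower")


# In CI this suite would run on every push, e.g.:
#   python -m unittest discover
```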
8. How much functional testing to automate
- Where things get less clear is functional testing: while it’s possible to
automate it, it’s certainly not easy.
- In a way, functional testing is the essence: do things even work?
- It can be tricky to know how much to automate, because there are multiple
areas where you can spend your effort and they provide different return on
your investment depending on the scenario.
13. Automated tests often break for similar reasons
- If you wrote Selenium (or similar) tests before, you probably know this.
- Failures repeat, e.g. it’s tricky to robustly specify how to find an element.
- Developing this intuition takes a lot of effort and it differs depending on your
background.
- We need a systematic way to talk about such problems.
- Scientists to the rescue!
Why do Record/Replay Tests of Web Applications Break?, Hammoudi et al., ICST 2016
14. Research methodology
1. Assemble a collection of web applications.
2. For each application, start with an early version.
3. Develop automated tests for that application version.
4. Update the app, running the tests at each update, recording and fixing
breakage reasons.
5. Convert the results into a taxonomy of failures, sorted by importance.
Why do Record/Replay Tests of Web Applications Break?, Hammoudi et al., ICST 2016
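The bookkeeping in steps 4–5 can be sketched in a few lines of Python. The version list and the breakage data returned by `run_tests` are invented stand-ins for the paper's real replay experiments; only the tallying logic is the point here.

```python
from collections import Counter


def run_tests(app_version):
    """Stub: pretend to replay the suite against one app version and
    report why each test broke. The data here is hard-coded purely to
    illustrate the bookkeeping, not taken from the actual study."""
    breakages = {
        "v2": ["locator change"],
        "v3": ["locator change", "value change"],
    }
    return breakages.get(app_version, [])


# Step 4: update the app, re-run the tests, record each breakage reason.
taxonomy = Counter()
for version in ["v1", "v2", "v3"]:
    for reason in run_tests(version):
        taxonomy[reason] += 1

# Step 5: sort breakage reasons by how often they occurred.
ranking = taxonomy.most_common()
```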
15. Automated tests often break for similar reasons
1. Element locator changes
2. Value changes
3. Page reloading
4. User session timing
5. Pop-up windows
Why do Record/Replay Tests of Web Applications Break?, Hammoudi et al., ICST 2016
17. Breakage reason 1: locator changes
1. You need to find interface elements to interact with and verify them.
2. This is done using “locators” such as element type, id, CSS class etc.
3. Such locators often change, even when the appearance stays the same.
4. What looks the same to a human might be very different to a testing script.
5. No easy way around this, fixing takes effort.
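The locator problem can be sketched without a real browser. The toy "DOM" below stands in for a page, and the element names and ids are invented for illustration: a front-end refactor renames the submit button's id while its visible text stays the same.

```python
# Two snapshots of the same page: the button looks identical to a user,
# but a front-end refactor renamed its id attribute between releases.
PAGE_V1 = [{"tag": "button", "id": "btn-submit", "text": "Submit"}]
PAGE_V2 = [{"tag": "button", "id": "checkout-submit", "text": "Submit"}]


def find_by_id(page, element_id):
    """Fragile locator: tied to an implementation detail."""
    return next((el for el in page if el["id"] == element_id), None)


def find_by_text(page, text):
    """Somewhat more robust locator: tied to what the user sees."""
    return next((el for el in page if el["text"] == text), None)


# The id-based locator works on v1 but silently breaks on v2...
assert find_by_id(PAGE_V1, "btn-submit") is not None
assert find_by_id(PAGE_V2, "btn-submit") is None
# ...while the text-based locator survives the refactor.
assert find_by_text(PAGE_V2, "Submit") is not None
```

Even "robust" locators are only more robust, not immune: if the button's label changes, the text-based locator breaks the same way.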
19. Breakage reason 2: value changes
1. To verify that your application reacts appropriately to certain input and
produces the correct output, you need to assume certain values.
2. Your back-end will change (e.g. updated password requirements, adding a
new field to a form).
3. You will have to update your automation script accordingly.
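A minimal sketch of the value-change problem, with hypothetical password rules: the automation script hard-codes a fixture that satisfies the old back-end requirements, so when the requirements change, the script fails even though the product works as intended.

```python
def validate_password_v1(pw):
    # Original back-end rule: at least 6 characters.
    return len(pw) >= 6


def validate_password_v2(pw):
    # Updated rule: at least 8 characters, including a digit.
    return len(pw) >= 8 and any(c.isdigit() for c in pw)


TEST_PASSWORD = "secret"  # value baked into an automation script long ago

# The script's assumption holds against the old back-end...
assert validate_password_v1(TEST_PASSWORD)
# ...but the same fixture is rejected once requirements change, so the
# script (not the product) is what broke and must be updated.
assert not validate_password_v2(TEST_PASSWORD)
assert validate_password_v2("secret42pass")
```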
21. Breakage reason 3: page reloads
1. All useful applications need to store some state. Very often, a page reload
affects that state.
2. E.g. a reload might be necessary to update the UI, or it might be critical for
the user not to reload until some requirement is met (have you ever partially
filled in a form only to lose your progress by accidentally reloading?).
3. The points at which reloads are necessary/dangerous change as your
developers work on the product.
4. The automation scripts need to be kept in sync, but when to update them is not always obvious.
23. Breakage reason 4: user session timing
1. In many cases, session timings are important: do you want to log out inactive
users after some time? Or remind someone that they’ve had items left in their
cart for a while?
2. Testing this is important too: it affects security, conversion rates, etc., and it’s
possible to automate.
3. As your implementation changes, your script will need to be updated as well.
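One way to test session timing without real waits is to inject the clock, as in the sketch below. The timeout value and class are hypothetical; the last assertion shows how a script that hard-codes the old timeout starts failing when the product shortens it, even though the log-out behaviour is correct.

```python
SESSION_TIMEOUT = 15 * 60  # seconds; hypothetical product setting


class Session:
    def __init__(self, now):
        self.last_activity = now

    def is_active(self, now, timeout=SESSION_TIMEOUT):
        return (now - self.last_activity) < timeout


# Injecting the clock keeps the test fast: no real 15-minute wait.
session = Session(now=0)
assert session.is_active(now=SESSION_TIMEOUT - 1)
assert not session.is_active(now=SESSION_TIMEOUT)

# If the product later shortens the timeout to 10 minutes, a script that
# still checks activity at the 11-minute mark finds the user logged out.
assert not session.is_active(now=11 * 60, timeout=10 * 60)
```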
25. Breakage reason 5: pop-ups
1. The presence or absence of pop-ups is another common breakage reason.
2. We’ve seen many examples of this ourselves at Rainforest: newsletters,
promotions, support integrations etc. all produce pop-ups.
3. While they are easy to ignore for automation scripts (less so for humans!),
they can be on a critical path and therefore in need of testing.
4. Updating their triggers means updating the automation.
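The pop-up scenario can be sketched as below; the checkout flow and pop-up contents are invented for illustration. A script written against the pop-up-free page breaks the day marketing adds a newsletter prompt, until its handling is updated.

```python
def run_checkout(page, dismiss_popups=False):
    """Toy automation step: click 'Buy' unless a pop-up is in the way."""
    if page.get("popup"):
        if not dismiss_popups:
            raise RuntimeError("pop-up blocked the click")
        page["popup"] = None  # close it, as a human would without thinking
    return "purchase complete"


calm_page = {"popup": None}
promo_page = {"popup": "newsletter sign-up"}

# The original script works on the pop-up-free page...
assert run_checkout(calm_page) == "purchase complete"

# ...but breaks once a pop-up appears on the critical path...
try:
    run_checkout(promo_page)
    script_broke = False
except RuntimeError:
    script_broke = True
assert script_broke

# ...until its trigger handling is updated to match.
assert run_checkout(promo_page, dismiss_popups=True) == "purchase complete"
```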
27. Final Takeaways
1. Develop your own testing strategy.
2. Cover the bases (e.g. unit tests, CI).
3. When automating tests, mind the maintenance costs and common pitfalls.
28. Thank you!
If you have additional questions, feel free to
reach out to me at maciej@rainforestqa.com