Quest for an adequate
autotest
...Coverage?
A. Pushkarev 2018
Who am I? Alex Pushkarev (Саша)
• ~ 11 years in IT
• Software Engineer [In test]
• Agile fan, XP practitioner
• Context-driven test school
• Test automation tech lead at WorkFusion
WorkFusion
WorkFusion is a software robots company. We use AI to
help large companies do with technology what they did
before with people.
• Smart Process Automation
• Chatbots
• SmartCrowd
• RPA Express
WorkFusion specifics that are not visible outside
• Complex domain with lots of innovations
(we have nobody to copy or replicate)
• The market is new and not well formed, yet
competition is already extremely high
(speed is everything)
• Real world and restricted resources
(we can't hire +500 people and "just do it")
Meet Control Tower
• An orchestrator between different small
applications and modules
• Oldest application with lots of legacy code
• Layered architecture with specifics
• Most of the time deployed "On-premise"
with little or no control from our side
Regression Testing at Platform
• Takes up to 10 person-weeks
• Only 30 percent automated
• TestRail as Test Management tool
Test Automation at Platform
• Layered ("Tiered") framework
in a separate repository
• Driven by dedicated "test developer"
specialists/team
• UI test automation focused
• Sophisticated report analysis/rerun tool
(hand-made and clumsy)
Test Automation coverage
By lines of code:
• unit and integration tests - < 30%
• UI tests - < 60%
By test cases
• unit and integration tests - Unknown
• UI tests - > 35%
Test automation effectiveness
• ~2500 autotests
• 4 hours for a full test run
• ~20 tests fail each run
• Lots of dirty hacks to make it fast and
stable
Test framework design scheme
Desired test automation state
• It is properly distributed between levels (unit,
integration, UI), so
• It takes a reasonable amount of time to run, to
• Provide reasonably accurate quality feedback
about the product
Effective test automation - TL/DR
• Focus: Speed
• < 20 minutes to run
• Catches 99% of Blocker issues
Reminder: currently ~50% and 4 hours to run
Adequate coverage
• Catches 99% of Blocker issues
(a blocker is an issue that stops us from moving to prod)
• 100%: there's no "duplicated" coverage,
no double checking
We know the problem; what would the
solution be?
"It is pointless to seek a solution if one already
exists. The question is how to deal with a problem
that has no solution."
(The Strugatsky brothers, "Monday Begins on
Saturday")
Typical solutions
• Scaling
• Test pyramid
[Pyramid diagram, layers from top: Manual, Slow, Slow'n'flaky, Proper auto tests]
(A. Solntsev, "Effective Testing Process")
Framework scaling
• Scaling is not free (additional environment and people resources are
necessary)
• Scaling will not work out of the box
• Scaling does not guarantee the expected end results
• If you scale an inefficient process, you get an inefficient process at scale
Test pyramid
• One can't test the untestable (testability was not among the priorities when
architecture decisions were made)
• Writing unit and integration tests for code implemented long ago
by somebody else is a pain and requires refactoring
• Refactoring looks more like re-implementation
• We can't afford to decrease velocity
In addition
• None of the approaches addresses the
coverage issue
Feature Tests Model
Feature test (#featuretests) - A test verifying a service or library as the customer
would use it, but within a single process. A few examples:
• Given a search service that returns tweets based on a query, a test that feeds
in a fake tweet, queries for that tweet, and verifies it’s found. This is clearly
not a single class test
• A test verifying a library such as netty using its public APIs only, perhaps
mocking JDK APIs for failure testing
Feature Tests ideas
• The terms "unit test" and "integration test" are vague and debatable
• Mockist vs. Classicist, Behaviour vs. State, Solitary vs. Social unit tests
• The value of unit/integration tests in assuring (external) quality is uncertain
• Test size may be a better indicator of test speed and stability than test
level
https://blog.twitter.com/engineering/en_us/topics/insights/2017/the-testing-renaissance.html
Two different views on Test Automation and
Feature tests
• Feature tests look like a good "middle ground"
between solely UI-focused and unit-test-focused
test automation
• They look like a good way to avoid overengineering
(adding unnecessary unit tests)
• They promise a way to map manual test
scenarios to automated scenarios
Why might it work?
• ...no promises. Nobody knows if it is going to work
• I couldn't find a better alternative yet
• Long upfront planning hasn't yielded significant results yet
• It is just an experiment to see if it moves us in the right direction, so
we can adjust afterwards
Show me the code
Two rules of the Feature Test Model
• If something is tested on an upper
level, it should not be "double-
checked" on a level below
• Regardless of test level, a test should
verify some specific feature
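As a sketch of the second rule, here is a hypothetical lower-level test (all names invented, loosely inspired by a user/task allocation service). Per the first rule it does not re-check input validation, which the UI suite already covers; regardless of level, it verifies one specific feature: assignment goes to the least-loaded worker.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the two rules of the Feature Test Model.
public class TwoRulesSketch {

    // Toy allocator: assigns each task to the least-loaded worker.
    static class TaskAllocator {
        private final Map<String, Integer> load = new HashMap<>();
        void register(String worker) { load.put(worker, 0); }
        String assign() {
            String best = null;
            for (Map.Entry<String, Integer> e : load.entrySet()) {
                if (best == null || e.getValue() < load.get(best)) best = e.getKey();
            }
            load.merge(best, 1, Integer::sum); // record the new assignment
            return best;
        }
    }

    public static void main(String[] args) {
        // Rule 1: no re-checking of input validation here (UI suite covers it).
        // Rule 2: verify exactly one feature - load-balanced assignment.
        TaskAllocator allocator = new TaskAllocator();
        allocator.register("alice");
        allocator.register("bob");
        String first = allocator.assign();
        String second = allocator.assign();
        if (first.equals(second)) {
            throw new AssertionError("second task should go to the idle worker");
        }
        System.out.println("feature verified: " + first + ", then " + second);
    }
}
```

Keeping each test pinned to one feature is what makes it possible to map manual test scenarios onto automated ones without overlap.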
Results'n'Lessons learnt

Lesson: Writing a "feature test" on a level below UI
after implementation is hard and usually
does not make much sense.
Possible solution: Try A.T.D.D.

Lesson: It is easy to miscommunicate something and
become a bottleneck.
Comments:
• If something can be understood in
a wrong way, it will be understood
in a wrong way
• Changes like this are impossible to make on
your own

Lesson: Progress is slow.
Comments: There's tons of code; it is not easy to change
anything fast.
• Do we need more people?
• Do we need to work on our skills?
• Is the approach itself wrong?
Next steps
• Carry on
• BVT (not all tests are equally useful :-P )
• Delete redundant/obsolete features and code
• Unit/Integration test initiative
• Join us and suggest your way?
We're hiring
• Talents can hide from our
recruiters..
• Or join us (cookies are the
past; we have
pancakes!)
https://www.workfusion.com/careers
Should you have any questions
Alex Pushkarev (Саша)
http://aqaguy.blogspot.com
https://www.linkedin.com/in/alexpushkarev/
https://twitter.com/aqaguy


Editor's Notes

  • #4–#15: What is adequate coverage
  • #16–#21, #23: User allocation service – cool work for test automation. Seemed to be difficult – I asked the devs where the logic is coded so I could add a hook for the dev environment. Turned out to be a bug.
  • #22: Typical test automation – separate repository, dedicated AQA resources, UI-level automation, layered framework, sophisticated report tool
  • #24: Imagine an ordinary project, a "spherical cow in a vacuum". We have some team (say, a Scrum team). Developers do their job, testers do theirs, managers manage and handle QPF and PDP. Stories get closed, tests are green, everybody is busy and happy. And then, suddenly!!!
  • #25–#26: And the next thing that may happen is that an angry guy like this shows up and tells us that the project has quality problems, and we need to solve them.