Anton Semenchenko
A Comparative Analysis
of Desktop AUT
Automation Tools
Agenda, part 1 (general)
1. Problem
2. Solutions 2016
Agenda, part 2 (tools and criteria)
1. Tools to be compared (15)
2. How and why we selected this list of tools
3. Comparison criteria types (3)
4. Stakeholder-oriented comparison criteria (7)
5. Mixed comparison criteria (7)
6. Technical-staff-oriented comparison criteria (8)
7. How and why we selected these lists of criteria
8. How to select the proper criteria for your project
Agenda, part 3 (comparison analysis)
1. Mixed comparison criteria
2. Technical-staff-oriented comparison criteria
3. Stakeholder-oriented comparison criteria
4. Define our “standard” context
5. Summarized scores
6. How to calculate scores
7. How to use scores / presentation
8. Four summarized tables
Agenda, part 4 (tools, “how to” and examples)
1. How to define the proper tool based on selected criteria
2. How to link information from the presentation to QA Automation metrics
3. How to link information from presentation to Project Health Check
4. How to link information from presentation to QA Automation ROI
5. Tools tiny overview
6. Tools overview structure
7. Example of tool usage structure
Agenda, part 5 (trends, science and “what’s next”)
1. Define a Trend! Is it possible ..?
2. Trend – an option
3. Why so?
4. What’s next
Problem
• There is an implicit leader for Web automation
Problem
• It’s not that simple when it comes to desktop apps
Tools to be compared
• TestComplete Desktop
• Unified Functional Testing (UFT)
• Ranorex
• Telerik Test Studio
• Zeenyx AscentialTest
• MS VS Coded UI
• CUIT
• AUTOIT
• Sikuli
• Jubula
• Robot Framework
• Winium
• WinAppDriver
• QTWebDriver
• PyWinAuto
How and why we selected this list of tools?
Comparison criteria types
1. Stakeholder-oriented
2. Technical-staff-oriented
3. Mixed
Stakeholder-oriented comparison criteria
1. Approximate complexity of auto-test development
2. Approximate complexity of auto-test support
3. Approximate “entrance” level
4. Required technical skills level
5. Test readability
6. How fast tests run
7. Ability to re-use the "Business-Logic" layer in another technical context
Mixed comparison criteria
1. Supported platforms
2. Supported technologies
3. Licensing
4. Maturity
5. Record-Play system support
6. Standard actions pack
Technical-staff-oriented comparison criteria
1. Programming language support
2. Tools for mapping
3. Custom architecture support
4. Data-driven testing support
5. Test-driven development support
6. Keyword-driven testing support
7. Behavior-driven development support
8. Continuous integration support
How and why we selected these lists of criteria?
How to select the proper criteria for your project
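Selecting criteria usually comes down to weighting them by project context. A minimal sketch of that idea, in plain Python; the criteria names, weights, and marks below are illustrative assumptions, not data from this presentation:

```python
# Hypothetical per-project criteria weights: a budget-constrained team
# might weight licensing heavily and ignore record-play entirely.
criteria_weights = {
    "licensing": 3,     # tight budget -> licensing matters most
    "maturity": 2,
    "ci_support": 2,
    "record_play": 0,   # no manual testers on the team -> irrelevant
}

def project_score(tool_marks: dict, weights: dict) -> int:
    """Weighted sum of marks: Good = +1, Bad = -1, neutral = 0."""
    value = {"Good": 1, "Bad": -1}
    return sum(weights.get(c, 1) * value.get(m, 0)
               for c, m in tool_marks.items())

# Illustrative marks for one tool under this project's weights.
sikuli = {"licensing": "Good", "maturity": "Bad",
          "ci_support": "Good", "record_play": "Bad"}
print(project_score(sikuli, criteria_weights))  # 3 - 2 + 2 - 0 = 3
```

The same tool can score very differently under another project's weights, which is the whole point of selecting criteria per project rather than reading a single ranking.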
Mixed comparison criteria
Supported platforms – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Supported platforms
Tool | Platforms | Mark
TestComplete Desktop | Windows |
Unified Functional Testing | Windows |
Ranorex | Windows |
Telerik Test Studio | Windows |
Zeenyx AscentialTest | Windows |
MS VS Coded UI ; CUIT | Windows |
AUTOIT | Windows |
Sikuli | Windows, Unix-like | Good
Jubula | Windows, Unix-like | Good
Robot Framework | Windows, Unix-like | Good
Winium / WinAppDriver ; QTWebDriver | Windows / Windows ; Cross-Platform | / ; Good
PyWinAuto | Windows |
Supported technologies – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Supported technologies
Tool | Technologies | Mark
TestComplete Desktop | C/C++, WinForms, WPF, Java, Qt |
Unified Functional Testing | WinForms, WPF, Java, SAP |
Ranorex | WinForms, WPF, Java, Qt, SAP |
Telerik Test Studio | WPF | Bad
Zeenyx AscentialTest | WinForms, WPF, Java | Bad
MS VS Coded UI ; CUIT | WinForms (partial), WPF | Bad
AUTOIT | OS level | Good
Sikuli | Image-recognition based | Good
Jubula | WinForms, WPF, Java | Bad
Robot Framework | Uses AutoIt (and others) internally | Good
Winium / WinAppDriver ; QTWebDriver | WinForms, WPF / Any ; Qt | Bad
PyWinAuto | Win32 API, WinForms (partial, Win32 API based) | Bad
Licensing – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Licensing
Tool | License | Mark
TestComplete Desktop | Paid | Bad
Unified Functional Testing | Paid | Bad
Ranorex | Paid | Bad
Telerik Test Studio | Paid | Bad
Zeenyx AscentialTest | Paid | Bad
MS VS Coded UI ; CUIT | Paid | Bad
AUTOIT | Free |
Sikuli | Open source | Good
Jubula | Open source | Good
Robot Framework | Open source | Good
Winium / WinAppDriver ; QTWebDriver | Open source | Good
PyWinAuto | Open source | Good
Maturity – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Maturity
Tool | Maturity
TestComplete Desktop | Good
Unified Functional Testing | Good
Ranorex | Good
Telerik Test Studio | Good
Zeenyx AscentialTest |
MS VS Coded UI ; CUIT | Good
AUTOIT |
Sikuli |
Jubula |
Robot Framework |
Winium / WinAppDriver ; QTWebDriver | Bad
PyWinAuto |
Record-Play support – do we really need it?
Record-Play support
Tool | Record-Play | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No |
MS VS Coded UI ; CUIT | No |
AUTOIT | No |
Sikuli | No |
Jubula | No |
Robot Framework | No |
Winium / WinAppDriver ; QTWebDriver | No |
PyWinAuto | No |
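What record-and-playback actually buys you is that user actions become data that can be re-executed. A conceptual sketch, not tied to any tool in the table; the action names and the fake driver are illustrative assumptions:

```python
# Minimal record-and-playback idea: a recorder captures actions as data,
# playback re-executes them against a driver (the tool's real API).
recorded = []

def record(action, *args):
    recorded.append((action, args))

# "Recording" a session: clicks and keystrokes become plain data rows.
record("click", "OK button")
record("type", "login field", "admin")

def playback(script, driver):
    # driver maps action names to callables (a hypothetical tool API)
    for action, args in script:
        driver[action](*args)

# A fake driver that just logs, so the sketch is runnable anywhere.
log = []
fake_driver = {
    "click": lambda target: log.append(f"clicked {target}"),
    "type": lambda target, text: log.append(f"typed '{text}' into {target}"),
}
playback(recorded, fake_driver)
print(log)  # ['clicked OK button', "typed 'admin' into login field"]
```

This also shows why recorded scripts are brittle: the data refers to concrete controls, so any UI change invalidates the recording.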
Standard actions pack – do we really need it?
Standard actions pack
Tool | STD actions | Mark
TestComplete Desktop | No |
Unified Functional Testing | No |
Ranorex | No |
Telerik Test Studio | No |
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | No |
AUTOIT | No |
Sikuli | Yes | Good
Jubula | Yes | Good
Robot Framework | No |
Winium / WinAppDriver ; QTWebDriver | No |
PyWinAuto | Yes / No (via SWAPY) |
Technical-staff-oriented comparison criteria
Programming languages – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Programming languages support
Tool | Language | Mark
TestComplete Desktop | Python, C#Script, JScript, C++Script, VBScript, DelphiScript | Good
Unified Functional Testing | VBScript | Bad
Ranorex | C#, VB.Net |
Telerik Test Studio | C#, VB.Net |
Zeenyx AscentialTest | Own DSL | Bad
MS VS Coded UI ; CUIT | C#, VB.Net |
AUTOIT | Own Basic-like language | Bad
Sikuli | Jython, Java |
Jubula | - |
Robot Framework | Own DSL, Java, Python |
Winium / WinAppDriver ; QTWebDriver | Java, JavaScript, PHP, Python, Ruby, C# | Good
PyWinAuto | CPython |
Tools for mapping – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Tools for mapping
Tool | Tools for mapping | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | Yes / No | Good
MS VS Coded UI ; CUIT | No |
AUTOIT | No |
Sikuli | Yes / No |
Jubula | Yes | Good
Robot Framework | No |
Winium / WinAppDriver ; QTWebDriver | No |
PyWinAuto | No |
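The value of a mapping tool is a UI map: technical locators live in one place and tests refer to logical names. A language-neutral sketch of that idea in Python; the control names and `automation_id` keys are illustrative assumptions, not any specific tool's format:

```python
# Sketch of a UI map (object repository): locators are centralized,
# tests use logical names. Names and keys here are hypothetical.
UI_MAP = {
    "login_button": {"automation_id": "btnLogin"},
    "user_field":   {"automation_id": "txtUser"},
}

def locator(name: str) -> dict:
    """Resolve a logical control name to its technical locator."""
    return UI_MAP[name]

# When a developer renames btnLogin, only UI_MAP changes, not the tests.
print(locator("login_button"))  # {'automation_id': 'btnLogin'}
```

Tools marked "Yes" above generate and maintain such a map for you; with the "No" tools you build the equivalent by hand.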
Custom architecture – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Custom architecture
Tool | Custom architecture | Mark
TestComplete Desktop | Yes / No |
Unified Functional Testing | Yes / No |
Ranorex | Yes / No |
Telerik Test Studio | Yes / No |
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No / Yes |
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
DDT support – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
DDT support
Tool | DDT support | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes / No | Good
Jubula | Yes | Good
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
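Data-driven testing (DDT) means one test body executed over many data rows. A minimal, tool-agnostic sketch; the `add` function and the data rows are illustrative assumptions, and in a real project the rows would come from CSV/XLS/DB as these tools allow:

```python
# Data-driven testing in miniature: one test body, many data rows.
def add(a, b):
    """Stand-in for the system under test (illustrative)."""
    return a + b

test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def run_ddt():
    """Run the single test body over every row; return failing rows."""
    failures = []
    for a, b, expected in test_data:
        if add(a, b) != expected:
            failures.append((a, b, expected))
    return failures

print(run_ddt())  # [] -> all rows passed
```

Tools with "Yes" in the table wire the row source (file or database) and the reporting for you; the loop above is the part they automate.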
TDD support – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
TDD support
Tool | TDD | Mark
TestComplete Desktop | Yes / No | Good
Unified Functional Testing | Yes / No | Good
Ranorex | Yes / No | Good
Telerik Test Studio | Yes / No | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes / No | Good
Jubula | Yes | Good
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
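In practice, "TDD support" mostly means the tool can be driven from a standard unit-test runner, so tests can exist before the implementation. A self-contained sketch using Python's `unittest`; the login function and test names are illustrative assumptions:

```python
# TDD in miniature: the tests exist before (and drive) the implementation.
# Any tool that plugs into a unit-test runner (unittest, NUnit, JUnit, ...)
# can support this workflow.
import unittest

def is_logged_in(username, password):
    # written *after* the tests below forced it into existence
    return username == "admin" and password == "secret"

class LoginTest(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(is_logged_in("admin", "secret"))

    def test_invalid_credentials(self):
        self.assertFalse(is_logged_in("admin", "wrong"))

suite = unittest.TestLoader().loadTestsFromTestCase(LoginTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```

For the "Yes / No" tools above, the limitation is usually that the tool's own runner cannot be embedded this way, even though its libraries can.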
Keyword-driven – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Keyword-driven support
Tool | Keyword | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes / No |
Telerik Test Studio | Yes / No |
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | Yes / No |
AUTOIT | No | Bad
Sikuli | Yes / No |
Jubula | No | Bad
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes / No |
PyWinAuto | Yes / No |
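Keyword-driven testing means tests are authored as tables of keywords and an interpreter maps each keyword to code; Robot Framework is built around exactly this model. A minimal sketch of the mechanism; the keyword names and actions are illustrative assumptions:

```python
# Keyword-driven testing in ~20 lines: tests are tables of keywords,
# an interpreter maps keywords to registered actions.
KEYWORDS = {}

def keyword(name):
    """Decorator that registers a function under a keyword name."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

events = []

@keyword("Open Application")
def open_app(path):
    events.append(f"open {path}")

@keyword("Click")
def click(control):
    events.append(f"click {control}")

# A non-programmer can author this table (in a real tool: a spreadsheet).
test_table = [
    ("Open Application", "calc.exe"),
    ("Click", "button 7"),
]

for kw, *args in test_table:
    KEYWORDS[kw](*args)

print(events)  # ['open calc.exe', 'click button 7']
```

The split matters for staffing: engineers maintain the keyword implementations, while testers compose tables without touching code.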
BDD support – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
BDD support
Tool | BDD | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No | Bad
Robot Framework | Yes / No |
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
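Mechanically, BDD support means Given/When/Then steps can be bound to tool code. Real projects would use Cucumber, SpecFlow, or behave; this bare-Python sketch only shows the binding idea, and the scenario content is an illustrative assumption:

```python
# BDD boiled down: each Given/When/Then step is a method bound to code.
class LoginScenario:
    def given_a_registered_user(self):
        self.users = {"admin": "secret"}

    def when_the_user_logs_in(self, name, password):
        self.logged_in = self.users.get(name) == password

    def then_access_is_granted(self):
        assert self.logged_in

# Executing the scenario step by step, as a BDD runner would.
s = LoginScenario()
s.given_a_registered_user()
s.when_the_user_logs_in("admin", "secret")
s.then_access_is_granted()
print("scenario passed")
```

A tool gets a "Yes" above when its API can be called from such step definitions; the "No" tools keep their test logic locked inside their own runner.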
CI support – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
CI support
Tool | CI | Mark
TestComplete Desktop | Automated Build Studio |
Unified Functional Testing | Jenkins plugin |
Ranorex | Jenkins |
Telerik Test Studio | Bamboo |
Zeenyx AscentialTest | Test Execution Management |
MS VS Coded UI ; CUIT | Any | Good
AUTOIT | - / Any |
Sikuli | - / Any Java-compatible |
Jubula | No | Bad
Robot Framework | Jenkins plugin |
Winium / WinAppDriver ; QTWebDriver | Any | Good
PyWinAuto | Any | Good
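For the tools marked "Any", CI support reduces to having a command-line runner plus machine-readable reports. A hypothetical Jenkins pipeline fragment illustrating the shape; the agent label, test command, and report path are assumptions, not a recommended configuration:

```groovy
// Hypothetical Jenkins pipeline stage: any tool with a CLI runner and
// machine-readable reports (e.g. JUnit XML) fits CI this way.
pipeline {
    agent { label 'windows' }   // desktop UI tests need a desktop agent
    stages {
        stage('UI tests') {
            steps {
                bat 'python -m pytest tests/ --junitxml=results.xml'
            }
        }
    }
    post {
        always { junit 'results.xml' }   // publish results either way
    }
}
```

Tools with a vendor plugin (UFT, Robot Framework) replace the raw command with plugin steps; tools with "No" require wrapping their runner by hand.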
Stakeholder-oriented comparison criteria
Define our “standard” context
Approximate complexity of auto-test development
Tool | Development | Mark
TestComplete Desktop | ~3h |
Unified Functional Testing | ~3h |
Ranorex | ~2h | Good
Telerik Test Studio | ~2h | Good
Zeenyx AscentialTest | ~2h | Good
MS VS Coded UI ; CUIT | ~3h ; ~2h | ; Good
AUTOIT | ~1h | Good
Sikuli | ~2h | Good
Jubula | ~2h | Good
Robot Framework | ~4h |
Winium / WinAppDriver ; QTWebDriver | ~3h / 6h -> 2h | / Bad -> Good
PyWinAuto | ~1h | Good
Approximate complexity of auto-test support (per year)
Tool | Support | Mark
TestComplete Desktop | ~3h | Bad
Unified Functional Testing | ~3h | Bad
Ranorex | ~2h | Good
Telerik Test Studio | ~2h | Good
Zeenyx AscentialTest | ~3h | Bad
MS VS Coded UI ; CUIT | ~2h ; ~1h | Good
AUTOIT | ~4h | Bad
Sikuli | ~5h | Bad
Jubula | ~2h | Good
Robot Framework | ~1h | Good
Winium / WinAppDriver ; QTWebDriver | ~2h / 10h -> 1h | Good / Bad -> Good
PyWinAuto | ~2h | Good
Approximate “entrance” level – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Approximate “entrance” level
Tool | Level
TestComplete Desktop | High
Unified Functional Testing | High
Ranorex |
Telerik Test Studio |
Zeenyx AscentialTest |
MS VS Coded UI ; CUIT | High
AUTOIT | Low
Sikuli | Low
Jubula |
Robot Framework | High
Winium / WinAppDriver ; QTWebDriver | High ->
PyWinAuto |
Required “technical skills” level – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Required “technical skills” level
Tool | Level
TestComplete Desktop |
Unified Functional Testing |
Ranorex |
Telerik Test Studio |
Zeenyx AscentialTest | Low
MS VS Coded UI ; CUIT | High ;
AUTOIT | Low
Sikuli | Low
Jubula | Low
Robot Framework | High
Winium / WinAppDriver ; QTWebDriver | High ->
PyWinAuto | Low
Test readability – “the worst” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
Test readability
Tool | Level
TestComplete Desktop |
Unified Functional Testing |
Ranorex |
Telerik Test Studio |
Zeenyx AscentialTest | High
MS VS Coded UI ; CUIT |
AUTOIT | Low
Sikuli | High
Jubula | High
Robot Framework | -> High
Winium / WinAppDriver ; QTWebDriver | -> High
PyWinAuto | High
How fast tests run – “the best” tool?
1. TestComplete Desktop
2. Unified Functional Testing (UFT)
3. Ranorex
4. Telerik Test Studio
5. Zeenyx AscentialTest
6. MS VS Coded UI
7. CUIT
8. AUTOIT
9. Sikuli
10. Jubula
11. Robot Framework
12. Winium
13. WinAppDriver
14. QTWebDriver
15. PyWinAuto
How fast tests run
Tool | Level
TestComplete Desktop | Bad
Unified Functional Testing | Bad
Ranorex |
Telerik Test Studio |
Zeenyx AscentialTest |
MS VS Coded UI ; CUIT | Good
AUTOIT | Good
Sikuli | Bad
Jubula | Bad
Robot Framework | Good
Winium / WinAppDriver ; QTWebDriver | Good
PyWinAuto | Good
Ability to re-use the "Business-Logic" layer
Tool | BL re-use | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No | Bad
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
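The "Business-Logic" layer idea is that domain workflows (login, checkout) live above the driver, so they survive a change of technology. A minimal sketch in Python; the class names, control names, and the fake driver are illustrative assumptions:

```python
# Layered test architecture: the business-logic layer knows the workflow,
# the driver knows the technology. Swap the driver, keep the workflow.
class FakeDesktopDriver:
    """Stand-in for a real desktop-automation driver; it only logs."""
    def __init__(self):
        self.actions = []
    def click(self, control):
        self.actions.append(f"click {control}")
    def type(self, control, text):
        self.actions.append(f"type {text} -> {control}")

class LoginSteps:
    """Business-logic layer: pure workflow, no technology details."""
    def __init__(self, driver):
        self.driver = driver
    def login(self, user, password):
        self.driver.type("user_field", user)
        self.driver.type("password_field", password)
        self.driver.click("login_button")

driver = FakeDesktopDriver()
LoginSteps(driver).login("admin", "secret")
print(driver.actions)
```

Tools marked "Yes" let you structure code this way, so moving the product from desktop to web means replacing only the driver; "No" tools entangle workflow with their own runtime.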
Summarized scores
How to calculate scores
How to use scores / presentation
Stakeholder-oriented score
Tool | Score
TestComplete Desktop | -2
Unified Functional Testing | -2
Ranorex | +3
Telerik Test Studio | +3
Zeenyx AscentialTest | +1
MS VS Coded UI ; CUIT | +1
AUTOIT | +1
Sikuli | +3
Jubula | +2
Robot Framework | +2
Winium / WinAppDriver ; QTWebDriver | +2
PyWinAuto | +6
Mixed score
Tool | Score
TestComplete Desktop | +1
Unified Functional Testing | +1
Ranorex | +1
Telerik Test Studio | 0
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | -1
AUTOIT | +1
Sikuli | +4
Jubula | +1
Robot Framework | +2
Winium / WinAppDriver ; QTWebDriver | -2
PyWinAuto | -1
Technical-staff-oriented score
Tool | Score
TestComplete Desktop | +2
Unified Functional Testing | 0
Ranorex | +4
Telerik Test Studio | +4
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | +4
AUTOIT | -6
Sikuli | +4
Jubula | +1
Robot Framework | +4
Winium / WinAppDriver ; QTWebDriver | +6
PyWinAuto | +5
Summarized score
Tool | Score
TestComplete Desktop | +1
Unified Functional Testing | -1
Ranorex | +8
Telerik Test Studio | +7
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | +4
AUTOIT | -4
Sikuli | +11
Jubula | +4
Robot Framework | +8
Winium / WinAppDriver ; QTWebDriver | +6
PyWinAuto | +10
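The scores above follow directly from the per-criterion marks: Good counts as +1, Bad as -1, empty or neutral cells as 0, summed per criteria group. A runnable sketch of the calculation; the mark list used below is a hand-picked illustration, not the full data set:

```python
# Reproducing the scoring rule: Good = +1, Bad = -1, neutral = 0.
def score(marks):
    return sum({"Good": 1, "Bad": -1}.get(m, 0) for m in marks)

# Illustrative marks for Sikuli across the six mixed criteria
# (platforms, technologies, licensing, maturity, record-play, std actions).
sikuli_mixed = ["Good", "Good", "Good", "", "", "Good"]
print(score(sikuli_mixed))  # 4, matching the Mixed score table
```

The summarized score is then just the sum of the three group scores, which is why a single strong group (like PyWinAuto's stakeholder score) can dominate the total.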
How to define the proper tool based on selected criteria
How to
1. link information from the presentation to QA Automation metrics
2. link information from the presentation to Project Health Check
3. link information from the presentation to QA Automation ROI
Tools tiny overview
Tools overview structure
1. Pros
2. Cons
3. What kind of project / product / problem / situation each
tool can be used for
Example of tool usage structure
1. Several usage examples for each tool
— Example structure:
• Values:
— Value the individual
— Act as a team
— Strive for excellence
— Focus on customer
— Act with integrity
• Prisms:
— Technology
— Delivery
— Leadership
Project A
TestComplete Desktop
1. Pros
• Low entrance level
• High level of test-script flexibility
• Huge knowledge base (roughly at MSDN level)
• Wide choice of scripting languages that resemble common languages
2. Cons
• Very expensive license
• Very tool-specific scripting languages
Unified Functional Testing
1. Pros
• Low “entrance” level
• High level of test-script flexibility
• Good tech support
2. Cons
• Tight coupling with other HP solutions
• Very specific own DSL
Ranorex
1. Pros
• Low “entrance” level
• Scripted tests are written in common languages (C#, VB.Net)
• Good tech support
2. Cons
• Paid license
Telerik Test Studio (Desktop)
1. Pros
• Low “entrance” level
• Great parameterization of Keyword tests
• DDT support using common formats (CSV, XLS, DB)
• Converting tests to common languages (C#, VB.NET)
2. Cons
• WPF applications only
Zeenyx
1. Pros
• Supports complex logic
• Well-organized DDT
• Support for standard .Net libraries
2. Cons
• Takes time to learn
• Tool-specific DSL
MS VS Coded UI
1. Pros
• “Native” for Windows
• Supports a huge set of UI technologies
• Generated UI Map
• Ready-to-go infrastructure
• Good documentation and support
2. Cons
• License cost
• Relatively low-level API
MS VS Coded UI + CUIT
1. Pros
• The same as for MS VS Coded UI
• Elegant high-level API
2. Cons
• The same as for MS VS Coded UI
AutoIT
1. Pros
• Easy
• Universal
• Free
2. Cons
• No ready-to-use verification instruments
• A test is an .exe file
• No ready-to-use reports
Sikuli
1. Pros
• IDE is easy to learn and use
• Standard actions pack
• Tests can be written in common languages (Java, Python)
• Works on different platforms and with any application
• Free
2. Cons
• Low test reliability
• Slow test execution
• Cannot work with text
• Tests are complicated to maintain
Jubula
1. Pros
• IDE is easy to use
• Supports working from a requirements base
• Integrated DB for storing test data and results
• Free
2. Cons
• Lacks the flexibility inherent in scripted tests
• No CI support
Robot Framework
1. Pros
• Its own simple, easy-to-read keyword-based language
• Plugins for different IDEs
• Works with different OSes
• Support for different programming languages
• Tools for creating your own libraries
• Free
2. Cons
• High entrance level
Winium
1. Pros
• Familiar syntax and API
• Supports all the languages that are supported by Selenium WebDriver
• Free
2. Cons
• “Immature” testing tool
• Incomplete element-locating capabilities
• A lack of documentation
WinAppDriver
1. Pros
• Familiar syntax and API
• “Native” for Windows
• Free
2. Cons
• “Immature” testing tool
• Complicated in special-case usage
• A lack of documentation
QTWebDriver
1. Pros
• Familiar syntax and API
• Oriented to Qt applications / “native” for Qt (a unique tool)
• Free
2. Cons
• “Immature” testing tool
• Complicated in special-case usage
• A lack of documentation
PyWinAuto
1. Pros
• Extremely simple to use
• Easy to support
• Free
2. Cons
• Does not support all popular UI technologies
• CPython only
Define a Trend! Is it possible ..?
Trend
1. There is a potential leader for Desktop Automation
Why so?
“Scientific” technical proof of the trend
• Hegel’s dialectics
• The mathematical apparatus of bifurcation theory
• Sedov’s law of hierarchical compensation
• The Panov-Snooks vertical
• Big History
Why so?
Non-technical scientific proof of the trend
• Peter Drucker, “Management Challenges for the 21st Century”
Note: It’s a topic of the whole big conversation, and I’m sure we’re going to
get back to it, but not today…
How to
1. use this presentation in different project phases
2. use this presentation based on main project roles
What’s next (just a possible way)
• Shu
1. Use the presentation
2. Please follow the recommendations:
a) “How to select the proper criteria for your project”
b) “How to define the proper tool based on selected criteria”
c) “How to link information from the presentation to QA Automation metrics”
d) “How to link information from the presentation to Project Health Check”
e) “How to link information from the presentation to QA Automation ROI”
f) “How to use this presentation in different project phases”
g) “How to use this presentation based on main project roles”
What’s next
• Ha
1. Update the set of criteria
2. Update the set of tools
3. Update the presentation
4. Read the “scientific” proof of the trend
What’s next
• Ri
1. Re-read the “scientific” proof of the trend
2. Update the set of criteria
3. Update the set of tools
4. Update the presentation
5. Predict the “trend”
6. Manage the “trend”
Next iteration
• Move from static (Presentation) to dynamic (Application)
• For example, “https://telescope.epam.com”
CONTACT ME
Anton_Semenchenko@epam.com
semenchenko_anton_v
https://www.linkedin.com/in/anton-semenchenko-612a926b
https://www.facebook.com/semenchenko.anton.v
https://twitter.com/comaqa
Thanks for your attention
Anton Semenchenko
DPI.Solutions
EPAM Systems
www.comaqa.by
www.corehard.by

Антон Семенченко | (EPAM Systems, DPI.Solutions )Сравнительный анализ инструментов Desktop-ной автоматизации

  • 1.
  • 2.
    Agenda, part 1(general) 1. Problem 2. Solutions 2016
  • 3.
    Agenda, part 2(tools and criteria's) 1. Tools to be compared (15) 2. How why we selected this list of tools? 3. Comparison criteria types (3) 4. Stakeholders oriented comparison criteria (7) 5. Mixed comparison criteria (7) 6. Tech stuff oriented comparison criteria (8) 7. How why we selected these lists of criteria's? 8. How to select proper criteria's for your project
  • 4.
    Agenda, part 3(comparison analyses) 1. Mixed comparison criteria 2. Tech stuff oriented comparison criteria 3. Stakeholders oriented comparison criteria 4. Define our “standard” context 5. Summarized scores 6. How to calculate scores 7. How to use scores / presentation 8. 4 summarized tables
  • 5.
    Agenda, part 4(tools, “how to” and examples) 1. How to define proper tool based on selected criteria's 2. How to link information from presentation to QA Automation metrics 3. How to link information from presentation to Project Health Check 4. How to link information from presentation to QA Automation ROI 5. Tools tiny overview 6. Tools overview structure 7. Example of tool usage structure
  • 6.
    Agenda, part 5(trends, science and “what’s next”) 1. Define a Trend! Is it possible ..? 2. Trend – an option 3. Why so? 4. What’s next
  • 7.
    Problem • There isan implicit leader for Web automation
  • 8.
    Problem • It’s notthat simple if to talk about desktop apps
  • 9.
    Tools to becompared • TestComplete Desktop • Unified Functional Testing (UFT) • Ranorex • Telerik Test Studio • Zeenyx AscentialTest • MS VS Coded UI • CUIT • AUTOIT • Sikuli • Jubula • Robot Framework • Winium • WinAppDriver • QTWebDriver • PyWinAuto
  • 10.
    How whywe selected this list of tools?
  • 11.
    Comparison criteria types 1.Stakeholders oriented 2. Tech stuff oriented 3. Mixed
  • 12.
    Stakeholders oriented comparisoncriteria 1. Approximate complexity of auto-test development 2. Approximate complexity of auto-test support 3. Approximate “entrance” level 4. Required technical skills level 5. Tests readability 6. How fast tests run 7. Ability to re-use "Business-Logic" layer in other technical context
  • 13.
    Mixed comparison criteria 1.Supported platforms 2. Supported technologies 3. Licensing 4. Maturity 5. Record-Play system support 6. Standard actions pack
  • 14.
    Tech stuff orientedcomparison criteria 1. Programming languages support 2. Have tools for mapping 3. Self-Made architecture support 4. Data-Driven testing support 5. Test-Driven development support 6. Key-word driven 7. Behavior Driven Development support 8. Continues integration system support
  • 15.
    How whywe selected these lists of criteria's?
  • 16.
    How to selectproper criteria's for your project
  • 17.
  • 18.
    Supported platforms –“the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 19.
    Supported platforms Tool PlatformsMark TestComplete Desktop Windows Unified Functional Testing Windows Ranorex Windows Telerik Test Studio Windows Zeenyx AscentialTest Windows MS VS Coded UI ; CUIT Windows AUTOIT Windows Sikuli Windows, Unix-like Good Jubula Windows, Unix-like Good Robot Framework Windows, Unix-like Good Winium / WinAppDriver ; QTWebDriver Windows / Windows; Cross-Platform / ; Good PyWinAuto Windows
  • 20.
    Supported technologies –“the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10.Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 21.
    Supported technologies Tool TechnologiesMark TestComplete Desktop C/C++, WinForms, WPF, Java, Qt Unified Functional Testing WinForms, WPF, Java, SAP Ranorex WinForms, WPF, Java, Qt, SAP Telerik Test Studio WPF Bad Zeenyx AscentialTest Win Forms, WPF, Java Bad MS VS Coded UI ; CUIT Win Forms (partial), WPF Bad AUTOIT OS level Good Sikuli Image recognition based Good Jubula WinForms, WPF, Java Bad Robot Framework Uses AutoIT (and co inside) Good Winium / WinAppDriver ; QTWebDriver WinForms, WPF / Any ; QT Bad PyWinAuto Win32 API, WinForms (partial, Win32 API bases) Bad
  • 22.
    Licensing – “theworst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 23.
    Licensing Tool License Mark TestCompleteDesktop Paid Bad Unified Functional Testing Paid Bad Ranorex Paid Bad Telerik Test Studio Paid Bad Zeenyx AscentialTest Paid Bad MS VS Coded UI ; CUIT Paid Bad AUTOIT Free Sikuli Open source Good Jubula Open source Good Robot Framework Open source Good Winium / WinAppDriver ; QTWebDriver Open source Good PyWinAuto Open source Good
  • 24.
    Maturity – “theworst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10.Jubula 11.Robot Framework 12.Winium 13.WinAppDriver 14.QTWebDriver 15.PyWinAuto
  • 25.
    Maturity Tool Maturity TestComplete DesktopGood Unified Functional Testing Good Ranorex Good Telerik Test Studio Good Zeenyx AscentialTest MS VS Coded UI ; CUIT Good AUTOIT Sikuli Jubula Robot Framework Winium / WinAppDriver ; QTWebDriver Bad PyWinAuto
  • 26.
    Record-Play support –do we really need it?
  • 27.
    Record-Play support Tool Record-PlayMark TestComplete Desktop Yes Good Unified Functional Testing Yes Good Ranorex Yes Good Telerik Test Studio Yes Good Zeenyx AscentialTest No MS VS Coded UI ; CUIT No AUTOIT No Sikuli No Jubula No Robot Framework No Winium / WinAppDriver ; QTWebDriver No PyWinAuto No
  • 28.
    Standard actions pack– do we really need it?
  • 29.
    Standard actions pack ToolSTD actions Mark TestComplete Desktop No Unified Functional Testing No Ranorex No Telerik Test Studio No Zeenyx AscentialTest Yes Good MS VS Coded UI ; CUIT No AUTOIT No Sikuli Yes Good Jubula Yes Good Robot Framework No Winium / WinAppDriver ; QTWebDriver No PyWinAuto Yes / No (via SWAPY)
  • 30.
    Tech stuff orientedcomparison criteria
  • 31.
    Programming languages –“the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 32.
    Programming languages support ToolLanguage Mark TestComplete Desktop Python, C#Script, JScript, C++Script, VBScript, DelphiScript Good Unified Functional Testing VBScript Bad Ranorex C#, VB.Net Telerik Test Studio C#, VB.Net Zeenyx AscentialTest Own DSL Bad MS VS Coded UI ; CUIT C#, VB.Net AUTOIT Own Basic-like language Bad Sikuli Jython, Java Jubula - Robot Framework Own DSL, Java, Python Winium / WinAppDriver ; QTWebDriver Java, JavaScript, PHP, Python, Ruby, C# Good PyWinAuto CPython
  • 33.
    Tools for mapping– “the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 34.
    Tools for mapping ToolTools for mapping Mark TestComplete Desktop Yes Good Unified Functional Testing Yes Good Ranorex Yes Good Telerik Test Studio Yes Good Zeenyx AscentialTest Yes / No Good MS VS Coded UI ; CUIT No AUTOIT No Sikuli Yes / No Jubula Yes Good Robot Framework No Winium / WinAppDriver ; QTWebDriver No PyWinAuto No
  • 35.
    Custom architecture –“the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 36.
    Custom architecture Tool Customarchitecture Mark TestComplete Desktop Yes / No Unified Functional Testing Yes / No Ranorex Yes / No Telerik Test Studio Yes / No Zeenyx AscentialTest No Bad MS VS Coded UI ; CUIT Yes Good AUTOIT No Bad Sikuli Yes Good Jubula No / Yes Robot Framework Yes Good Winium / WinAppDriver ; QTWebDriver Yes Good PyWinAuto Yes Good
  • 37.
    DDT support –“the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 38.
    DDT support Tool DDTsupport Mark TestComplete Desktop Yes Good Unified Functional Testing Yes Good Ranorex Yes Good Telerik Test Studio Yes Good Zeenyx AscentialTest Yes Good MS VS Coded UI ; CUIT Yes Good AUTOIT No Bad Sikuli Yes / No Good Jubula Yes Good Robot Framework Yes Good Winium / WinAppDriver ; QTWebDriver Yes Good PyWinAuto Yes Good
  • 39.
    TDD support –“the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10.Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 40.
    TDD support Tool TDDMark TestComplete Desktop Yes / No Good Unified Functional Testing Yes / No Good Ranorex Yes / No Good Telerik Test Studio Yes / No Good Zeenyx AscentialTest No Bad MS VS Coded UI ; CUIT Yes Good AUTOIT No Bad Sikuli Yes / No Good Jubula Yes Good Robot Framework Yes Good Winium / WinAppDriver ; QTWebDriver Yes Good PyWinAuto Yes Good
  • 41.
    Key-word driven –“the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 42.
    Key-word driven support

    Tool | Key-word driven | Mark
    TestComplete Desktop | No | Bad
    Unified Functional Testing | No | Bad
    Ranorex | Yes / No | -
    Telerik Test Studio | Yes / No | -
    Zeenyx AscentialTest | Yes | Good
    MS VS Coded UI ; CUIT | Yes / No | -
    AUTOIT | No | Bad
    Sikuli | Yes / No | -
    Jubula | No | Bad
    Robot Framework | Yes | Good
    Winium / WinAppDriver ; QTWebDriver | Yes / No | -
    PyWinAuto | Yes / No | -
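What "keyword-driven" buys you can be sketched in a few lines of Python: test cases become data (rows of keywords and arguments), and only the keyword library contains code. All keyword names, controls, and actions below are invented for illustration.

```python
# A minimal keyword-driven core: each test case is plain data (keyword rows),
# and only the keyword library below contains code.

def open_app(state, name):
    state["app"] = name

def click(state, control):
    state.setdefault("clicks", []).append(control)

def verify_clicked(state, control):
    assert control in state.get("clicks", []), f"{control} was never clicked"

KEYWORDS = {
    "Open App": open_app,
    "Click": click,
    "Verify Clicked": verify_clicked,
}

def run_keyword_test(rows):
    # Interpret the rows: look each keyword up and apply it to shared state.
    state = {}
    for keyword, *args in rows:
        KEYWORDS[keyword](state, *args)
    return state

# A test case expressed the way keyword-driven tools store it: as a table.
TEST = [
    ("Open App", "Calculator"),
    ("Click", "7"),
    ("Verify Clicked", "7"),
]
```

Tools like Robot Framework or Zeenyx ship a much richer version of this interpreter; the design payoff is that non-programmers can author the `TEST` table.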
  • 43.
    BDD support – “the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 44.
    BDD support

    Tool | BDD | Mark
    TestComplete Desktop | No | Bad
    Unified Functional Testing | No | Bad
    Ranorex | Yes | Good
    Telerik Test Studio | Yes | Good
    Zeenyx AscentialTest | No | Bad
    MS VS Coded UI ; CUIT | Yes | Good
    AUTOIT | No | Bad
    Sikuli | Yes | Good
    Jubula | No | Bad
    Robot Framework | Yes / No | -
    Winium / WinAppDriver ; QTWebDriver | Yes | Good
    PyWinAuto | Yes | Good
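BDD support in these tools usually means Gherkin-style Given/When/Then scenarios bound to step functions. A framework-free sketch of that binding (behave, SpecFlow, or a tool's own engine would normally supply this glue; the user, screen, and steps are invented):

```python
# Each step is a function that reads and writes shared scenario state,
# exactly the contract BDD frameworks impose on step definitions.

class Context:
    """Shared state passed between steps, as BDD frameworks do."""

def given_a_logged_in_user(ctx):
    ctx.user = "admin"

def when_the_user_opens_settings(ctx):
    ctx.screen = f"settings:{ctx.user}"

def then_the_settings_screen_is_shown(ctx):
    assert ctx.screen.startswith("settings:")

def run_scenario(steps):
    ctx = Context()
    for step in steps:
        step(ctx)
    return ctx

SCENARIO = [
    given_a_logged_in_user,
    when_the_user_opens_settings,
    then_the_settings_screen_is_shown,
]
```

The "Yes" tools either read Gherkin text directly or integrate with a framework that maps scenario lines onto functions like these.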
  • 45.
    CI support – “the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 46.
    CI support

    Tool | CI | Mark
    TestComplete Desktop | Automated Build Studio | -
    Unified Functional Testing | Jenkins plugin | -
    Ranorex | Jenkins | -
    Telerik Test Studio | Bamboo | -
    Zeenyx AscentialTest | Test Execution Management | -
    MS VS Coded UI ; CUIT | Any | Good
    AUTOIT | - / Any | -
    Sikuli | - / Any Java-compatible | -
    Jubula | No | Bad
    Robot Framework | Jenkins plugin | -
    Winium / WinAppDriver ; QTWebDriver | Any | Good
    PyWinAuto | Any | Good
  • 47.
  • 48.
  • 49.
    Approximate complexity of auto-test development

    Tool | Development | Mark
    TestComplete Desktop | ~3h | -
    Unified Functional Testing | ~3h | -
    Ranorex | ~2h | Good
    Telerik Test Studio | ~2h | Good
    Zeenyx AscentialTest | ~2h | Good
    MS VS Coded UI ; CUIT | ~3h ; 2h | Good
    AUTOIT | ~1h | Good
    Sikuli | ~2h | Good
    Jubula | ~2h | Good
    Robot Framework | ~4h | -
    Winium / WinAppDriver ; QTWebDriver | ~3h / 6h -> 2h | Bad -> Good
    PyWinAuto | ~1h | Good
  • 50.
    Approximate complexity of auto-test support (per year)

    Tool | Support | Mark
    TestComplete Desktop | ~3h | Bad
    Unified Functional Testing | ~3h | Bad
    Ranorex | ~2h | Good
    Telerik Test Studio | ~2h | Good
    Zeenyx AscentialTest | ~3h | Bad
    MS VS Coded UI ; CUIT | ~2h ; 1h | Good
    AUTOIT | ~4h | Bad
    Sikuli | ~5h | Bad
    Jubula | ~2h | Good
    Robot Framework | ~1h | Good
    Winium / WinAppDriver ; QTWebDriver | ~2h / 10h -> 1h | Good / Bad -> Good
    PyWinAuto | ~2h | Good
  • 51.
    Approximate “entrance” level – “the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 52.
    Approximate “entrance” level

    Tool | Level
    TestComplete Desktop | High
    Unified Functional Testing | High
    Ranorex | -
    Telerik Test Studio | -
    Zeenyx AscentialTest | -
    MS VS Coded UI ; CUIT | High
    AUTOIT | Low
    Sikuli | Low
    Jubula | -
    Robot Framework | High
    Winium / WinAppDriver ; QTWebDriver | High ->
    PyWinAuto | -
  • 53.
    Required “technical skills” level – “the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 54.
    Required “technical skills” level

    Tool | Level
    TestComplete Desktop | -
    Unified Functional Testing | -
    Ranorex | -
    Telerik Test Studio | -
    Zeenyx AscentialTest | Low
    MS VS Coded UI ; CUIT | High ; -
    AUTOIT | Low
    Sikuli | Low
    Jubula | Low
    Robot Framework | High
    Winium / WinAppDriver ; QTWebDriver | High ->
    PyWinAuto | Low
  • 55.
    Test readability – “the worst” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 56.
    Test readability

    Tool | Level
    TestComplete Desktop | -
    Unified Functional Testing | -
    Ranorex | -
    Telerik Test Studio | -
    Zeenyx AscentialTest | High
    MS VS Coded UI ; CUIT | -
    AUTOIT | Low
    Sikuli | High
    Jubula | High
    Robot Framework | -> High
    Winium / WinAppDriver ; QTWebDriver | -> High
    PyWinAuto | High
  • 57.
    How fast tests run – “the best” tool? 1. TestComplete Desktop 2. Unified Functional Testing (UFT) 3. Ranorex 4. Telerik Test Studio 5. Zeenyx AscentialTest 6. MS VS Coded UI 7. CUIT 8. AUTOIT 9. Sikuli 10. Jubula 11. Robot Framework 12. Winium 13. WinAppDriver 14. QTWebDriver 15. PyWinAuto
  • 58.
    How fast tests run

    Tool | Level
    TestComplete Desktop | Bad
    Unified Functional Testing | Bad
    Ranorex | -
    Telerik Test Studio | -
    Zeenyx AscentialTest | -
    MS VS Coded UI ; CUIT | Good
    AUTOIT | Good
    Sikuli | Bad
    Jubula | Bad
    Robot Framework | Good
    Winium / WinAppDriver ; QTWebDriver | Good
    PyWinAuto | Good
  • 59.
    Ability to re-use the "Business-Logic" layer

    Tool | Re-use | Mark
    TestComplete Desktop | No | Bad
    Unified Functional Testing | No | Bad
    Ranorex | Yes | Good
    Telerik Test Studio | Yes | Good
    Zeenyx AscentialTest | No | Bad
    MS VS Coded UI ; CUIT | Yes | Good
    AUTOIT | No | Bad
    Sikuli | Yes | Good
    Jubula | No | Bad
    Robot Framework | Yes | Good
    Winium / WinAppDriver ; QTWebDriver | Yes | Good
    PyWinAuto | Yes | Good
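The "Yes" entries above mean the tool lets you keep business-level actions separate from tool-specific driver calls, so the same logic survives a tool switch. A minimal sketch with a fake driver (all class names and locators here are invented for illustration):

```python
# A reusable "Business-Logic" layer: tests call domain-level actions, and
# only the thin driver underneath is tool-specific.

class FakeDriver:
    """Stands in for a real tool driver (Winium, pywinauto, ...)."""
    def __init__(self):
        self.log = []

    def click(self, locator):
        self.log.append(("click", locator))

    def type_text(self, locator, text):
        self.log.append(("type", locator, text))

class LoginLogic:
    """Business-level action; switching tools only means switching `driver`."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_text("user_field", user)
        self.driver.type_text("password_field", password)
        self.driver.click("login_button")
```

Record-and-playback tools marked "No" tend to bake the equivalent of `LoginLogic` into recorded scripts, which is why the layer cannot be re-used there.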
  • 60.
  • 61.
  • 62.
    How to use scores
  • 63.
    Stakeholders oriented score

    Tool | Score
    TestComplete Desktop | -2
    Unified Functional Testing | -2
    Ranorex | +3
    Telerik Test Studio | +3
    Zeenyx AscentialTest | +1
    MS VS Coded UI ; CUIT | +1
    AUTOIT | +1
    Sikuli | +3
    Jubula | +2
    Robot Framework | +2
    Winium / WinAppDriver ; QTWebDriver | +2
    PyWinAuto | +6
  • 64.
    Mixed score

    Tool | Score
    TestComplete Desktop | +1
    Unified Functional Testing | +1
    Ranorex | +1
    Telerik Test Studio | 0
    Zeenyx AscentialTest | -1
    MS VS Coded UI ; CUIT | -1
    AUTOIT | +1
    Sikuli | +4
    Jubula | +1
    Robot Framework | +2
    Winium / WinAppDriver ; QTWebDriver | -2
    PyWinAuto | -1
  • 65.
    Tech stuff oriented score

    Tool | Score
    TestComplete Desktop | +2
    Unified Functional Testing | 0
    Ranorex | +4
    Telerik Test Studio | +4
    Zeenyx AscentialTest | -1
    MS VS Coded UI ; CUIT | +4
    AUTOIT | -6
    Sikuli | +4
    Jubula | +1
    Robot Framework | +4
    Winium / WinAppDriver ; QTWebDriver | +6
    PyWinAuto | +5
  • 66.
    Summarized score

    Tool | Score
    TestComplete Desktop | +1
    Unified Functional Testing | -1
    Ranorex | +8
    Telerik Test Studio | +7
    Zeenyx AscentialTest | -1
    MS VS Coded UI ; CUIT | +4
    AUTOIT | -4
    Sikuli | +11
    Jubula | +4
    Robot Framework | +8
    Winium / WinAppDriver ; QTWebDriver | +6
    PyWinAuto | +10
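The summarized score is the plain sum of the three partial scores (stakeholders oriented + mixed + tech stuff oriented). A quick arithmetic check in Python for a few of the tools, using the numbers from the previous score slides:

```python
# Per-tool partial scores taken from the three score slides above.
SCORES = {
    # tool: (stakeholders oriented, mixed, tech stuff oriented)
    "TestComplete Desktop": (-2, +1, +2),
    "Ranorex": (+3, +1, +4),
    "Sikuli": (+3, +4, +4),
    "PyWinAuto": (+6, -1, +5),
}

def summarized(tool):
    # Summarized score = sum of the three partial scores.
    return sum(SCORES[tool])
```

For example, Ranorex gives +3 +1 +4 = +8 and Sikuli gives +3 +4 +4 = +11, matching the summarized table.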
  • 67.
    How to define a proper tool based on selected criteria
  • 68.
    How to

    1. Link information from the presentation to QA Automation metrics
    2. Link information from the presentation to Project Health Check
    3. Link information from the presentation to QA Automation ROI
  • 69.
  • 70.
    Tools overview structure

    1. Pros
    2. Cons
    3. What kind of project / product / problem / situation a certain tool could be used for
  • 71.
    Example of tool usage structure

    1. Plus several examples of each tool’s usage
    Example structure:
    • Values:
    - Value the individual
    - Act as a team
    - Strive for excellence
    - Focus on customer
    - Act with integrity
    • Prisms:
    - Technology
    - Delivery
    - Leadership
  • 72.
  • 73.
  • 74.
  • 75.
    TestComplete Desktop

    1. Pros
    • Low “entrance” level
    • High level of test-script flexibility
    • Huge knowledge base (at about MSDN level)
    • Wide choice of scripting languages that look like common languages
    2. Cons
    • Very expensive license
    • Very specific own scripting languages
  • 76.
    Unified Functional Testing

    1. Pros
    • Low “entrance” level
    • High level of test-script flexibility
    • Good tech support
    2. Cons
    • Tight coupling to other HP solutions
    • Very specific own DSL
  • 77.
    Ranorex

    1. Pros
    • Low “entrance” level
    • Scripted tests are written in common languages (C#, VB.NET)
    • Good tech support
    2. Cons
    • Paid license
  • 78.
    Telerik Test Studio (Desktop)

    1. Pros
    • Low “entrance” level
    • Great parameterization of keyword tests
    • DDT support using common formats (CSV, XLS, DB)
    • Converts tests to common languages (C#, VB.NET)
    2. Cons
    • WPF applications only
  • 79.
    Zeenyx AscentialTest

    1. Pros
    • Supports complex logic
    • Great organization of DDT
    • Support for standard .NET libraries
    2. Cons
    • Takes time to learn
    • Specific own DSL
  • 80.
    MS VS Coded UI

    1. Pros
    • “Native” for Windows
    • Supports a huge set of UI technologies
    • Generated UI Map
    • Ready-to-go infrastructure
    • Good documentation and support
    2. Cons
    • License cost
    • Relatively “low-level” API
  • 81.
    MS VS Coded UI + CUIT

    1. Pros
    • The same as for MS VS Coded UI
    • Elegant “high-level” API
    2. Cons
    • The same as for MS VS Coded UI
  • 82.
    AutoIT

    1. Pros
    • Easy
    • Universal
    • Free
    2. Cons
    • No ready-to-use verification instruments
    • Test = exe file
    • No ready-to-use reports
  • 83.
    Sikuli

    1. Pros
    • IDE is easy to learn and use
    • Standard actions pack
    • Ability to write tests in common languages (Java, Python)
    • Works on different platforms and with any applications
    • Free
    2. Cons
    • Low test reliability
    • Slow test execution
    • No ability to work with text
    • Tests are complicated to maintain
  • 84.
    Jubula

    1. Pros
    • IDE is easy to use
    • Ability to work from a requirements base
    • Integrated DB for storing test data and results
    • Free
    2. Cons
    • Lacks the flexibility inherent in scripted tests
    • No CI support
  • 85.
    Robot Framework

    1. Pros
    • Its own simple, easy-to-read keyword-based language
    • Plugins for different IDEs
    • Works on different OSs
    • Support for different programming languages
    • Tools for creating your own libraries
    • Free
    2. Cons
    • High “entrance” level
  • 86.
    Winium

    1. Pros
    • Familiar syntax and API
    • Supports all the languages supported by Selenium WebDriver
    • Free
    2. Cons
    • “Immature” testing tool
    • Incomplete element-locating mechanism
    • Lack of documentation
  • 87.
    WinAppDriver

    1. Pros
    • Familiar syntax and API
    • “Native” for Windows
    • Free
    2. Cons
    • “Immature” testing tool
    • Complicated (in special-case usage)
    • Lack of documentation
  • 88.
    QTWebDriver

    1. Pros
    • Familiar syntax and API
    • Qt-application oriented / “Native” (unique tool)
    • Free
    2. Cons
    • “Immature” testing tool
    • Complicated (in special-case usage)
    • Lack of documentation
  • 89.
    PyWinAuto

    1. Pros
    • Extremely simple to use
    • Easy to support
    • Free
    2. Cons
    • Does not support all popular UI technologies
    • CPython only
  • 90.
    Define a Trend! Is it possible?
  • 91.
    Trend

    1. There is a potential leader for Desktop Automation
  • 92.
    Why so? “Scientific” technical proof of the Trend

    • Hegel’s dialectics
    • The mathematical apparatus of Bifurcation Theory
    • Sedov’s law of hierarchical compensation
    • The Panov-Snooks Vertical
    • Big History
  • 93.
    Why so? Non-technical scientific proof of the Trend

    • Peter Drucker, “Management Challenges for the 21st Century”
    Note: it’s a topic for a whole big conversation of its own, and I’m sure we’ll get back to it, but not today…
  • 94.
    How to

    1. Use this presentation in different project phases
    2. Use this presentation based on the main project roles
  • 95.
    What’s next (just a possible way)

    • Shu
    1. Use the presentation
    2. Please follow the recommendations:
    a) “How to select proper criteria for your project”
    b) “How to define a proper tool based on selected criteria”
    c) “How to link information from the presentation to QA Automation metrics”
    d) “How to link information from the presentation to Project Health Check”
    e) “How to link information from the presentation to QA Automation ROI”
    f) “How to use this presentation in different project phases”
    g) “How to use this presentation based on the main project roles”
  • 96.
    What’s next

    • Ha
    1. Update the set of criteria
    2. Update the set of tools
    3. Update the presentation
    4. Read the “scientific” proof of the Trend
  • 97.
    What’s next

    • Ri
    1. Re-read the “scientific” proof of the Trend
    2. Update the set of criteria
    3. Update the set of tools
    4. Update the presentation
    5. Predict the “Trend”
    6. Manage the “Trend”
  • 98.
    Next iteration

    • Move from static (a presentation) to dynamic (an application)
    • For example, “https://telescope.epam.com”
  • 99.