Keyword-driven automation has done more harm than good because it was almost always terribly implemented. It goes halfway toward building a DSL, and the other half falls short of being a Turing-complete dialect. From the same scenarios, a proper DSL can be built, one that relies on formal, declarative methods of test verification.
5. Problem – 2 : JSON Verification
Multiple Conformance Schemes
1. Nested object path exists
2. Field exists and is of a given type
3. Value of a field matches
4. All elements have to “be” something
5. At least one element has to “be” something
(A code sketch of these checks follows below.)
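Each of these conformance checks reduces to a small predicate over a parsed JSON tree. A minimal sketch follows, assuming Jackson as the JSON library and a made-up payload (the talk does not prescribe either):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonChecks {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload, for illustration only
        JsonNode root = new ObjectMapper().readTree(
                "{ \"user\": { \"id\": 42, \"roles\": [\"admin\", \"dev\"] } }");

        // 1. Nested object path exists
        boolean pathExists = !root.at("/user/id").isMissingNode();
        // 2. Field exists and is of a given type
        boolean typeOk = root.at("/user/id").isInt();
        // 3. Value of a field matches
        boolean valueOk = root.at("/user/id").asInt() == 42;
        // 4. All elements satisfy a condition (for-all)
        boolean allText = true;
        for (JsonNode r : root.at("/user/roles")) allText &= r.isTextual();
        // 5. At least one element satisfies a condition (there-exists)
        boolean hasAdmin = false;
        for (JsonNode r : root.at("/user/roles")) hasAdmin |= "admin".equals(r.asText());

        System.out.printf("%b %b %b %b %b%n", pathExists, typeOk, valueOk, allText, hasAdmin);
    }
}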
8. “Key” is NOT the Word

Grid Filter Verification:
Keyword        Arg1     Arg2
Open ( A )     grid     URL
Select ( A )   Filter   Starts With
Verify ( V )   Filter   ( HOW ? )

JSON Verification:
Keyword              Arg1   Arg2
GET ( A )            JSON   URL
Verify Field ( A )   ?      ?

Lacking:
•Branching
•Functions
•Data Driven
•Verifications
9. What Do We Know Of Expressions
There is NO element in the collection such that
the filter condition P(.) on that element is violated.
P := { startsWith(), endsWith(), equals(), … }

Function & Functionals:
•Select
•Map
•Join
•Find
•For-Iterators

Order Logic:
•For All
•There Exists

Common Logic:
•And
•Or
•Not
•Xor

Arithmetic:
•Add
•Subtract
•Multiply
•Divide

(These building blocks are sketched in code below.)
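All of these building blocks already exist in mainstream languages; the point is to surface them inside the DSL. A quick illustration with Java streams (a stand-in here, not the talk's own implementation):

import java.util.List;
import java.util.stream.Collectors;

public class Expressions {
    public static void main(String[] args) {
        List<String> names = List.of("alpha", "beta", "axiom");

        // Select: keep elements matching a predicate P(.)
        List<String> selected = names.stream()
                .filter(s -> s.startsWith("a")).collect(Collectors.toList());
        // Map: transform each element
        List<Integer> lengths = names.stream()
                .map(String::length).collect(Collectors.toList());
        // For All: every element satisfies P
        boolean forAll = names.stream().allMatch(s -> s.length() > 3);
        // There Exists: at least one element satisfies P
        boolean exists = names.stream().anyMatch(s -> s.endsWith("ta"));
        // Find: the first element satisfying P, if any
        String found = names.stream()
                .filter(s -> s.contains("xi")).findFirst().orElse(null);

        System.out.println(selected + " " + lengths + " " + forAll + " " + exists + " " + found);
    }
}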
10. A Stitch in Time – Declarative Paradigm
Example: filtering [ 1, 2, 3, 4, 5, 6, 7, 8 ] yields [ 6, 7 ].
Let F be the filter using predicate P(.) over collection C.
F is correct if and only if:
1. Every element ‘f’ in F has the property P(f) = True, AND f is in C, AND
2. No element ‘c’ exists in C such that P(c) = True AND c is NOT in F.
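The two conditions translate directly into code. A sketch of the declarative check, with the predicate 5 < x < 8 assumed so that it reproduces the [ 6, 7 ] example above:

import java.util.List;
import java.util.function.Predicate;

public class FilterCorrectness {
    // F is a correct filter of C under P iff:
    //  (1) every f in F satisfies P and belongs to C, and
    //  (2) no c in C satisfies P while being absent from F.
    static <T> boolean isCorrectFilter(List<T> c, List<T> f, Predicate<T> p) {
        boolean sound = f.stream().allMatch(x -> p.test(x) && c.contains(x));
        boolean complete = c.stream().noneMatch(x -> p.test(x) && !f.contains(x));
        return sound && complete;
    }

    public static void main(String[] args) {
        List<Integer> c = List.of(1, 2, 3, 4, 5, 6, 7, 8);
        List<Integer> f = List.of(6, 7);
        System.out.println(isCorrectFilter(c, f, x -> x > 5 && x < 8)); // true
    }
}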
11. DSL – Verifying Filters
Grid : Data Source
Filter : Predicate (Index Function)
Verify/Apply : Higher Order Function
[Diagram: a HIGHER ORDER FUNCTION takes an INPUT FUNCTION together with DATA TO INPUT, and applies the function to the data.]
Today we are going to start talking about something that almost no one wants to talk about – the efficacy of automation, and how to improve it.
It cannot be done by automating more, but by investing more in coding up front, and thereby coding less later. The idea of a Domain Specific Language will be introduced.
But before we begin, let me introduce myself. I started as an Embedded Systems Developer for biotechnology devices, as a contractor for Applied Biosystems, and joined MSFT in 2004 as an SDET. I created the first automation framework that was the gold standard in IDC for UI Automation for a couple of years. Later I moved into the Windows Kernel, in Application Compatibility.
That journey ensured I know the Windows core and related technologies – most importantly the CLR and the UI Automation core.
I switched industries to join D. E. Shaw as Performance and Automation Test lead, where I created multiple DSLs – ApiUnit, CHAITIN, GECO, nJexl – and, later as Product Manager, streamlined the automation there.
I then joined LinkedIn Bengaluru as a Staff Engineer, and from there moved to BayesTree as the first Employee & Partner.
Let’s begin. Automation has become so deeply ingrained that NO ONE wants to measure anything about it anymore. It is NOT cool to question it. But you know what? You should. You must.
So, what do we need to measure? Cost vs. benefit. What sort of cost?
Is it reliable? If I run it 10 times, how many times does it run properly? Repeat with 100 times. That is the nX measure, a very standard measure taken when I was at MSFT. And I guess it still is.
Cost reduction – what about it? Did it help at all? How do you know?
This talk will focus more on optimization. How can we write less automation code to do more, and keep reducing it further? And… don’t worry, I am not here to sell any tool. I am here to sell an idea, and to stop Automation Hell.
Yes, that is actually a thing. Automation Hell is where too many people are automating too small a thing – say, 10 people automating to deliver 10% of functional coverage.
Let’s start thinking about optimization, but first, let’s solve a couple of very standard problems.
How do we test this?
Ah, you say, this is nice code – and how neatly I am writing the verification code!
And now we have a similar, but different, problem. Again, how do we test it?
Don’t worry, we can write more code, more and more code to test the code… gazillions of lines… with the whole of the Indian population acting as testers… and it won’t be enough. Nah. It won’t be.
The previous slides bring us to the point of this talk. These are the problems. In fact, these are the ONLY problems. The last point ensures that there can never be any optimization.
But is that true? The proponents of keyword-driven automation make another point – business-driven development and testing – which is next.
This is a classic case of how a keyword-driven argument can be made out of the previous examples. One can counter that this is not true, that no business process is being verified here.
That is a moot argument because, formally speaking, a business process is nothing but a computable function – thereby making these sub-problems homomorphic to the original problem.
Eventually, one has to solve this, and then the question becomes: how?
It is one thing to have an abstraction, another to implement it correctly.
It turns out keywords were always a misnomer; they were really pointing in the direction of what we will discuss next – expressions and DSLs.
Let’s look at one of the problems – filtering – carefully.
We can see that we can formalize it as a logical statement.
Is it possible to have some kind of library support to do all of these, by default? Then, by definition, with the aid of such a library, one can implement any verification!
But before we get ahead of ourselves, let’s understand the pitfalls of the “imperative” paradigm.
There you go. What went wrong? We missed basic stuff. How? Because we were not thinking about truths and statements; we were thinking about implementation!
Truth is relative and axiomatic in nature. The formal method of proving truth is called theorem proving, and it is a mechanistic process; therefore, one can simply declare a statement about a system, and the machinery can verify that fact about the system.
Thus, the idea is to “declare” what we need to verify, rather than actually implement it. By that standard, the logic, the paradigm – everything we just did is a clear failure.
This is how we want the automation infra to work: declarative, but at the same time Turing complete.
Observe the use of defining the aspects of interest – the filter, the verification – while how the filter or the verification works is abstracted away.
That is one part of the declarative style.
Overall this is progress – and this is a custom computer language – a domain-specific language, we dare say!
We talk about higher-order functions – what are they? A higher-order function is a function that takes at least one other function as input, along with data, and applies that input function to the data to produce the output.
One such function is sorting, which uses a comparator. But, in our case, what about the TEST function itself? What is it supposed to do? Apply a filter function with a parameter; get the resulting values; and check, using another validation function, that value in, value out, and the parameter passed all line up. A sketch follows below.
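Put concretely, the TEST function is itself a higher-order function: it receives the filter under test and a validation predicate, applies the filter with a parameter, and checks every output value against that same parameter. A sketch, with all names hypothetical:

import java.util.List;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;

public class VerifyFilter {
    // Higher-order test: applies the filter with a parameter, then validates
    // each output value against the very parameter that was passed in.
    static <T, P> boolean testFilter(
            BiFunction<List<T>, P, List<T>> filterUnderTest,
            BiPredicate<T, P> validator,
            List<T> data, P parameter) {
        List<T> out = filterUnderTest.apply(data, parameter);
        return out.stream().allMatch(v -> validator.test(v, parameter));
    }

    public static void main(String[] args) {
        List<String> grid = List.of("alpha", "beta", "axiom");
        boolean ok = testFilter(
                (rows, prefix) -> rows.stream()
                        .filter(r -> r.startsWith(prefix)).collect(Collectors.toList()),
                (value, prefix) -> value.startsWith(prefix),
                grid, "a");
        System.out.println(ok); // true
    }
}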
Now we get into another problem. Who evaluates expressions? How do I even know what $VALUE evaluates to? What every DSL needs is an embedded expression language.
That is the next slide.
Because I have developed 4 DSLs on the JVM and only 1 on the CLR, I will talk about expression languages on the JVM.
On the left side we have languages which are embedded and do not claim to be full languages; on the right side, full language implementations on the JVM.
Anyone using Drools will know MVEL; Gradle is Groovy. ZoomBA is a pretty recent addition to the mix, and BayesTree uses ZoomBA in many places.
At LinkedIn I created a framework called Layman – a UI layout tester which employs ZoomBA as the expression language.
These languages provide ALU operations and functional evaluation when required in a DSL design. The JSR 223 standard provides scriptability on the JVM.
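Here is what wiring in an embedded expression language looks like via JSR 223, using the JavaScript engine that ships with many JDKs as a stand-in (ZoomBA, MVEL or Groovy plug in the same way once their jars are on the classpath):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ExpressionEval {
    public static void main(String[] args) throws Exception {
        // JSR 223: look up any registered scripting engine by name.
        // Note: "JavaScript" may be absent on JDK 15+ (Nashorn was removed).
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // The DSL host binds variables, then evaluates the embedded expression.
        engine.put("VALUE", 42);
        Object result = engine.eval("VALUE > 10 && VALUE < 100");
        System.out.println(result); // true
    }
}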
The end result: the DSL becomes Turing complete – that is, it becomes a generic language in which I can have expressions and conditional jumps.
Given that BayesTree uses ZoomBA, I will showcase why we use it.
Observe how the logic expressions match, literally line by line, the very SQL-ish syntax of the expression language.
In Step 2, we introduce currying – partial functions which are evaluated as expressions from strings via an automatic eval function.
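For readers unfamiliar with the term: currying turns a multi-argument function into a chain of one-argument functions, so a parameter can be bound early and the resulting partial function reused. A plain-Java illustration of the underlying idea (ZoomBA's own Step 2 syntax is what the slide shows):

import java.util.function.Function;

public class Currying {
    public static void main(String[] args) {
        // A two-argument predicate expressed as a chain of one-argument
        // functions: bind the prefix first, reuse the partial function later.
        Function<String, Function<String, Boolean>> startsWith =
                prefix -> value -> value.startsWith(prefix);

        Function<String, Boolean> startsWithA = startsWith.apply("a"); // partial application
        System.out.println(startsWithA.apply("alpha")); // true
        System.out.println(startsWithA.apply("beta"));  // false
    }
}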
We can well understand the implications: a massive reduction in code size, and better maintainability and testability in the test code itself.
The synthesis of all these concepts, when put together, gets us to this.