Tools and techniques for automated test execution have been around since before testing was even seen as an activity independent of debugging. As software has grown more complex, companies have embraced test automation as an integral part of their test strategy, primarily to augment human testing in pursuit of goals ranging from increased test coverage across a greater number of platforms to shorter release cycles. It is therefore surprising how many of these projects fail: according to some industry estimates, 85% of them fail to deliver any real ROI.
Along with answering the all-important questions of what to automate and when, test managers need to understand the implications of the various tools, techniques and frameworks available, a choice largely determined by the mission the project must accomplish. Will the underlying tool alone suffice, and what are its limitations? Should you use a keyword-driven framework, or design a domain-specific language on top of your tools? Given a fast-changing application, which of the many approaches will hold up better? Which can be learnt and adopted faster by a large team of human testers? These and many other questions can be answered only through the experience of applying the approaches to real projects.
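To make the keyword-driven option concrete, here is a minimal sketch of the idea: testers compose tests as tables of keywords and arguments, while a thin interpreter maps each keyword onto the underlying tool. The `FakeBrowser` class and all keyword names below are illustrative stand-ins, not any particular tool's API.

```python
class FakeBrowser:
    """Stand-in for the underlying automation tool (e.g. a browser driver)."""
    def __init__(self):
        self.log = []  # records each action so the test run can be inspected

    def open_page(self, url):
        self.log.append(("open", url))

    def type_text(self, field, text):
        self.log.append(("type", field, text))

    def click(self, element):
        self.log.append(("click", element))


# Keyword table: the vocabulary testers compose with, kept separate
# from tool-level code so non-programmers can write and read tests.
KEYWORDS = {
    "Open Page":  lambda b, url: b.open_page(url),
    "Enter Text": lambda b, field, text: b.type_text(field, text),
    "Click":      lambda b, element: b.click(element),
}


def run_test(browser, steps):
    """Interpret a test written as rows of (keyword, arguments...)."""
    for keyword, *args in steps:
        KEYWORDS[keyword](browser, *args)


# A test case expressed purely in keywords -- no tool API in sight.
login_test = [
    ("Open Page", "https://example.com/login"),
    ("Enter Text", "username", "alice"),
    ("Enter Text", "password", "secret"),
    ("Click", "submit"),
]

browser = FakeBrowser()
run_test(browser, login_test)
```

A domain-specific language takes the same separation one step further: instead of flat keyword rows, testers write in a small purpose-built syntax, trading a higher up-front design cost for more expressive, mission-specific tests.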
In this talk, based on our long experience of working on test automation projects, we help test managers understand and answer such questions. We trace the maturity of automated testing from tools to frameworks, and detail the nuances and pitfalls of each approach in light of a given mission. In addition, we show how, using the many available open source tools, test managers can develop robust yet cost-effective solutions.