This is a presentation given at the Hangzhou Scrum Forum 2009, sponsored by Perficient, China. The topic is how to incorporate automated functional testing into an agile project, and also some best practices, tips, and warnings.
www.perficient.com
A test automation framework defines an organization's way of doing things. It is a set of assumptions, concepts and tools that provide support for automated software testing.
What are the key drivers for automation? What are the challenges in Agile automation, and how do we deal with them? How should we automate, and who should do it? Which tool should we select: commercial or open source? What should we automate, and which features? Here is what our experience says.
This is my complete introductory course for Software Test Automation. If you need full training covering different automation tools (Selenium, JMeter, Burp, SOAP UI, etc.), feel free to contact me by email (amraldo@hotmail.com) or by mobile (+201223600207).
Building a Test Automation Strategy for Success, by Lee Barnes
Choosing an appropriate tool and building the right framework are typically thought of as the main challenges in implementing successful test automation. However, long term success requires that other key questions must be answered including:
- What are our objectives?
- How should we be organized?
- Will our processes need to change?
- Will our test environment support test automation?
- What skills will we need?
- How and when should we implement?
In this workshop, Lee will discuss how to assess your test automation readiness and build a strategy for long-term success. You will interactively walk through the assessment process and build a test automation strategy based on input from the group. Attend this workshop and you will take away a blueprint and best practices for building an effective test automation strategy in your organization.
• Understand the key aspects of a successful test automation function
• Learn how to assess your test automation readiness
• Develop a test automation strategy specific to your organization
Test Automation
Test automation is the use of test automation software such as Selenium, or self-developed testware, to execute test cases.
Test automation is mostly used to automate repetitive testing tasks in a formalized way. It is also used to execute tests that would be difficult to perform manually like performance testing.
There are many advantages to test automation, mostly related to the repeatability of the tests and the speed of test execution. Many commercial and open-source tools are available, and they can be grouped into two main categories: code-driven testing and graphical user interface (GUI) testing. The key success factors in test automation are therefore selecting the right tool and having a specialized test automation team.
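The "code-driven" category mentioned above exercises application code directly through its API rather than through the GUI. A minimal sketch in Python follows; `slugify` is a hypothetical function standing in for real application code, and the test style is plain assert-based checks:

```python
# A minimal code-driven test: it calls application code directly through
# its API, with no GUI involved. `slugify` is a hypothetical function
# standing in for real application code.

def slugify(title):
    """Hypothetical application function under test."""
    return "-".join(title.lower().split())

def test_spaces_become_hyphens():
    assert slugify("Test Automation Basics") == "test-automation-basics"

def test_idempotent_on_lowercase():
    assert slugify("selenium") == "selenium"

# Run the checks directly so the example is self-contained.
test_spaces_become_hyphens()
test_idempotent_on_lowercase()
print("all code-driven checks passed")
```

Because such tests bypass the GUI, they tend to run faster and break less often than GUI-driven scripts.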
The key is to adopt test automation tools with ROI (return on investment) in mind; otherwise it is easy to waste a great deal of energy, commitment, and, of course, money.
With more than 500 clients, Keytorc is the leading software testing company in the EMEA region, with the competencies to automate any kind of software across diverse industries.
For more information about test automation tools and Keytorc's test automation services, you can contact our performance test engineers: www.keytorc.com or blogs.keytorc.com
Test Automation:
Test automation means performing tests that are repetitive, or hard to carry out manually, using test automation tools such as Selenium or specially developed automation scripts. In this respect:
- Regression tests
- Performance tests
- Load and stress tests
- Test management
are the test types and activities best suited to automation.
The biggest benefits of test automation are:
- increasing the speed of tests
- increasing the coverage of tests
- increasing the accuracy of tests
- improving the reporting quality of tests.
If the right tool is not selected, or the work is not done by a specialist team that knows test automation, test automation does more harm than good.
To get in touch with the test automation team of Keytorc, the leading software testing company in the EMEA region: www.keytorc.com or blogs.keytorc.com
Using Selenium for automated testing, basic level: a short introduction to the selectors and basic methods used in writing a simple script with Selenium WebDriver.
A brief introduction to test automation covering different automation approaches, when to automate and by whom, commercial vs. open source tools, testability, and so on.
Presented by,
Ms. Anjali K G
Quality Assurance Engineer, Livares Technologies
Manual Testing: the process of testing an application manually for defects. It requires a tester to play the role of an end user.
Automation Testing: a technique that uses special software to control the execution of tests and the comparison of actual results with predicted results.
Example testing tools: QTP, WinRunner, Selenium, etc.
Test Automation Best Practices (with an SOA test approach), by Leonard Fingerman
Today we hear a lot of buzz about the latest and greatest test automation tools like Selenium, Rational Functional Tester, or HP LoadRunner, but making your test automation effort successful may take more than having the right tool. This presentation uncovers the major pitfalls typically involved in test automation efforts. It provides guidance on a successful strategy, as well as the differences among third-generation frameworks: keyword-driven, data-driven, and hybrid. It also covers various aspects of SOA test automation.
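The keyword-driven framework style mentioned above can be sketched in a few lines: test steps are authored as (keyword, arguments) rows, and a small interpreter maps each keyword to an action function. The keywords and the fake application state below are illustrative only, not any particular framework's API:

```python
# Keyword-driven framework sketch: tests are data, not code. Each row is
# (keyword, *args); the interpreter dispatches to an action function.
# APP_STATE is a stand-in for driving a real application.

APP_STATE = {"logged_in": False, "cart": []}

def do_login(user, password):
    APP_STATE["logged_in"] = True   # stand-in for a real login action

def do_add_to_cart(item):
    APP_STATE["cart"].append(item)

def check_cart_size(expected):
    assert len(APP_STATE["cart"]) == int(expected)

KEYWORDS = {
    "login": do_login,
    "add_to_cart": do_add_to_cart,
    "verify_cart_size": check_cart_size,
}

def run_test(steps):
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)

# A test case expressed as data, the way a non-programmer might author it:
run_test([
    ("login", "alice", "secret"),
    ("add_to_cart", "book"),
    ("add_to_cart", "pen"),
    ("verify_cart_size", "2"),
])
print("keyword-driven test passed")
```

A data-driven framework keeps the script fixed and varies only the data rows; a hybrid combines both ideas.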
Gauge is a lightweight, open-source, cross-platform test automation tool from ThoughtWorks which provides the ability to author test cases in a business-readable language.
By: Harmeet Singh & Vivek Mahajan
Create the Future: Innovations in Testing, by Anand Bagmar
My keynote talk, Innovations in Testing, delivered at vodQA Pune on Saturday, 6th June 2015, at ThoughtWorks, Pune.
Video available here: https://www.youtube.com/watch?v=EJcaUZYGDic
Arjuna: Reinventing the Test Automation Wheels, by Rahul Verma
Arjuna is a new-generation free test automation engine created by Rahul Verma. It has been coded from scratch; as some would say, it is a reinvention of the test automation wheel. But when was reinvention ever a bad thing? A wise man once said that whatever had to be created has already been created; we merely discover it. Arjuna is geared towards today's complex test automation needs while not forgetting the strong foundations laid by its predecessors and teachers, JUnit and TestNG. This happens to be the first formal presentation, and hence the official launch, of Arjuna by its creator, Rahul Verma.
Bhumika S and Anand Bagmar (ThoughtWorks)
How many times do we test the same things at multiple layers, multiple levels, adding time to the build process and testing cycle, delaying the feedback?
We know what to test and how to test, but what is the right place to test it?
In this workshop, we will demonstrate how we, as QAs, can identify which tests can be classified as unit tests, integration tests, and functional tests. Using a case study, we will see how each component can be tested as part of unit testing, how the integration of different parts and the functioning of the software system as a whole are tested, and how functional tests fit into this big picture. We will then bring all these tests together to understand and build the testing pyramid, and see how it enables us to build the right testing framework with fewer Selenium (i.e., functional) tests.
For enterprise projects, maintaining an automation test suite is always a challenge.
* Detailed walkthrough of the automation pyramid and its different specification-driven test layers
* A framework built with RSpec, Spinach, Cucumber, WebKit, and WebDriver, with examples
* The process of evolving and building specification tests
* Tips to maintain the test suite and minimize test run durations
From http://wiki.directi.com/x/AgAa - This is a 24-slide internal presentation covering the virtues of automated testing vs. manual testing. In keeping with our agile adoption, this presentation covers the various advantages (11, to be specific) of using TDD and automated testing as opposed to manual testing.
This talk aims to summarize the typical challenges encountered in testing mobile applications. At the ThoughtWorks Pune office we have developed multiple mobile applications across various platforms (mobile web, hybrid apps, native apps, apps for tablets, etc.). In this talk we bring together lessons learnt around mobile testing. This talk was presented by Vikrant Chauhan and Dubinsky De Soares.
Test Automation: Principles and Practices, by Anand Bagmar
Slides from my webinar for Sri Lanka Testing Community on - "Test Automation - Principles & Practices".
Details about the webinar can be found from my blog - http://essenceoftesting.blogspot.com
QA with Microsoft Test Manager and Lab Management, by Rofiqi Setiawan
Plan, manage, and execute tests with Microsoft Test Manager and Lab Management in Visual Studio 2013, which make it easier to conduct manual and automated testing across a variety of environments. This presentation covers the new exploratory testing approach offered by Microsoft Test Manager, the simplified setup and administration of Lab Management environments, and some of the other fit-and-finish features across the testing scenario.
Automated Software Testing Framework Training, by Quontra Solutions
Learn through experience: we differentiate our training and development program by delivering role-based training instead of product-based training. Ultimately, our goal is to deliver the best IT training to our clients.
In this training, attendees learn:
Introduction to Automation
• What is automation
• Advantages and disadvantages of automation
• Different types of Automation Tools
• What to automate in projects
• When to start automation; scope for automation testing in projects
• About open-source automation tools
Introduction to Selenium
• What is Selenium
• Why Selenium
• Advantages and disadvantages of Selenium
Selenium components
• Selenium IDE
• Selenium RC
• Selenium WebDriver
• Selenium Grid
Selenium IDE
• Introduction to IDE
• IDE Installation
• Installation and uses of Firepath, Firebug & Debug bar
• Property & value of elements
• Selenium commands
• Assertions & Verification
• Running, pausing and debugging script
• Disadvantages of Selenium IDE
• How to convert Selenium IDE scripts into other languages
Locators
• Tools to identify elements/objects
• Firebug
• IE Developer tools
• Google Chrome Developer tools
• Locating elements by ID
• Locating elements by name
• Locating elements by link text
• Locating elements by XPath
• Locating elements by CSS
• Summary
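The locator strategies above (ID, name, link text, XPath, CSS) all answer the same question: which element does a test step refer to? As a rough illustration using only the Python standard library (not Selenium itself), the sketch below mimics by-ID and by-name lookups on a small HTML snippet; in real Selenium code the equivalents are `driver.find_element(By.ID, ...)` and `driver.find_element(By.NAME, ...)`:

```python
# Illustration of locator strategies with the stdlib html.parser, as a
# stand-in for Selenium's element lookup. The HTML snippet is made up.
from html.parser import HTMLParser

HTML = '<form><input id="user" name="username"/><a href="/help">Help</a></form>'

class ElementIndex(HTMLParser):
    def __init__(self):
        super().__init__()
        self.by_id, self.by_name = {}, {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "id" in a:
            self.by_id[a["id"]] = tag      # locate by ID
        if "name" in a:
            self.by_name[a["name"]] = tag  # locate by name

index = ElementIndex()
index.feed(HTML)
print(index.by_id["user"])        # the tag found by ID
print(index.by_name["username"])  # the same element found by name
```

IDs are the most robust locators when they exist; name, CSS, and XPath are progressively more flexible but more fragile under page changes.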
Selenium RC
• What is Selenium RC
• Advantages of RC; architecture
• What is Eclipse/IntelliJ; configuring Selenium RC with Eclipse/IntelliJ
• Creating, running & debugging RC scripts
Java Concepts
• Introduction to OOP concepts and Java
• Installation: Java, Eclipse/IntelliJ, Selenium, TestNG/JUnit
• Operators in Java
• Data types in Java
• Conditional statements in Java
• Looping statements in Java
• Output statements in Java
• Classes & Objects
• Collection Framework
• Regular Expressions
• Exception Handling
• Packages, Access Specifiers /Modifiers
• String handling
• Log4J for logging
Selenium Web Driver with Java
• Introduction to WebDriver
• Advantages
• Differences between RC and WebDriver
• Selenium WebDriver commands
• Generating scripts in Eclipse/IntelliJ; running test scripts
• Debugging Test Script
• Database Connections
• Assertions, validations
• Working with Excel
• Pass the data from Excel
• Working with multiple browsers
• Window handling; alert/confirm and popup handling
• Mouse events
• Wait mechanism
• Rich web handling: calendar handling, auto-suggest, Ajax, browser forward/back navigation, keyboard events, certificate handling, event listeners
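The "wait mechanism" item above is central to Ajax-heavy pages: an explicit wait polls a condition until it holds or a timeout expires, which is what Selenium's WebDriverWait does. A generic version of that idea, using only the standard library (the "element" here is simulated):

```python
# Explicit-wait sketch: poll a condition until it returns a truthy value
# or the timeout elapses, mirroring Selenium's WebDriverWait behaviour.
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until truthy or until `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate an element that "appears" shortly after the page loads.
appeared_at = time.monotonic() + 0.3
element_visible = lambda: time.monotonic() >= appeared_at

wait_until(element_visible, timeout=2.0)
print("element became visible")
```

Explicit waits like this are preferable to fixed sleeps: they return as soon as the condition holds and fail loudly when it never does.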
TestNg/JUnit Framework
• What is TestNG/JUnit
• Integrating Selenium scripts and running them from TestNG/JUnit
• Reporting Results and Analysis
• Running scripts from multiple programs
• Parallel runs using TestNG/JUnit
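Parallel running, as TestNG/JUnit offer, pays off when tests are independent: total wall-clock time approaches the slowest test rather than the sum of all tests. The same idea in plain Python with a thread pool; the "tests" are illustrative sleeps standing in for slow, independent test cases:

```python
# Parallel test execution sketch using a thread pool. Each fake test
# sleeps to stand in for real test work, then reports PASS.
import time
from concurrent.futures import ThreadPoolExecutor

def make_test(name, duration):
    def test():
        time.sleep(duration)   # stand-in for real test work
        return (name, "PASS")
    return test

tests = [make_test("test_%d" % i, 0.2) for i in range(4)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(t) for t in tests]
    results = [f.result() for f in futures]
elapsed = time.monotonic() - start

print(results)
# With 4 workers the wall-clock time is close to one test's duration,
# not the sum of all four.
assert elapsed < 0.8
```

The caveat, as with TestNG's parallel modes, is that tests sharing mutable state (one browser session, one database row) cannot safely run concurrently.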
Automation Framework development in Agile testing
• Introduction to frameworks
SAP Test automation - fully automatic test of complex business processes incl..., by Tobias Trapp
We take the word "automation" very seriously: our version of test automation is indeed fully automatic. No one has to modify test data or any other input parameter in order to conduct regression tests. We support different kinds of input interfaces - dialogues, the IDoc interface, batch processing - and files are generated and submitted automatically. Even the checks to evaluate whether a test was successful are performed automatically.
A recent feature of our automatic test system is the controlled generation of XSF and RDI output files, and the automatic evaluation of the correctness of the file content. Our setup consists of a large number of isolated units which can be linked by exchanging parameters; the units are linked together through table entries. We have been up and running for some years.
We look forward to showing you our setup and discussing different aspects of automatic testing with you. In case you don't believe us - we aren't surprised, but please give us the chance to convince you.
Lightning Talks by Globant: Automation (This App Runs by Itself)
When you add new features to your application, a lot of things can happen. Do you believe an app can test itself through automation? Imagine testing everything manually after every change: how many people would be needed to complete the process? The power of automated testing in the development lifecycle lets us schedule and execute tests at any time, at scale, across thousands of mobile devices, websites, and multiple browsers simultaneously, making sure everything works as expected.
#DOAW16 - DevOps@work Roma 2016: Testing Your Databases, by Alessandro Alpi
These slides discuss how to unit test our programmability in SQL Server and how to move from a manual process to an automated one in order to achieve the goals of DevOps.
In this presentation, Suman gives a brief overview of various testing approaches and their usage. He notes that he is most interested in data-driven automation testing.
UiPath Test Automation using UiPath Test Suite series, part 4, by DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter Webinar: Integration with InfluxDB and Grafana, by RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
2. Introduction
Automation: testing which can be done programmatically.
Far more efficient than manual testing.
More complex than it appears.
3. Testing and Automation are Different
[Chart comparing a manual test, the first run of an automated test, and automated tests after many runs against four attributes of a good test: Economic, Evolvable, Exemplary, Effective]
4. Promises of Test Automation
Run existing tests on a new version of a program
Run more tests, more often
Perform tests which would be difficult or impossible to do manually
Better use of resources
5. Promises of Test Automation…
Consistency and repeatability of tests
Reuse of tests
Earlier time to market
Increased confidence
6. Automation and Agile
[Diagram: a timeline from the start of development through the ends of iterations 1, 2, and 3, with continuous integration throughout. In each iteration the development team creates application code, QA testers create tests, and the automation team automates those tests.]
8. Scripting
Test script: data and/or instructions with a formal syntax, used by a test execution automation tool, typically held in a file.
Writing scripts is much like writing a computer program.
Reduce the amount of scripting.
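"Reduce the amount of scripting" is often achieved with a data-driven script: one parameterised script fed a table of data, instead of a near-identical copy of the script per test case. A minimal sketch in Python; the login rule is a hypothetical stand-in for the system under test:

```python
# Data-driven scripting sketch: one script, many data rows. The password
# rule below is a made-up stand-in for real application behaviour.

def check_login(user, password, expect_ok):
    ok = (password == "secret")   # hypothetical system-under-test rule
    assert ok == expect_ok, "unexpected result for %s" % user

# The data table replaces many near-identical scripts.
CASES = [
    ("alice", "secret", True),
    ("bob", "wrong", False),
    ("carol", "secret", True),
]

for case in CASES:
    check_login(*case)
print("%d cases run from one script" % len(CASES))
```

Adding a test case now means adding a data row, not maintaining another script.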
9. Attributes of a Script Set
Number of Scripts
Size of Scripts
Function
Documentation
Reuse
Structured
Maintenance
10. Automated Comparison
Verification by comparison
Dynamic comparison
Post-execution comparison
Integration of test execution and post-execution comparison
11. Automated Comparison…
[Flowchart dividing the steps between manual tasks and tool tasks when execution and comparison are separate: start the test tool, select and run test cases; run test cases (including any dynamic comparisons); run the comparator(s); perform comparisons; determine test case success or failure; determine post-execution comparison success or failure.]
12. Automated Comparison…
[Flowchart of the integrated version, again divided between manual tasks and tool tasks: start the test tool, select and run test cases; run test cases (including any dynamic comparisons) and post-execution comparisons; determine test case success or failure; determine post-execution comparison success or failure.]
13. Testware Architecture
Testware: all the artifacts required for testing.
Architecture: the arrangement of all of these artifacts.
Test Set: a logical collection of testware artifacts.
Test Suite: a collection of Test Sets meeting a given test objective.
Testware Library: a repository of the master versions of all Testware Sets.
14. Testware Architecture…
[Diagram: a Test Suite is composed of a Script Set, Test Set, Data Set, and Utility Set, containing scripts, input, documentation, expected outcomes, data, and utilities. These configuration items are the testware artifacts, held under a baseline.]
15. Automating Pre & Post Processing
Manual Process
Pre & Post Processing
Select/Identify test cases to run
Set up test environment:
• Create test environment
• Load test data
Repeat for each test case:
• Set up test prerequisites
• Execute
• Compare results
• Log results
• Clear up after test case
Clean up test environment:
• Delete unwanted data
• Save important data
Summarize Results
Analyze test failures
Report defects
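The manual steps above map naturally onto a test framework's fixture hooks. A minimal sketch using Python's standard unittest module; the OrderTests class and its in-memory "database" are invented for illustration:

```python
import unittest

class OrderTests(unittest.TestCase):
    """Pre/post processing as fixture hooks: setUpClass creates the
    environment and loads data, setUp handles per-test
    prerequisites, tearDown clears up after each test case, and
    tearDownClass cleans up the environment."""

    @classmethod
    def setUpClass(cls):
        cls.db = {"orders": []}              # create environment, load data

    def setUp(self):
        self.db["orders"].append("fixture")  # per-test prerequisite

    def test_order_count(self):
        # execute and compare results in one step
        self.assertEqual(len(self.db["orders"]), 1)

    def tearDown(self):
        self.db["orders"].clear()            # clear up after test case

    @classmethod
    def tearDownClass(cls):
        cls.db = None                        # delete unwanted data

# The framework summarizes and logs the results automatically.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderTests))
```

Only analyzing failures and reporting defects remain as genuinely manual tasks; everything else in the list is automated by the hooks.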
16. Automating Pre & Post Processing…
Pre-processing tasks – Create, Check, Reorganize,
Convert
Post-processing tasks – Delete, Check, Reorganize,
Convert
Processing at different stages
What should happen after test case execution?
17. Limitations of Automation
Does not replace manual testing
Manual tests find more defects than automated tests
Greater reliance on the quality of the tests
Test automation does not improve effectiveness
Test automation may limit software development
Tools have no imagination
18. Career Opportunities
Test Automation Architect – designs the overall structure
of the automation
Test Automator – responsible for designing, writing,
and maintaining the automation software
Bridge between the Tester and the Tool
Good Programming Skills – SDET
Scripting – Perl, Python, Shell, sed, AWK, etc.
Debugging and Analysis
19. References
Software Test Automation - Dorothy Graham and Mark Fewster
Experiences of Test Automation - Dorothy Graham and Mark Fewster
Presentations and White Papers from cigital.com
Editor's Notes
The objective of this presentation is to give you an introduction to Test Automation: its importance in the context of today's agile methodologies, what is involved in automation, and its complexity and limitations. If you are a student or at an early stage of your career, I intend to present a career option to you and generate curiosity, so that you will research further based on the material listed at the end of the presentation.
Done programmatically, and far more efficiently: a mature test automation regime will allow testing at the 'touch of a button', with tests run overnight when machines would otherwise be idle. Automated tests are repeatable, using exactly the same inputs in the same sequence time and again, something that cannot be guaranteed with manual testing. Automated testing enables even the smallest of maintenance changes to be fully tested with minimal effort. At first glance, it seems easy to automate testing: just buy one of the popular test execution tools, record the manual tests, and play them back whenever you want to. Unfortunately, as those who have tried it have discovered, it doesn't work like that in practice.
Before we go into the details of automation, I would like to highlight that automation is different from testing.

Testing is a skill, and depends on quality test cases. A test case has four attributes. Effectiveness: whether or not it finds defects, or at least whether or not it is likely to find defects. Exemplary: an exemplary test case tests more than one thing, thereby reducing the total number of test cases required. Economic: how economical a test case is to perform, analyze, and debug. Evolvable: how much maintenance effort is required on the test case each time the software changes.

Automation is also a skill, but of a different kind.

Manual vs. automation with respect to the four attributes: whether a test is automated or performed manually affects neither its effectiveness nor how exemplary it is. It doesn't matter how clever you are at automating a test or how well you do it; if the test itself achieves nothing, then the end result is a test that achieves nothing faster. Once implemented, an automated test is generally much more economic, the cost of running it being a mere fraction of the effort to perform it manually. However, automated tests generally cost more to create and maintain.

Roles of the tester vs. the test automator: the person who builds and maintains the artifacts associated with the use of a test execution tool is the test automator. A test automator may or may not also be a tester, and may or may not be a member of a test team. For example, there may be a test team consisting of user testers with business knowledge and no technical software development skills.
Other than efficiency, let's quickly see the other benefits of automation.

Regression testing: in an environment where many programs are frequently modified, the effort involved in performing a set of regression tests should be minimal. A clear benefit of automation is the ability to run more tests in less time, and therefore to make it possible to run them more often. This leads to greater confidence in the system. Attempting to perform a full-scale live test of an online system with, say, 200 users may be impossible, but the input from 200 users can be simulated using automated tests.

Better use of resources: automating menial and boring tasks, such as repeatedly entering the same test inputs, gives greater accuracy as well as improved staff morale, and frees skilled testers to put more effort into designing better test cases to be run. Machines that would otherwise lie idle overnight or at the weekend can be used to run automated tests.
Consistency and repeatability of tests: tests that are repeated automatically will be repeated exactly the same way every time. This gives a level of consistency to the tests which is very difficult to achieve manually. The same tests can be executed on different hardware configurations, using different operating systems, or using different databases. This gives a consistency of cross-platform quality for multi-platform products which is virtually impossible to achieve with manual testing.

Reuse of tests: the effort put into deciding what to test, designing the tests, and building the tests can be distributed over many executions of those tests. Tests which will be reused are worth spending time on to make sure they are reliable.

Earlier time to market: once a set of tests has been automated, it can be repeated far more quickly than it would be manually, so the testing elapsed time can be shortened.

Increased confidence: knowing that an extensive set of automated tests has run successfully, there can be greater confidence that there won't be any unpleasant surprises when the system is released (provided that the tests being run are good, effective tests!).
As agile development becomes more prevalent, automation becomes more important. Continuous integration is test automation; regression tests are run every day, if not more often. The automation also needs to be responsive to change, just as agile development is, so the testware architecture is more critical. Test automation is successful in traditional as well as agile development, but agile development cannot succeed without test automation.
Let's look at the test activities, because these are the activities that we may want to automate.

Identify: determine 'what' can be tested. This could be done in parallel with the development activity. Design: determine 'how' to test. Test case design will produce a number of tests comprising specific input values, expected outcomes, and any other information needed for the test to run, such as environment prerequisites. Build: implement test scripts, test inputs, test data, and expected outcomes for comparison. Execute: execute the test cases. Check: compare test case outcomes to expected outcomes.

As shown here, the first two test activities, identifying test conditions and designing test cases, are mainly intellectual in nature. The last two activities, executing test cases and comparing test outcomes, are more clerical in nature. It is the intellectual activities that govern the quality of the test cases. The clerical activities are particularly labor intensive and are therefore well worth automating. The activities of test execution and comparison are repeated many times, while the activities of identifying test conditions and designing test cases are performed only once (except for rework due to errors in those activities). Tests are re-run, for example: if a test finds an error in the software; if a test fails for an environmental reason such as incorrect test data being used; or if tests are to be run on different platforms. It is in automating the latter test activities where there is most to gain.
A test script is the data and/or instructions with a formal syntax, used by a test execution automation tool, typically held in a file. A test script can implement one or more test cases, navigation, set-up or clear-up procedures, or verification.

Test scripts that you produce should be properly engineered; writing scripts is much like writing a computer program. Although test scripts cannot be done away with altogether, using different scripting techniques can reduce the size, number, and complexity of scripts. One of the benefits of editing and coding scripts is to reduce the amount of scripting necessary to automate a set of test cases. This is achieved in two ways. One way is to code relatively small pieces of script that each perform a specific action or task that is common to several test cases; each test case that needs to perform one of the common actions can then use the same script. The other way to reduce scripting is to insert control structures into the scripts to make the tool repeat sequences of instructions without having to code multiple copies of the instructions.
Number of scripts: fewer (less than one script for each test case).
Size of scripts: small, with annotation, no more than two pages.
Function: each script has a clear, single purpose.
Documentation: specific documentation for users and maintainers; clear, succinct, and up to date.
Reuse: many scripts reused by different test cases.
Structured: easy to see and understand the structure and therefore to make changes; following good programming practices, with well-organized control constructs.
Maintenance: easy to maintain; changes to the software only require minor changes to a few scripts.
Test verification is the process of checking whether or not the software has produced the correct outcome. This is achieved by performing one or more comparisons between an actual outcome of a test and the expected outcome of that test (i.e. the outcome when the software is performing correctly). Some tests require only a single comparison to verify their outcome, while other tests may require several. For example, a test case that has entered new information into a database may require at least two comparisons: one to check that the information is displayed on the screen correctly, and the other to check that the information is written to the database successfully.

When automating test cases, the expected outcomes have either to be prepared in advance or generated by capturing the actual outcomes of a test run. In the latter case, the captured outcomes must be verified manually and saved as the expected outcomes for further runs of the automated tests. This is called reference testing.

An automated comparison tool, normally referred to as a 'comparator,' is a computer program that detects differences between two sets of data. For test automation, this data is usually the outcome of a test run and the expected outcome.

Dynamic comparison is the comparison that is performed while a test case is executing. Test execution tools normally include comparator features that are specifically designed for dynamic comparison. Dynamic comparison is perhaps the most popular because it is much better supported by commercial test execution tools, particularly those with capture/replay facilities. It is best used to check things as they appear on the screen, in much the same way as a human tester would do. Dynamic comparison can also be used to help program some intelligence into a test case, to make it act differently depending on the output as it occurs.
For example, if an unexpected output occurs, it may suggest that the test script has become out of step with the software under test, so the test case can be aborted rather than allowed to continue. Letting test cases continue when the expected outcome has not been achieved can be wasteful. Dynamic comparison is also more complex: test cases that use many dynamic comparisons take more effort to create, are more difficult to write correctly (more errors are likely, so more script debugging will be necessary), and will incur a higher maintenance cost.

Post-execution comparison is the comparison that is performed after a test case has run. It is mostly used to compare outputs other than those that have been sent to the screen, such as files that have been created and the updated content of a database. If we simply look at whatever happens to be available after the test case has been executed, this is a passive approach. If we intentionally save particular results that we are interested in during a test case, for the express purpose of comparing them afterwards, this is an active approach to post-execution comparison.
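The abort behavior described above can be sketched in a few lines. A minimal illustration in Python; the step functions and 'screen' strings are invented:

```python
def run_steps(steps, expected_screens):
    """Dynamic comparison sketch: compare each output as it occurs
    and abort as soon as the script is out of step with the
    software, instead of continuing wastefully."""
    log = []
    for step, expected in zip(steps, expected_screens):
        actual = step()
        if actual != expected:
            log.append("ABORT: expected %r, got %r" % (expected, actual))
            return log
        log.append("ok: " + actual)
    log.append("PASS")
    return log

# Usage: the second 'screen' does not match, so the run aborts early.
log = run_steps(
    [lambda: "login page", lambda: "home pge"],
    ["login page", "home page"],
)
```

The early return is the intelligence being programmed in: later steps never run against a screen the script was not written for.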
When a test case requires one or more post-execution comparisons, it is usually a different tool that performs them. In this situation the test execution tool may not run the post-execution comparator(s) of its own accord, so we will have to run the comparator(s) ourselves. Figure 4.1 shows this situation in terms of the manual and automated tasks necessary to complete a set of 'automated' test cases. Figure 4.1 does not look much like efficient automated testing, and indeed it is not. It would be nice if the test execution tool were responsible for running the comparator, but unless we tell it to do so, and tell it how to do so, it is not. To make the test execution tool perform the post-execution comparisons we will have to specifically add the necessary instructions to the end of the test script. This can amount to a significant amount of work, particularly if there are a good number of separate comparisons to be performed.
Even when we have added the instructions to perform the post-execution comparison we may not have solved the whole problem. Figure 4.2 shows why. The test execution tool will probably be able to tell us that the test case ran successfully (or not) but it may not tell us anything about the results of the post-execution comparisons. Assessing the results of the post-execution comparison is then a manual task. We have to look in two places to determine the final status of the test case run: the execution tool's log or summary report and the output from the comparator tool(s). In an ideal world the interface between the test execution tool and the post-execution comparators would be seamless, but there is usually a gap that we have to fill ourselves.
Testware is the term we use to describe all of the artifacts required for testing, including documentation, scripts, data, and expected outcomes, and all the artifacts generated by testing, including actual outcomes, difference reports, and summary reports. Architecture is the arrangement of all of these artifacts; that is, where they are stored and used, how they are grouped and referenced, and how they are changed and maintained.

Testware comprises test materials (inputs, scripts, data, documentation, expected outcomes) and test results, which are either products (actual outcomes) or by-products (logs, statistics, reports).
We divide the test materials into logical sets that we call Test Sets. Each Test Set contains one or more test cases. Normally a Test Set would contain a few tens of test cases, but it may contain a few hundred or, at the other extreme, a single test case. A Test Suite is simply a collection of Test Sets, and therefore contains all the test materials required to run the test cases contained within those Test Sets.

There are two alternative ways of managing the configuration of the testware. The method that we favor is for the Testware Sets to be stored in the Testware Library as configuration items (that is, having a version number). The individual testware artifacts that make up the content of each type of set do not have their own version numbers. The effect of this is that whenever anything in the Testware Set is changed, a new version of the Testware Set is created containing the changed artifacts and the unchanged artifacts.