Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is not due to technical factors but to management issues. Dot Graham describes the most important management issues you must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation. Dot explains how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts leading to success, and what return on investment means in automated testing and what you can realistically expect. Dot also reviews the key technical issues that can make or break the automation effort. Come away with an example set of automation objectives and measures, and a draft test automation strategy that you can use to plan or improve your own automation.
This document summarizes a tutorial on management issues related to test automation. The tutorial covered the following key points:
1. It discussed responsibilities for test automation, suggesting testers design tests and select tests for automation, while automators implement automated tests at the request of testers.
2. It emphasized starting with a pilot automation project to work out the best processes for an organization and gain experience before a full rollout. Lessons from example pilot projects were presented.
3. Objectives for test automation efforts were discussed. Good objectives focus on effectiveness rather than just efficiency, such as ensuring repeatability of regression tests. A test automation objectives exercise was included to evaluate different potential objectives.
4. Return on investment in automated testing was discussed: what it means and what can realistically be expected.
The document discusses planning and managing test automation. It covers responsibilities of testers and automators, the importance of a pilot project to establish processes and standards, and setting objectives for test automation efforts. Good objectives include demonstrating value through a pilot, gaining experience with a tool, and establishing internal standards and conventions. Measures like maintenance effort and the number of defects found are also discussed for assessing automation efforts.
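The kind of ROI reasoning described above can be made concrete with a simple break-even calculation. The following is an illustrative sketch, not a model from the tutorial; all numbers and names are hypothetical:

```python
import math

def breakeven_runs(build_cost, maintain_per_run, manual_per_run, auto_per_run):
    """Return the number of test runs after which automation pays back its
    build cost, or None if the per-run saving never recovers it.
    All arguments are effort in the same unit (e.g. hours)."""
    saving_per_run = manual_per_run - (auto_per_run + maintain_per_run)
    if saving_per_run <= 0:
        return None  # under these assumptions, automation never pays back
    return math.ceil(build_cost / saving_per_run)

# Hypothetical example: 40 hours to automate a suite that costs 4 hours to
# run manually, with 0.5 h maintenance and 0.1 h execution per automated run.
print(breakeven_runs(40, 0.5, 4, 0.1))  # 12 runs to break even
```

The point of such a model is not precision but expectation-setting: it makes explicit that maintenance effort, one of the measures the tutorial highlights, can erase the savings entirely.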
Successful Test Automation: A Manager’s ViewTechWell
Many organizations invest substantial time and effort in test automation but do not achieve the significant returns they expected. Some blame the tool they used; others conclude test automation just doesn't work in their situation. The truth, however, is often very different. These organizations are typically doing many of the right things but they are not addressing key issues that are vital to long-term test automation success. Describing the most important issues that you must address, Mark Fewster helps you understand and choose the best approaches for your organization—no matter which automation tools you use. We’ll discuss both management issues—responsibilities, automation objectives, and return on investment—and technical issues—testware architecture, pre- and post-processing, and automated comparison techniques. If you are involved with managing test automation and need to understand the key issues in making test automation successful, join Mark for this enlightening tutorial.
Test Automation Patterns: Issues and SolutionsTechWell
Automating system level test execution can result in many problems. It is surprising to find that many people encounter the same problems, yet they are not aware of common solutions that have worked well for others. These problem/solution pairs are called “patterns.” Seretta Gamba recognized the commonality of these test automation issues and their solutions and, together with Dorothy Graham, has organized them into Test Automation Patterns. Although unit test patterns are well known, Seretta and Dorothy’s patterns address more general issues. They cover management, process, design, and execution patterns to help you recognize common test automation issues and show you how to identify appropriate patterns to solve the problems. Issues such as No Previous Automation, High ROI Expectations, and High Test Maintenance Cost are addressed by patterns such as Maintainable Testware, Tool Independence, and Management Support. Laptop required (with USB access). An offline version of the wiki will be available to copy to your laptop from a USB stick to use during the session.
When implementing test automation, many people encounter problems: where to start with automation, high maintenance costs for the automated tests, or unrealistic management expectations. The good news is that solutions to these problems exist and have been effectively used by many. A “pattern” is a general reusable solution to a commonly occurring problem. Patterns have been popular in software development for many years, but they are not commonly recognized in system-level test automation. Dorothy Graham shares a collection of common problems (issues) and their solutions (patterns) which she and others are now developing as a wiki. To help resolve typical issues, Dot gives you a brief guided tour of some patterns—from Maintainable Testware and Domain-Driven Testing to Fail Gracefully and Kill the Zombies. Dot helps you recognize test automation issues and shows you how to identify appropriate patterns to help solve them.
Atmosphere 2016 - Berk Dulger - DevOps Tactical Adoption TheoryPROIDEA
DevOps describes each software development step as a repeatable, automatable, and deterministic process, removing the error-prone human factor for the first time in the history of software development. The model defines the entire value chain from concept to concrete product and is an evolutionary endpoint for software development models and the agile movement. But there is a problem with the concept: it is easy to describe and hard to implement.
DevOps Tactical Adoption Theory tries to make the transition process as smooth as possible. It hypothesizes that each step toward DevOps maturity should deliver visible business value, strengthening management and team commitment for the next step. The innovative idea is that tools and processes need not be added to the stack sequentially from beginning to end; instead, each addition is chosen for the benefit it brings.
The theory encourages practitioners to apply each step one by one and then realize its benefits in their projects. Each step is thus tested for utility, validating the method for the steps that follow. In contrast to previous adoption models, this model specifies concrete activities rather than general statements.
The theory is built on the claim that many DevOps transition projects are considered problematic, impractical, or even unsuccessful, which has turned the concept into a goal rather than a technique. The theory consists of different areas of interest describing various actions on a schema.
The session demonstrates “DevOps Tactical Adoption Theory” with a focus on the Delivery Pipeline/Testing Practices section, "Continuous Testing in DevOps".
Building an AppSec Program From the Ground Up: An Honest Retrospectivejtmelton
An honest retrospective of the last ~2 years building an appsec program from scratch: the good, the bad, and the ugly, with an eye toward immediate applicability of the lessons learned, and wrapping up with observations and beliefs about what I would do if I had it to do all over again.
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Exploratory Testing Champions by Henrik Andersson. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Better Test Designs to Drive Test Automation ExcellenceTechWell
Test execution automation is often seen as a technical challenge: a matter of applying the right technology, tools, and smart programming talent. However, such efforts and projects often fail to meet expectations, with results that are difficult to manage and maintain, especially for large and complex systems. Hans Buwalda describes how the choices you make for designing tests can make or break a test automation project. Join Hans to discover why good automated tests are not the same as the automation of good manual tests and how to break down tests into modules (building blocks) in which each has a clear scope and purpose. See how to design test cases within each module to reflect that module's scope and nothing more. Hans explains how to tie modules together with a keyword-based test automation framework that separates the automation details from the test itself to enhance maintainability and improve ROI.
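The keyword-based idea in the abstract above can be sketched in miniature: keywords map to small action functions, and a test becomes pure data that testers can write without touching the automation code. This is a hypothetical toy, not Hans Buwalda's framework; all names are invented:

```python
# Shared state for the toy system under test.
accounts = {}

# Action functions: the "automation details" hidden behind keywords.
def open_account(name):
    accounts[name] = 0

def deposit(name, amount):
    accounts[name] += int(amount)

def check_balance(name, expected):
    assert accounts[name] == int(expected), \
        f"{name}: got {accounts[name]}, expected {expected}"

# The keyword table ties human-readable names to the actions.
KEYWORDS = {
    "open account": open_account,
    "deposit": deposit,
    "check balance": check_balance,
}

def run(test):
    """Execute a test expressed as (keyword, *args) rows."""
    for keyword, *args in test:
        KEYWORDS[keyword](*args)

# The test itself is data: maintainable without automation knowledge.
run([("open account", "alice"),
     ("deposit", "alice", "100"),
     ("check balance", "alice", "100")])
```

The separation is the point: if the application's interface changes, only the action functions change; the test tables survive, which is where the maintainability and ROI gains come from.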
The document introduces various agile test tools. It begins by explaining how agile methodologies like Scrum differ from traditional development in requiring testing throughout the process. It then defines terms like test-driven development, acceptance testing, and behavior-driven development. The bulk of the document describes test tools in two categories: those that describe requirements and tests using domain-specific languages, and those for executing tests. Tools covered include RSpec, FIT, FitNesse, Cucumber, Robot Framework, Selenium, and others. Advantages of agile test tools are discussed, along with challenges to adopting new tools and techniques. Links and books for further resources are provided at the end.
- Understand the principles behind the agile approach to software development
- Differentiate between the testing role in agile projects compared with the role of testers in non-agile projects
- Positively contribute as an agile team member focused on testing
- Appreciate the challenges and difficulties associated with the non-testing activities performed in an agile team
- Demonstrate a range of soft skills required by agile team members
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...IBM Danmark
Nigel Hopper discusses CICS's journey from waterfall to agile development. Over the past decade, CICS has gradually adopted agile principles and tooling like Rational Team Concert. Initially, adoption was challenging due to cultural and process issues. However, benefits have included earlier defect detection, improved transparency and collaboration, and reduced post-release work. Recent improvements include direct customer input, portfolio planning, and further agile practices. RTC has enabled a more integrated development environment. While progress has been made, further agile adoption remains ongoing.
Agile Testing – embedding testing into agile software development lifecycle Kari Kakkonen
My presentation on Agile Testing, including a tuning concept and a case study of agile testing choices in a project, held 16 of June, 2014 at a customer internal seminar.
Presentation (animated) on Agile vs Iterative vs Waterfall models in SDLC.
Detailed comparison across Process, Planning, Execution and Completion.
#Cricket Analogy#
Waterfall (Test Match) vs Iterative (ODI) vs Agile (T20)
#Waterfall: Test Match format - strategic, phase by phase like innings by innings. A game for specialists; slow and steady.
#One Day (ODI) format: strategic approach across the first 10, middle, and slog overs. A mix of specialists and all-rounders; result oriented.
#T20 format: lively, dynamic, full of action; a game for all-rounders. Changes with every over. Highly result oriented.
System-Level Test Automation: Ensuring a Good StartTechWell
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
In this chapter, we will introduce you to the fundamentals of testing: why testing is needed; its limitations, objectives, and purpose; the principles behind testing; the process that testers follow; and some of the psychological factors that testers must consider in their work.
Software testing 2012 - A Year in ReviewJohan Hoberg
The document summarizes software testing trends in 2012. Key points include: Google introduced "Testing 2.0" which focuses on risk assessment and reducing probabilities of bugs; good test automation requires caring about results and addressing failed tests; context-driven testing emphasizes a risk-based approach and looking at multiple contexts rather than one school of thought; mindless automation and scripted manual regression testing are not effective at finding bugs; and customer involvement is important for testing in agile projects.
Vipul Kocher - Software Testing, A Framework Based ApproachTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Software Testing, A Framework Based Approach by Vipul Kocher. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Continuous Deployment and Testing Workshop from Better Software WestCory Foy
In this workshop from the 2015 SQE Better Software West conference, Cory Foy details the Continuous Paradigm companies are embracing - including Continuous Integration, Continuous Deployment, and Continuous Testing. This presentation was co-created by Jared Richardson.
The document discusses moving from a "gatekeeper" model of testing, where testing is done separately after development, to a "partner" model where testing is integrated into development and shared responsibility of the team. It provides tips for making this transition, such as fixing problems developers experience with testing, integrating testing into development workflows, and helping testers contribute to other parts of development to become true partners. The overall message is that testing is most effective when it is easy to do and an inherent part of the development process done collaboratively by the entire team.
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality...Reetesh Gupta
Program testing seeks to show that input values produce acceptable output values, but it can never prove the absence of errors. Proof of correctness instead uses formal logic to prove that if the input values satisfy stated constraints, the output values will satisfy specific properties. Total quality control is a management framework that links different business functions through information sharing to ensure continuous excellence. It involves applying tools such as control charts, histograms, Pareto charts, fishbone diagrams, and scatter diagrams to identify and address quality issues.
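The precondition/postcondition idea behind proof of correctness can be illustrated with runtime assertions. This is a hypothetical example, not one from the document: a formal proof would establish the postcondition for all valid inputs, whereas the assertions below only check it for the inputs actually run:

```python
def integer_sqrt(n):
    # Precondition: the input satisfies its constraint.
    assert isinstance(n, int) and n >= 0, "precondition: n must be a non-negative integer"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: the output satisfies the required property,
    # i.e. r is the largest integer with r*r <= n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # 3
```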
The document outlines a test strategy for an agile software project. It discusses testing at each stage: release planning, sprints, a hardening sprint, and release. Key points include writing test cases during planning and sprints, different types of testing done during each phase including unit, integration, feature and system testing, retrospectives to improve, and using metrics like burn downs and defect tracking to enhance predictability. The overall strategy emphasizes testing early and often throughout development in short iterations.
Presentation at Agile Testing Days 2018
Title: Agile Thinking drives digital Innovation
An integrated agile process model for digital innovation: a combination of effective agile methods and the efficiency of process structure, and a guideline for implementing an agile innovation process in organizations.
A software testing practice that follows the principles of agile software development is called agile testing.
Agile is an iterative development methodology in which requirements evolve through collaboration between the customer and self-organizing teams; agile aligns development with customer needs.
Website: https://www.1solutions.biz/
'Growing to a Next Level Test Organisation' by Tim KoomenTEST Huddle
Many organisations start improving their testing by implementing some kind of line organisation for testing (test expertise center, test service center), hereafter called TEC. Although a good starting point for improvements, in practice the TEC is often not much more than a resource pool of testers, possibly supplying certain templates or giving advice to projects.
A next maturity level for a TEC is to grow into a test factory, responsible for delivering pre-agreed test results. Drawing on experience gathered mostly at a large railroad infrastructure organisation, this presentation shows the path to this next level of test maturity and responsibility. However, it is not a straight path but one with ups and downs and many curves, and getting there isn't easy. It requires change in organisational processes and, more difficult, in the way people work, their behavior, and their attitude.
In my practice, I follow the principles of the Basic Change Method (from Dutch management guru Ben Tiggelaar). BCM combines the most effective insights from cognitive and behavioral science and focuses on making people change their habitual behavior by managing both behavioral intentions and change situations. Change management is usually focused mainly on end results, but the underestimated factor between change plans and desired results is behavior.
Issues that will be discussed are:
• using the TEC as a lever for test improvement
• envisioning the roadmap
• formulating improvement actions
• (management) commitment
• organising the improvement (team)
• planning the change
• implementing the improvements
• changing behavior
• measuring results.
Agile Testing involves testing in the context of Agile development. It is done continuously and collaboratively by all members of the team throughout the development process, rather than just by QA/testers at the end. This helps ensure high quality, useful software is delivered iteratively.
Even today, to the detriment of agile success, most organizational cultures remain delivery date-driven—resulting in delivery teams that are not focused on creating value for the customer. So how can we redirect stakeholders, the business, and the project team to concentrate on delivering the greatest value rather than simply meeting dates? Pollyanna Pixton describes the tools she has used in collaboration sessions to help all stakeholders and team members begin the process of adopting customer-centric agile methods. These tools include laying out an end-to-end customer journey, forming reusable decision filters to help prioritize backlogs, converting features into actionable user stories, and developing a solid process for making group decisions and communicating those decisions. Pollyanna shares questions that product owners and managers can use to define the problem while making sure they don't solve the problem prematurely. After all, that is the responsibility of the delivery team.
Cause-Effect Graphing: Rigorous Test Case DesignTechWell
A tester’s toolbox today contains a number of test case design techniques—classification trees, pairwise testing, design of experiments-based methods, and combinatorial testing. Each of these methods is supported by automated tools. Tools provide consistency in test case design, which can increase the all-important test coverage in software testing. Cause-effect graphing, another test design technique, is superior from a test coverage perspective, reducing the number of test cases needed to provide excellent coverage. Gary Mogyorodi describes these black box test case design techniques, summarizes the advantages and disadvantages of each technique, and provides a comparison of the features of the tools that support them. Using an example problem, he compares the number of test cases derived and the test coverage obtained using each technique, highlighting the advantages of cause-effect graphing. Join Gary to see what new techniques you might want to add to your toolbox.
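The cause-effect relation these techniques start from can be shown with a small, hypothetical example (invented for illustration, not from Gary Mogyorodi's talk): boolean causes, a stated rule, and the expected effect for each combination. Full enumeration is the naive baseline that cause-effect graphing and pairwise methods then prune:

```python
from itertools import product

# Hypothetical causes for a login feature.
causes = ["valid_user", "valid_password", "account_locked"]

def effect(valid_user, valid_password, account_locked):
    # Stated rule: login succeeds iff credentials are valid
    # and the account is not locked.
    return valid_user and valid_password and not account_locked

# Naive baseline: enumerate every combination of causes (2^3 = 8 cases).
test_cases = [(combo, effect(*combo)) for combo in product([True, False], repeat=3)]
for combo, expected in test_cases:
    print(dict(zip(causes, combo)), "->", "login" if expected else "reject")
```

A cause-effect graph of the same rule would collapse many of these eight cases into a much smaller set while preserving coverage of the logical relations, which is the coverage advantage the abstract describes.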
Henrik Andersson - Exploratory Testing Champions - EuroSTAR 2010TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Henrik Andersson by Exploratory Testing Champions. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Better Test Designs to Drive Test Automation ExcellenceTechWell
Test execution automation is often seen as a technical challenge-a matter of applying the right technology, tools, and smart programming talent. However, such efforts and projects often fail to meet expectations with results that are difficult to manage and maintain-especially for large and complex systems. Hans Buwalda describes how the choices you make for designing tests can make-or break-a test automation project. Join Hans to discover why good automated tests are not the same as the automation of good manual tests and how to break down tests into modules-building blocks-in which each has a clear scope and purpose. See how to design test cases within each module to reflect that module's scope and nothing more. Hans explains how to tie modules together with a keyword-based test automation framework that separates the automation details from the test itself to enhance maintainability and improve ROI.
The document introduces various agile test tools. It begins by explaining how agile methodologies like Scrum differ from traditional development in requiring testing throughout the process. It then defines terms like test-driven development, acceptance testing, and behavior-driven development. The bulk of the document describes test tools in two categories: those that describe requirements and tests using domain-specific languages, and those for executing tests. Tools covered include RSpec, FIT, FitNesse, Cucumber, Robot Framework, Selenium, and others. Advantages of agile test tools are discussed, along with challenges to adopting new tools and techniques. Links and books for further resources are provided at the end.
- Understand the principles behind the agile approach to software development
- Differentiate between the testing role in agile projects compared with the role of testers in non-agile projects
- Positively contribute as an agile team member focused on testing
- Appreciate the challenges and difficulties associated with the non-testing activities performed in an agile team
- Demonstrate a range of soft skills required by agile team members
CICS TS for z/OS, From Waterfall to Agile using Rational Jazz Technology - no...IBM Danmark
Nigel Hopper discusses CICS's journey from waterfall to agile development. Over the past decade, CICS has gradually adopted agile principles and tooling like Rational Team Concert. Initially, adoption was challenging due to cultural and process issues. However, benefits have included earlier defect detection, improved transparency and collaboration, and reduced post-release work. Recent improvements include direct customer input, portfolio planning, and further agile practices. RTC has enabled a more integrated development environment. While progress has been made, further agile adoption remains ongoing.
Agile Testing – embedding testing into agile software development lifecycle - Kari Kakkonen
My presentation on Agile Testing, including a tuning concept and a case study of agile testing choices in a project, held on June 16, 2014, at a customer internal seminar.
Presentation (animated) on Agile vs. Iterative vs. Waterfall models in SDLC, with a detailed comparison across process, planning, execution, and completion.
Cricket analogy: Waterfall (Test match) vs. Iterative (ODI) vs. Agile (T20)
- Waterfall, the Test match format: strategic, phase by phase like innings by innings; a game for specialists, slow and steady.
- Iterative, the One Day (ODI) format: a strategic approach split into the first ten, middle, and slog overs; a mix of specialists and all-rounders; result oriented.
- Agile, the T20 format: lively, dynamic, full of action; a game for all-rounders; changes with every over; highly result oriented.
System-Level Test Automation: Ensuring a Good Start - TechWell
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, and what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
In this chapter, we will introduce you to the fundamentals of testing: why testing is needed; its limitations, objectives, and purpose; the principles behind testing; the process that testers follow; and some of the psychological factors that testers must consider in their work.
Software testing 2012 - A Year in Review - Johan Hoberg
The document summarizes software testing trends in 2012. Key points include: Google introduced "Testing 2.0" which focuses on risk assessment and reducing probabilities of bugs; good test automation requires caring about results and addressing failed tests; context-driven testing emphasizes a risk-based approach and looking at multiple contexts rather than one school of thought; mindless automation and scripted manual regression testing are not effective at finding bugs; and customer involvement is important for testing in agile projects.
Vipul Kocher - Software Testing, A Framework Based Approach - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Software Testing, A Framework Based Approach by Vipul Kocher. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Continuous Deployment and Testing Workshop from Better Software West - Cory Foy
In this workshop from the 2015 SQE Better Software West conference, Cory Foy details the Continuous Paradigm companies are embracing - including Continuous Integration, Continuous Deployment, and Continuous Testing. This presentation was co-created by Jared Richardson.
The document discusses moving from a "gatekeeper" model of testing, where testing is done separately after development, to a "partner" model where testing is integrated into development and shared responsibility of the team. It provides tips for making this transition, such as fixing problems developers experience with testing, integrating testing into development workflows, and helping testers contribute to other parts of development to become true partners. The overall message is that testing is most effective when it is easy to do and an inherent part of the development process done collaboratively by the entire team.
Unit4 Proof of Correctness, Statistical Tools, Clean Room Process and Quality... - Reetesh Gupta
Program testing seeks to show that input values produce acceptable output values but can never prove the absence of errors. Proof of correctness uses formal logic to prove that if input values satisfy constraints, output values will satisfy specific properties. Total quality control is a management framework that links different business functions through information sharing to ensure continuous excellence. It involves applying tools like control charts, histograms, Pareto charts, fishbone diagrams, and scatter diagrams to identify and address quality issues.
The document outlines a test strategy for an agile software project. It discusses testing at each stage: release planning, sprints, a hardening sprint, and release. Key points include writing test cases during planning and sprints, different types of testing done during each phase including unit, integration, feature and system testing, retrospectives to improve, and using metrics like burn downs and defect tracking to enhance predictability. The overall strategy emphasizes testing early and often throughout development in short iterations.
Presentation given at Agile Testing Days 2018
Title: Agile Thinking drives digital Innovation
An integrated agile process model for digital innovation
Combination of effective agile methods and efficiency of process structure.
Guideline to implement an agile innovation process in organizations
A software testing practice that follows the principles of agile software development is called agile testing.
Agile is an iterative development methodology in which requirements evolve through collaboration between the customer and self-organizing teams; agile aligns development with customer needs.
'Growing to a Next Level Test Organisation' by Tim Koomen - TEST Huddle
Many organisations start improving their testing by implementing some kind of line organisation for testing (test expertise center, test service center), hereafter called TEC. Although a good starting point for improvements, in practice the TEC is often not much more than a resource pool of testers, possibly supplying certain templates or giving advice to projects.
A next maturity level for a TEC is to grow into a test factory, responsible for delivering pre-agreed test results. From the experiences gathered mostly from a large railroad infrastructure organisation, this presentation shows the path to this next level of test maturity and responsibility. However, this is not a straight path, but one with ups and downs and many curves, and getting there isn’t easy. It requires change in organisational processes and, more difficult, in the way people work, their behavior and their attitude.
In my practice, I follow the principles of the Basic Change Method (from Dutch management guru Ben Tiggelaar). BCM is a combination of the most effective insights from cognitive and behavioral science and focuses on making people change their common behavior by management of both behavior intentions and change situations. Usually change management is mainly focused on end results. But the underestimated factor between change plans and desired results is behavior.
Issues that will be discussed are:
• using the TEC as a lever for test improvement
• envisioning the roadmap
• formulating improvement actions
• (management) commitment
• organising the improvement (team)
• planning the change
• implementing the improvements
• changing behavior
• measuring results.
Agile Testing involves testing in the context of Agile development. It is done continuously and collaboratively by all members of the team throughout the development process, rather than just by QA/testers at the end. This helps ensure high quality, useful software is delivered iteratively.
Even today, to the detriment of agile success, most organizational cultures remain delivery date-driven—resulting in delivery teams that are not focused on creating value for the customer. So how can we redirect stakeholders, the business, and the project team to concentrate on delivering the greatest value rather than simply meeting dates? Pollyanna Pixton describes the tools she has used in collaboration sessions to help all stakeholders and team members begin the process of adopting customer-centric agile methods. These tools include laying out an end-to-end customer journey, forming reusable decision filters to help prioritize backlogs, converting features into actionable user stories, and developing a solid process for making group decisions and communicating those decisions. Pollyanna shares questions that product owners and managers can use to define the problem while making sure they don't solve the problem prematurely. After all, that is the responsibility of the delivery team.
Cause-Effect Graphing: Rigorous Test Case Design - TechWell
A tester’s toolbox today contains a number of test case design techniques—classification trees, pairwise testing, design of experiments-based methods, and combinatorial testing. Each of these methods is supported by automated tools. Tools provide consistency in test case design, which can increase the all-important test coverage in software testing. Cause-effect graphing, another test design technique, is superior from a test coverage perspective, reducing the number of test cases needed to provide excellent coverage. Gary Mogyorodi describes these black box test case design techniques, summarizes the advantages and disadvantages of each technique, and provides a comparison of the features of the tools that support them. Using an example problem, he compares the number of test cases derived and the test coverage obtained using each technique, highlighting the advantages of cause-effect graphing. Join Gary to see what new techniques you might want to add to your toolbox.
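To make the coverage-versus-test-count tradeoff behind these combinatorial techniques concrete, here is a small sketch. The parameters, values, and the six-case suite are hypothetical and hand-picked rather than tool-generated; the point is only that all-pairs coverage needs far fewer cases than exhaustive testing.

```python
# Illustrative only: exhaustive vs. pairwise test-case counts for three
# hypothetical parameters.
from itertools import product, combinations

browser = ["Chrome", "Firefox"]
os_     = ["Windows", "Linux", "macOS"]
locale  = ["en", "de"]

exhaustive = list(product(browser, os_, locale))  # every combination
print(len(exhaustive))  # 2 * 3 * 2 = 12 test cases

# A hand-picked 6-case suite that still covers every value *pair*:
suite = [
    ("Chrome", "Windows", "en"), ("Chrome", "Linux", "de"),
    ("Chrome", "macOS", "en"), ("Firefox", "Windows", "de"),
    ("Firefox", "Linux", "en"), ("Firefox", "macOS", "de"),
]

def covers_all_pairs(suite):
    """Check that every value pair from different parameters co-occurs."""
    needed = set()
    for (i, a), (j, b) in combinations(
            enumerate([browser, os_, locale]), 2):
        for va, vb in product(a, b):
            needed.add((i, va, j, vb))
    seen = set()
    for case in suite:
        for (i, va), (j, vb) in combinations(enumerate(case), 2):
            seen.add((i, va, j, vb))
    return needed <= seen

print(covers_all_pairs(suite))  # True: 6 cases cover all 16 pairs
```

With more parameters the gap widens dramatically, which is why pairwise and related techniques can cut suite sizes while keeping strong coverage.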
“People are the most important asset of any organization.” Even though we hear that a lot, leaders and managers actually spend very little time focusing on the people side of testing. The skills and makeup of a test team are important and must be managed and cultivated properly. Individuals are very different and will react differently to various situations. Lloyd Roden describes the “tester’s style analysis questionnaire” and four types of testers—the pragmatist, the facilitator, the analyst, and the pioneer. When we recognize and acknowledge individual differences, we can use the individual’s strengths rather than dwell on the weaknesses. Lloyd examines how conflicts arise and how this style analysis questionnaire can help defuse conflicts to bring out the best in teams. Recruiting can be difficult, too. How do we recognize good testers during interviews? Once again, the style analysis can help. Lloyd provides Seven Top Tips for motivating your team to become more productive. Join Lloyd and take back ideas to help you assemble your most effective team.
Creating Great User Experiences: Tips and Techniques - TechWell
Many software people look at creating great user experiences as a black art, something to guess at and hope for the best. It doesn't have to be that way! Jennifer Fraser explores the key ingredients for great user experience (UX) designs and shares the techniques she employs early, and often, during development. Find out how Jennifer fosters communications with users and devs, and works proactively to ensure true collaboration among UX designers and the rest of the team. Whether your team employs a formal agile methodology or not, Jennifer asserts that you need an iterative and incremental approach for creating great UX experiences. She shares her toolkit of communication techniques (blue-sky brainstorming sessions, structured conversation, and more) to use with different personality types and describes which types may approach decisions objectively versus empathetically. Leave with examples of UX design methods (personas, use scenarios, and user stories) to get you started on your current and upcoming projects.
Dealing with Estimation, Uncertainty, Risk, and Commitment - TechWell
Software projects are known to have challenges with estimation, uncertainty, risk, and commitment—and the most valuable projects often carry the most risk. Other industries also encounter risk and generate value by understanding and managing that risk effectively. Todd Little explores techniques used in a number of risky businesses—product development, oil and gas exploration, investment banking, medicine, weather forecasting, and gambling—and shares what those industries have done to manage uncertainty. With studies of software development estimations and uncertainties, Todd discusses how software practitioners can learn from a better understanding of uncertainty and its dynamics. In addition, he introduces techniques and approaches to estimation and risk management including utilizing real options and one of its key elements—understanding commitment. Take away a better understanding of the challenges of estimation and what software practitioners can do to better manage estimation, risks, and their commitments.
Make the Cloud Less Cloudy: A Perspective for Software Development Teams - TechWell
With so many technologies branded as “cloud” products, it can be difficult to distinguish good technology from good marketing. The resulting confusion complicates the work of software development teams who are trying not only to architect software effectively but also trying to accelerate building, testing, and delivering software. To cut through this confusion, Bill Wilder defines key cloud terms, compares the different types of clouds, and drills into concrete examples of specific cloud services. Introducing several software architecture concepts and patterns, Bill illustrates how to position applications to run reliably, at high scale (if needed), and with maximum cost efficiency on modern cloud platforms. Specific examples are drawn from the Windows Azure and Amazon cloud platforms, though the concepts are generally applicable. Leave with an understanding of relevant cloud concepts, a better idea of how moving to the “cloud” can impact application architecture, and some practical ideas for exploiting the cloud to improve software development team productivity.
It’s All Fun and Games: Using Play to Improve Tester Creativity - TechWell
The number of software test tools keeps expanding, and individual tools are continuously becoming more advanced. However, there is no doubt that a tester’s most important—yet often neglected and underused—tool is the mind. As testers, we need to employ our intelligence, imagination, and creativity to gain information about the system under test. Humans are biologically designed to learn through play, and even as adults we can exploit this and harness the power of play to encourage and drive our creativity. Christin Wiedemann shows how you and your team can employ games and puzzles to practice and enhance cognitive skills that are especially important to testers including critical thinking, pattern recognition, and the ability to quickly process and understand new information. Not only will play make you a better tester but it will also make testing more fun. Learn to think critically and question your testing assumptions.
Are you overwhelmed by the number of mobile devices you need to test? The device market is large and new devices become available almost weekly. Karen Johnson discusses three key challenges to mobile testing—device selection, user interface, and device and application settings—and leads you through each. Learn how to select which devices to test and how to keep up-to-date in the ever-changing mobile market. Need to learn about user interface testing on mobile? Karen reviews mobile UX concepts and design. Wonder what device settings can impact your mobile app testing? Karen reviews common settings you need to consider. In addition to these mobile testing challenges, Karen guides you on how to conduct a competitive analysis of mobile apps. Learning how to conduct a survey of mobile apps and becoming aware of your competitors’ offerings are important to grow your own mobile knowledge.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms—including Goal-Question-Metric—and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
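Two of the metrics named above have simple definitions worth pinning down. The sketch below is illustrative only, with invented figures; real programs need agreed counting rules for what constitutes a defect and when it was found.

```python
# Minimal sketch of two common test metrics (numbers are made up).

def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE: percentage of all defects removed before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

def defect_density(defects, ksloc):
    """Defects per thousand source lines of code (KLOC)."""
    return defects / ksloc

print(defect_removal_efficiency(90, 10))  # 90.0 -> 90% of defects caught pre-release
print(defect_density(120, 60))            # 2.0 defects per KLOC
```

Trends in such numbers, rather than single snapshots, are what usually inform release-readiness discussions, and is one reason the abstract warns about metrics being used against people.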
Designing Self-maintaining UI Tests for Web Applications - TechWell
This document discusses the challenges of test automation and proposes some solutions. It notes that products are constantly changing, developers do not always communicate changes, and testers spend significant time fixing broken tests rather than writing new ones. It proposes moving testing earlier in the process, embedding testers with developers, and using automation to prevent broken builds. Moving to more automated and maintainable tests over time can reduce maintenance costs and give testers a better understanding of how software is developed. Key steps include treating test automation as software development, improving presentation layer consistency, and better communication between testers and developers.
Sometimes software testers overvalue the adherence to the collective wisdom embodied in organizational processes and the mechanical execution of tasks. Overly directive procedures work—to a point—projecting an impression of firm, clear control. But do they generate test results that are valuable to our stakeholders? Is there a way to orchestrate everyone’s creative contributions without inviting disorganized confusion? Is there a model that leverages the knowledge and creativity of the people doing the work, yet exerts reliable control in a non-directive way? Griffin Jones shares just such a model, describing its prescriptive versus discretionary parts and its dynamic and adaptive nature. Task activities are classified into types and control preferences. Griffin explores archetypes of control and their associated underlying values. Leave with an understanding of how you can leverage the wisdom and creativity of your people to make your testing more valuable and actionable.
Test Management for Cloud-based Applications - TechWell
Because the cloud introduces additional system risks—Internet dependencies, security challenges, performance concerns, and more—you, as a test manager, need to broaden your scope and update your team’s practices and processes. Ruud Teunissen shares a unique approach that directly addresses more than 140 new testing concerns and risks you may encounter in the cloud. Learn how to identify cloud-specific requirements and the risks that can ensue from those requirements. Then, explore the test strategies you'll need to adopt to mitigate those risks. Explore cloud services selection, implementation, and operations, and then dive into the wider scope of test management in the cloud. Take back the ammunition you need to convince senior management that test managers should participate during the cloud services selection to help avoid risks before implementation and, further, why you should work with IT operations to extend test activities after the system goes live.
Agile practices have proven to help software teams develop better software products while shortening delivery cycles to weeks and even days. To respond to the new challenges of cloud computing, mobility, big data, social media, and more, organizations need to extend these agile practices and principles beyond software engineering departments and into the broader organization. Adaptive leadership principles offer managers and development professionals the tools they need to accelerate the move toward agility throughout IT and the enterprise. Jim Highsmith presents the three dimensions of adaptive leadership and offers an integrated approach for helping you spread agile practices across your wider organization. Jim introduces the “riding paradox” and explores the elements of an exploring, engaging, and adaptive leadership style. Learn about the good things that can happen when you coherently articulate why agility is so critical today and then follow up with a plan of action. Find out how to build a continuous delivery capability within your company, at the team, department, and organization levels.
Data Collection and Analysis for Better Requirements - TechWell
According to studies, 64 percent of features in systems are rarely—or never—used. How does this happen? Today, the work of eliciting the customers' true needs, which often remains elusive, can be enhanced using data-driven requirements techniques. Brandon Carlson describes why traditional requirements analysis is so difficult and presents a set of seven data collection approaches and analysis techniques you can employ on your projects right away. Learn how to instrument existing applications and develop new requirements based on operational profiles of the current system. Learn to use A/B testing—a technique for trying out and analyzing alternative implementations—on your current system to determine which new features will deliver the most business value. With these tools at hand, you can help users and business stakeholders decide the best approaches and new features to meet their real needs. Now is the time to take the guesswork out of requirements and get the facts.
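As a rough sketch of the kind of analysis behind A/B testing, the snippet below compares two hypothetical conversion rates with a standard two-proportion z-test. The traffic numbers are invented, and real experiments would also consider sample-size planning and multiple-comparison effects.

```python
import math

# Hypothetical A/B data: (conversions, visitors) for each variant.
a_conv, a_n = 120, 2400   # variant A: 5.0% conversion
b_conv, b_n = 156, 2400   # variant B: 6.5% conversion

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two conversion rates."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p2 - p1) / se

z = two_proportion_z(a_conv, a_n, b_conv, b_n)
# Two-sided p-value from the normal CDF:
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p_value, 4))
```

With these made-up numbers the p-value falls below 0.05, which is the sort of evidence a team would use to argue that variant B delivers more business value.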
The document discusses six ways to improve agile success through building trust, giving ownership, letting teams make decisions, fixing processes, having the right people, and emphasizing integrity. The key points are:
1. To build trust, leaders should remove fear, validate others, accept risks together, use team-based measurements, and lead authentically.
2. To give ownership, leaders should ask questions, use a "macro leadership cube" of standing back and stepping up, and not take back ownership from teams.
3. To let teams make decisions, decisions should be based on business value through collaborative conversations, and teams should be empowered to decide for themselves.
4. To fix processes
Risk-based Testing: Not for the Fainthearted - TechWell
If you’ve tried to make testing really count, you know that “risk” plays a fundamental part in deciding where to direct your testing efforts and how much testing is enough. Unfortunately, project managers often do not understand or fully appreciate the test team’s view of risk—until it is too late. Is it their problem or is it ours? After spending a year on a challenging project that was set up as purely a risk mitigation exercise, George Wilkinson saw first-hand how risk management can play a vital role in providing focus for our testing activities, and how sometimes we as testers need to improve our communication of those risks to the project stakeholders. George provides a foundation for anyone who is serious about understanding risk and employing risk-based testing on projects. He describes actions and behaviors we should demonstrate to ensure the risks are understood, thus allowing us to be more effective during testing.
Test Automation for Packaged Systems: Yes, You Can! - TechWell
This document summarizes a presentation on test automation for packaged systems. The presentation was given by Chris Bushell of ThoughtWorks and covered challenges with testing commercial off-the-shelf systems, lessons learned from automating tests for an Oracle Siebel CRM system using Selenium and Sikuli, and approaches used to test a Remedy ticket management system, including modeling windows, abstractions like page objects, and using APIs to manage test data.
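The page-object abstraction mentioned above can be sketched as follows. The driver here is a stub standing in for a real Selenium WebDriver, and the page, locators, and ticket text are hypothetical; only the pattern (locators and interactions hidden behind a page class) reflects the approach described.

```python
# Sketch of the page-object pattern (illustrative; not the presenters' code).

class FakeDriver:
    """Stands in for selenium.webdriver; records actions, serves canned text."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))
    def text_of(self, locator):
        return "Ticket #42 created"

class NewTicketPage:
    """All locators live here; tests never see them."""
    SUMMARY = "id=ticket-summary"   # hypothetical locators
    SUBMIT  = "id=submit-btn"
    BANNER  = "css=.confirmation"

    def __init__(self, driver):
        self.driver = driver
    def create_ticket(self, summary):
        self.driver.type(self.SUMMARY, summary)
        self.driver.click(self.SUBMIT)
        return self.driver.text_of(self.BANNER)

# The test reads at the level of the business action, not the UI:
page = NewTicketPage(FakeDriver())
confirmation = page.create_ticket("Printer on fire")
print(confirmation)  # "Ticket #42 created"
```

When the UI of a packaged system changes with an upgrade, only the locators inside the page class need updating, not every test that creates a ticket.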
Agile at Scale with Scrum: The Good, the Bad, and the Ugly - TechWell
Come hear the story of how a business unit at one of the world's largest networking companies transitioned to Scrum in eighteen months. The good: more than forty teams in one part of the company moved quickly and are going gangbusters. The bad: an adjacent part failed in its transition. The ugly: if you're in a large company with globally distributed teams, it's not hard to torpedo Scrum adoption. Steve Spearman and Heather Gray describe Scrum adoption challenges for a multi-million line, monolithic system developed across multiple locations worldwide. They share the techniques and tools that helped them implement Scrum in just two project cycles and the reasons part of the company failed to make the leap. Find out how they gained critical executive support, moved from component-based specialization to Scrum's generalizing specialists, found enough ScrumMasters, adjusted to twelve-hour time differences, and dealt with classical PMOs. Take away concrete approaches to improve your enterprise agile conversion, and an appreciation for problems you will surely face.
Many organizations never achieve the significant benefits that are promised from automated test execution. Surprisingly often, this is due not to technical factors but to management issues, especially at the system testing level. Dot Graham describes the most important management concerns the test manager must address for test automation success, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use or your current state of automation. Dot explains how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts leading to success, and why return on investment can be dangerous and what you can realistically expect. Dot also reviews a few key technical issues that can make or break the automation effort. Come away with an example set of automation objectives and measures, and a draft test automation strategy that you can use to plan or improve your own automation.
When you're responsible for testing, it's almost a given that you will find yourself in a situation in which you feel alone and out in the cold. Management’s commitment to testing might be lacking, your colleagues in the project might be ignoring you, your team members might lack motivation, or the automated testing you had planned is more complicated and difficult than you anticipated. You feel you can't test enough, and you will be blamed for post-release quality problems. Hans Buwalda shares a number of chilly situations and offers suggestions for overcoming them, based on his experiences worldwide in large projects. Specifically, Hans focuses on management commitment, politics, project dependencies, managing expectations, motivating team members, testing and automation difficulties, and dealing with overwhelming numbers of day-to-day problems. Take away more than forty-five tips and approaches to use when temperatures drop on you.
A number of test automation ideas that at first glance seem very sensible actually contain pitfalls and problems that you should avoid. Dot Graham describes five of these “intelligent mistakes”—automated tests will find more bugs more quickly; spending a lot on a tool must guarantee great benefits; it’s necessary to automate all of our manual tests; tools are expensive so we have to show a substantial return on investment; and testing tools must be used by the testers. Dot points out that automation doesn’t find bugs; tests do. Good automation does not come out of the box and is not automatic. Automating everything may not give you better (or faster) testing. Determining the actual rate of return is not only surprisingly difficult but may actually be harmful. Turning testers into test automators may waste their skills and talents. Join Dot for a rousing discussion of intelligent mistakes—so you can be smart enough to avoid them.
In chess, the word blunder means a very bad move by someone who should know better. Even though functional test automation has been around for a long time, people still make some very bad moves and serious blunders. The most common misconception in automation is thinking that manual testing is the same as automated testing. And this misguided thinking accounts for most of the blunders in system level test automation. Dorothy Graham takes you on a tour of these blunders, including the Stable-Application Myth (you can’t start automating until the application is stable), Inside-the-Box Thinking (automating only the obvious test execution), and the Project/Non-Project Dilemma (failing to treat automation like a project by not funding or resourcing it and treating automation as only a project). Other blunders include Testing–Tools–Test, Silver Bullet, Automating the Wrong Thing, Who Needs GPS, How Hard Can It Be, and Isolationism. New skills, approaches, and objectives are needed or you’ll end up with inefficient automation, high-maintenance costs, and wasted effort. Join Dot to discover how you can avoid these common blunders and achieve valuable test automation.
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation - TechWell
Some test automation ideas seem very sensible at first glance but contain pitfalls and problems that can and should be avoided. Dot Graham describes five of these “intelligent mistakes”—1. Automated tests will find more bugs quicker. (Automation doesn’t find bugs, tests do.) 2. Spending a lot on a tool must guarantee great benefits. (Good automation does not come “out of the box” and is not automatic.) 3. Let’s automate all of our manual tests. (This may not give you better or faster testing, and you will miss out on some benefits.) 4. Tools are expensive so we have to show a return on investment. (This is not only surprisingly difficult but may actually be harmful.) 5. Because they are called “testing tools,” they must be tools for testers to use. (Making testers become test automators may be damaging to both testing and automation.) Join Dot for a rousing discussion of “intelligent mistakes”—so you can be smart enough to avoid them.
Why Automation Fails—in Theory and Practice - TechWell
Testers face common challenges in automation; unfortunately, these challenges often lead to subsequent failures. Jim Trentadue explains a variety of automation perceptions and myths: the perception that a significant increase in time and people is needed to implement automation; the myth that, once automation is achieved, testers will not be needed; the myth that scripted automation will serve all the testing needs for an application; the perception that developers and testers can add automation to a project without additional time, resources, or training; and the belief that anyone can implement automation. The testing organization must ramp up quickly on the test automation process and the prep-work analysis that needs to be done, including when to start, how to structure the tests, and which system to start with. Learn how to respond to these common challenges by developing a solid business case for increased automation adoption, engaging manual testers in the testing organization, being technology agnostic, and stabilizing test scripts regardless of application changes.
In chess, the word blunder means a very bad move by someone who should know better. Even though functional test automation has been around for a long time, people still make some very bad moves and serious blunders. The most common misconception in automation is thinking that manual testing is the same as automated testing. And this thinking accounts for most of the blunders in system level test automation. Dorothy Graham takes us on a tour of these blunders, including: the Stable-Application Myth (you can’t start automating until the application is stable), Inside-the-Box Thinking (automating only the obvious test execution), the Project/Non-Project Dilemma (failing to treat automation like a project by not funding or resourcing it, and treating automation as only a project). Other blunders include Testing-Tools-Test, Silver Bullet, Automating the Wrong Thing, Who Needs GPS, How Hard Can It Be, and Isolationism. Different skills, approaches, and objectives are needed or you’ll end up with inefficient automation, high maintenance costs, and wasted effort. Join Dot to discover how you can avoid these common blunders and achieve valuable test automation.
Top Ten Tips for Tackling Test Automation Webinar Presentation (Inflectra)
Inflectra and Checkpoint Technologies co-hosted the webinar: Top Ten Tips for Tackling Test Automation. In this webinar, Adam Sandman (Inflectra) and Bob Crews (Checkpoint Technologies) explored the challenges surrounding test automation and offered their tips on overcoming them.
Find the recording of the Webinar on our YouTube channel: https://www.youtube.com/watch?v=vY1MbW4qWnQ
Webinar Agenda:
-Top 10 challenges of test automation with impact and solutions
-Impacts: potential risks if challenges are not overcome
-Solutions: tips to overcoming the challenges
-Automated functional testing
-Criteria of an Automation Assessment
-Addressing several challenges with Inflectra's Spira and Rapise
Webinar Presenters:
Adam Sandman is the Founder and CEO of Inflectra. He has been working in the IT industry for the past 25+ years. His areas of expertise span software architecture to agile development, software testing, test automation, and project management. He is interested in technology, business, and enabling people to follow their passions. At Inflectra, Adam is responsible for researching the tools, technologies, and processes in the software testing and quality assurance space. Adam is a prolific speaker whose speaking engagements range from StarEast, and Eurostar to STPcons, DevGeekWeek, Swiss Testing Day, NDIA, STARCanada, TestingMind, Agile DevOps West, StarWest, testCon, JFTL, and many more.
Bob Crews, Co-Founder and CEO of Checkpoint Technologies, is a consultant with 34 years of IT experience in full life-cycle development and software testing. Bob and his organization provide services and solutions focused on QA with a concentration in functional, performance and application security testing. He’s assisted organizations such as Harvard University, Raymond James, the FBI, and the Department of Veterans Affairs in developing teams, processes, and solutions to help organizations deliver higher quality software faster. He’s consulted for over 290 organizations on QA, effective software testing, strategic test planning, enhanced test automation, and risk-based testing. He’s exceptionally enthusiastic about the future of IT and software testing and believes “The best is yet ahead!”
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More (TechWell)
Large-scale testing projects severely stress “normal” testing practices. This can result in a number of less than optimal results. A number of innovative ideas and concepts have emerged to support industrial-strength testing of large and complex projects—some successful and others not so successful. Hans Buwalda shares his experiences and the strategies he's developed over the years for large testing on large projects. He describes the possibilities and pitfalls of outsourcing test automation. Learn how to design tests specifically for automation, and how to successfully incorporate keyword testing. The automation discussion will include virtualization and cloud options, how to deal with numerous versions and configurations common to large projects, and how to handle the complexity added by mobile devices. Hans’ information is based on his nineteen years of worldwide experience with testing and test automation involving large projects with test cases executing continuously for many weeks on multiple machines.
This document discusses automation testing. It begins by defining automation testing and listing its benefits, which include saving time and money, improving accuracy, and increasing test coverage. It then covers levels of automation testing, frameworks, approaches like record and playback, modular scripting, and keyword-driven testing. The document also discusses the automation testing lifecycle, how to choose a testing tool, types of tools, when to automate and who should automate, supporting practices, and skills needed for automation testing.
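The keyword-driven approach named above can be sketched in a few lines: a test is a table of keywords plus arguments, and a small runner dispatches each row to an implementation function. This is a minimal illustration only; the login domain, keyword names, and credentials below are all invented.

```python
# Minimal keyword-driven test runner. The "application under test" is a
# hypothetical in-memory login service, invented for illustration.

def action_enter_username(state, name):
    state["username"] = name

def action_enter_password(state, password):
    state["password"] = password

def action_submit(state):
    # Hypothetical rule: the only valid credentials are alice / secret.
    state["logged_in"] = (state.get("username") == "alice"
                          and state.get("password") == "secret")

# The keyword table maps business-readable names to implementations.
KEYWORDS = {
    "enter_username": action_enter_username,
    "enter_password": action_enter_password,
    "submit": action_submit,
}

def run_test(steps):
    """Execute a test expressed as (keyword, *args) rows."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return state

# A test case is now data, maintainable by non-programmers.
login_test = [
    ("enter_username", "alice"),
    ("enter_password", "secret"),
    ("submit",),
]
result = run_test(login_test)
print(result["logged_in"])  # True
```

The point of the pattern is the separation: when the application changes, only the keyword implementations change, not the test tables.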
How to Manage Your Testing Automation Project: TTM Methodology (Ram Yonish)
Many managers and organizations implement automation in their testing process but still feel that the return on investment is low or even negative. Many studies show that the problem stems from mismatched expectations, incorrect identification of the problems the tools are meant to solve, selection of an unsuitable tool, and a flawed implementation process.
The TMM (Testing tools management) methodology addresses exactly these problems. The methodology defines the stages of an automation project, from defining the problem, through selecting and evaluating the tool, to implementation and measuring its effectiveness throughout the project.
Tune Agile Test Strategies to Project and Product Maturity (TechWell)
For optimum results, you need to tune agile project's test strategies to fit the different stages of project and product maturity. Testing tasks and activities should be lean enough to avoid unnecessary bottlenecks and robust enough to meet your testing goals. Exploring what "quality" means for various stakeholder groups, Anna Royzman describes testing methods and styles that fit best along the maturity continuum. Anna shares her insights on strategic ways to use test automation, when and how to leverage exploratory testing as a team activity, ways to prepare for live pilots and demos of the real product, approaches to refine test coverage based on customer feedback, and techniques for designing a production "safety net" suite of automated tests. Leave with a better understanding of how to satisfy your stakeholders’ needs for quality-and a roadmap for tuning your agile test strategies.
Challenges in automation that testers face often lead to subsequent failures. Learn how to respond to these common challenges by developing a solid business case for increased automation adoption: engaging manual testers in the testing organization, being technology agnostic, and stabilizing test scripts regardless of application changes.
DevOps Tactical Adoption Theory tries to make the transition process as smooth as possible. It hypothesizes that each step toward DevOps maturity should bring visible business value, strengthening management and team commitment to the next step. The innovative idea here is that tools and processes need not be added to the stack sequentially from beginning to end, but rather wherever they bring benefit.
The reasoning behind the theory is to encourage practitioners to apply each step one by one and realize the benefits in their projects. Consequently, each step is tested for utility, and the method is validated before further steps are taken. In contrast to previous adoption models, this model prescribes concrete activities rather than general statements.
The theory is built on the observation that many DevOps transition projects are considered problematic, impractical, or even unsuccessful, causing the concept to become a goal rather than a technique. The theory consists of different areas of interest describing various actions on a schema.
The session demonstrates “DevOps Tactical Adoption Theory” with a focus on the delivery pipeline and testing practices, in a section titled “Continuous Testing in DevOps.”
The document discusses how to make automation an asset to software testing organizations by outlining the advantages and disadvantages of manual versus automated testing, providing examples of what types of tests are best suited for automation, and describing best practices for developing an effective test automation process and addressing common myths about automation. It emphasizes that automation can increase testing efficiency and coverage but requires proper planning, resources, and maintenance to be successful.
How To Transform the Manual Testing Process to Incorporate Test Automation (Ranorex)
Although most testing organizations have some automation, it's usually a subset of their overall testing efforts. Typically the processes have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process.
Learn how to transform your manual testing procedures and how to incorporate test automation into your overall testing process.
In this quality assurance training session, you will get an introduction to automation testing. Topics covered in this course are:
• Introduction
• Why Automated Testing?
• What can I Automate?
• Test Automation Process
• Automation Tool
• Automation Framework
To know more, visit this link: https://www.mindsmapped.com/courses/quality-assurance/software-testing-quality-assurance-qa-training-with-hands-on-exercises/
Automation Culture: Essential to Agile Success (TechWell)
For organizations developing large-scale applications, transitioning to agile is challenging enough. If your organization has not yet adopted an automation culture, brace yourself for a big surprise because automation is essential to agile success. From the safety nets provided by automated unit and acceptance tests to the automation of build, build verification, and deployment processes, the iterative nature of agile demands a culture of automation across your engineering organization. Geoff Meyer shares lessons learned in adopting a test automation culture as the Dell Enterprise Systems Group simultaneously adopted Scrum and agile processes across its entire software product portfolio. Learn to address the practical challenges of establishing an automation culture at the outset by ensuring that your organizational makeover incorporates changes to your hiring, staffing, and training practices. Find out how you can apply automation beyond the Scrum team in areas including continuous integration, scale and stress testing, and performance testing.
Similar to Management Issues in Test Automation (20)
Isabel Evans stopped drawing and painting after being told she was not very good at it, which led to a loss of confidence in her creative and professional abilities. However, she realized that attempting creative activities is important for cognitive and emotional development, and that making mistakes and learning from failures allows for growth. By reengaging with failure through art and with support from others, Isabel was able to regain confidence in her abilities and reboot her career. The document discusses different perspectives on failure and the importance of learning from mistakes.
Instill a DevOps Testing Culture in Your Team and Organization (TechWell)
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to access your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture (TechWell)
This document summarizes a half-day tutorial on test design for fully automated build architectures presented by Melissa Benua of mParticle at STAREAST 2018. The tutorial covered guiding principles for test design including prioritizing important and reliable tests, structuring automated pipelines around components, packages, and releases, and monitoring test results through code coverage, flaky test handling, and logging versus counters. It also included exercises mapping test cases to functional boundaries and categories of tests to pipeline stages.
Build Your Mobile App Quality and Test Strategy (TechWell)
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for Success (TechWell)
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
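The Given/When/Then mechanics that Cucumber and SpecFlow formalize can be illustrated in plain Python: step implementations are registered against business-language phrases, and a scenario is executed by matching each line to a step. This is a sketch of the idea only, not the real Cucumber or SpecFlow APIs, and the shopping-cart scenario is invented.

```python
# A pure-Python sketch of BDD step matching; Cucumber/SpecFlow do the same
# thing with feature files and richer tooling.
import re

STEPS = {}

def step(pattern):
    """Register a step implementation against a Gherkin-like phrase."""
    def register(fn):
        STEPS[re.compile(pattern)] = fn
        return fn
    return register

@step(r"an empty cart")
def given_empty_cart(ctx):
    ctx["cart"] = []

@step(r"I add \"(\w+)\" to the cart")
def when_add(ctx, item):
    ctx["cart"].append(item)

@step(r"the cart contains (\d+) items?")
def then_count(ctx, n):
    assert len(ctx["cart"]) == int(n), ctx["cart"]

def run_scenario(lines):
    ctx = {}
    for line in lines:
        # Strip the Given/When/Then/And keyword before matching.
        phrase = line.split(" ", 1)[1]
        for pattern, fn in STEPS.items():
            m = pattern.fullmatch(phrase)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise LookupError(f"no step matches: {line}")
    return ctx

scenario = [
    'Given an empty cart',
    'When I add "apples" to the cart',
    'And I add "bread" to the cart',
    'Then the cart contains 2 items',
]
ctx = run_scenario(scenario)
```

Because the scenario is written in the shared business language, stakeholders can read and review it even though it executes as an automated test.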
Develop WebDriver Automated Tests—and Keep Your Sanity (TechWell)
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
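A major tool for keeping WebDriver suites maintainable is the page-object pattern: tests call intent-level methods, and locators live in one class, so locator churn is absorbed in one place. The sketch below uses a fake driver so it runs without a browser; a real suite would pass a `selenium.webdriver` instance and use its `find_element`/`send_keys` calls instead. The page, element ids, and credentials are invented.

```python
# Sketch of the page-object pattern with a stand-in driver.

class FakeDriver:
    """Stands in for a Selenium WebDriver: records typed text and clicks."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, element_id, text):
        self.fields[element_id] = text

    def click(self, element_id):
        self.clicked.append(element_id)

class LoginPage:
    """Page object: tests talk to intent-level methods, not raw locators.
    If the UI changes, only these constants change, not the tests."""
    USERNAME = "user-field"
    PASSWORD = "pass-field"
    SUBMIT = "login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type_into(self.USERNAME, username)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.clicked)  # ['login-btn']
```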
Eliminate Cloud Waste with a Holistic DevOps Strategy (TechWell)
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOps (TechWell)
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, a multidimensional workforce enablement supported by infrastructure changes, redeveloped collaborations models, and more. From his real world experiences Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead the quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership (TechWell)
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams (TechWell)
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is people with unique skills. Although teams composed entirely of T-shaped people is ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game (TechWell)
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used for simply managing or monitoring. They can help us identify trade-offs between options for what to do next versus punitive or worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams (TechWell)
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from his ability, rather than position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation (TechWell)
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery Process (TechWell)
The document summarizes a presentation about including databases in a continuous integration/delivery process. It discusses treating database code like application code by placing it under version control and integrating databases into the DevOps software development pipeline. This allows databases to be built, tested, and released like other software through continuous integration, delivery, and deployment.
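One common way to treat database code like application code, as the summary above describes, is a numbered-migration scheme: each schema change is a version-controlled script, and a CI step applies any not-yet-applied versions, recording them in the database itself. A minimal sketch using sqlite3, with invented example migrations:

```python
# Numbered, version-controlled schema migrations applied idempotently,
# as a CI/CD pipeline step might do for each environment.
import sqlite3

# In a real project each entry would live in its own file under version control.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply every migration not yet recorded in schema_version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    for version, sql in sorted(MIGRATIONS):
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # CI step: build the schema from scratch
migrate(conn)   # idempotent: re-running applies nothing new
columns = [row[1] for row in conn.execute("PRAGMA table_info(customer)")]
print(columns)  # ['id', 'name', 'email']
```

Because the applied versions are recorded in the database, the same pipeline step can bring a fresh test database, a staging copy, or production to the current schema.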
Mobile Testing: What—and What Not—to Automate (TechWell)
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for Success (TechWell)
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent place with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile Transformation (TechWell)
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can be tracked by the minute and packages at every stop, and customers now expect this same customer service model should exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
Scale: The Most Hyped Term in Agile Development Today (TechWell)
Scrum is everywhere. More than 90 percent of agile teams use it. But for many organizations wanting to scale agile, one team using Scrum is not enough. Dave West says the Nexus Framework, created by Ken Schwaber, the co-creator of Scrum, provides an exoskeleton for Scrum. Nexus allows multiple teams to work together to produce an integrated increment regularly. It addresses the key challenges of scaling agile development by adding new yet minimal events, artifacts, and roles to the Scrum framework. Dave discusses Nexus, addresses its boundaries, and explains what else is needed for agile to thrive in an organization. Dave explores how organizations have transitioned to agile, and examines their successes and challenges in implementing Scrum, how they envision scaling with Nexus, and goals for creating a Scrum Studio.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
1. TD AM Tutorial
4/30/13 8:30AM

Management Issues in Test Automation
Presented by: Dorothy Graham
Software Test Consultant

Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Dorothy Graham
In testing for more than thirty years, Dorothy Graham is coauthor of four books—Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation: Case Studies of Software Test Automation. Dot was a founding member of the ISEB Software Testing Board, a member of the working party that developed the first ISTQB Foundation Syllabus, and has served on the boards of conferences and publications in software testing. A popular and entertaining speaker at conferences and seminars worldwide, she has been coming to STAR conferences since the first one in 1992. Dot holds the European Excellence Award in Software Testing. Learn more about Dot at
3. Management Issues in Test Automation
Contents
Session 0: Introduction to the tutorial
Tutorial objectives
What we cover (and don’t cover) today
Session 1: Planning and Managing Test Automation
Responsibilities
Pilot project
Test automation objectives (and exercise)
Return on Investment (ROI)
Session 2: Technical Issues for Managers
Testware architecture
Scripting, keywords and Domain-Specific Test Language (DSTL)
Automating more than execution
Session 3: Final Advice, Strategy and Conclusion
Final advice
Strategy exercise
Conclusion
Appendix (useful stuff)
That’s no reason to automate (Better Software article)
Man and Machine, Jonathan Kohl (Better Software)
Technical vs non-technical skills in test automation
62. "Why automate?"
This seems such an easy question to answer; yet many people don't achieve the success they hoped for. If you are aiming in the wrong direction, you will not hit your target!
This article explains why some testing objectives don't work for automation, even though they may be very sensible goals for testing in general. We take a look at what makes a good test automation objective; then we examine six commonly held—but misguided—objectives for test execution automation, explaining the good ideas behind them, where they fail, and how these objectives can be modified for successful test automation.
Good Objectives for Test Automation
A good objective for test automation should have a number of characteristics. First of all, it should be measurable so that you can tell whether or not you have achieved it.
Objectives for test automation should support testing activities but should not be the same as the objectives for testing. Testing and automation are different and distinct activities.
Objectives should be realistic and achievable; otherwise, you will set yourself up for failure. It is better to have smaller-scale goals that can be met than far-reaching goals that seem impossible. Of course, many small steps can take you a long way!
Automation objectives should be both short and long term. The short-term goals should focus on what can be achieved in the next month or quarter. The long-term goals focus on where you want to be in a year or two.
Objectives should be regularly revised in the light of experience.
Misguided Objectives for Test Automation

Objective 1: Find More Bugs
Good ideas behind this objective:
• Testing should find bugs, so automated testing should find them quicker.
• Since tests are run quicker, we can run more tests and find even more bugs.
• We can test more of the system so we should also find bugs in the parts we weren't able to test manually.
Basing the success of automation on finding bugs—especially the automation of regression tests—is not a good thing to do for several reasons. First, it is the quality of the tests that determines whether or not bugs are found, and this has very little, if anything, to do with automation. Second, if tests are first run manually, any bugs will be found then, and they may be fixed by the time the automated tests are run. Finally, it sets an expectation that the main purpose of test automation is to find bugs, but this is not the case: A repeated test is much less likely to find a new bug than a new test. If the software is really good, automation may be seen as a waste of time and resources.
Regression testing looks for unexpected, detrimental side effects in unchanged software. This typically involves running a lot of tests, many of which will not find any defects. This is ideal ground for test automation as it can significantly reduce the burden of this repetitive work, freeing the testers to focus on running manual tests where more defects are likely to be. It is the testing that finds bugs—not the automation. It is the testers who may be able to find more bugs, if the automation frees them from mundane repetitive work.
The number of bugs found is a misleading measure for automation in any case. A better measure would be the percentage of regression bugs found (compared to a currently known total). This is known as the defect detection percentage (DDP). See the StickyNotes for more information.
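The DDP described above is a simple ratio. The sketch below is illustrative only (the function name and figures are invented, not from the article): DDP is the share of the currently known defects that the testing caught, so the score changes as more defects surface later.

```python
def defect_detection_percentage(found_in_test: int, found_later: int) -> float:
    """DDP: percentage of currently known defects that testing caught.

    The denominator grows as more defects surface after release,
    so DDP is always relative to the total known *so far*.
    """
    total_known = found_in_test + found_later
    if total_known == 0:
        return 0.0
    return 100.0 * found_in_test / total_known

# e.g. regression testing caught 45 bugs; 5 more regression bugs
# escaped into operation, so DDP = 45 / 50 = 90%
print(defect_detection_percentage(45, 5))  # → 90.0
```

Tracking DDP for regression bugs specifically, as the better objective suggests, just means restricting both counts to regression defects.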
Sometimes this objective is phrased in a slightly different way: "Improve the quality of the software." But identifying bugs does nothing to improve software—it is the fixing of bugs that improves the software, and this is a development task.
If finding more bugs is something that you want to do, make it an objective for measuring the value of testing, not for measuring the value of automation.
Better automation objective: Help testers find more regression bugs (so fewer regression failures occur in operation). This could be measured by increased DDP for regression bugs, together with a rating from the testers about how well the automation has supported their objectives.
Objective 2: Run Regression Tests Overnight and on Weekends
Good ideas behind this objective:
• We have unused resources (evenings and weekends).
• We could run automated tests "while we sleep."
At first glance, this seems an excellent objective for test execution automation, and it does have some good points. Once you have a good set of automated regression tests, it is a good idea to run the tests unattended overnight and on weekends, but resource use is not the most important thing.
What about the value of the tests that are being run? If the regression tests that would be run "off peak" are really valuable tests, giving confidence that the main areas of the system are still working correctly, then this is useful. But the focus needs to be on supporting good testing.
It is too easy to meet this stated objective by just running any test, whether it is worth running or not. For example, if you ran the same one test over and over again every night and every weekend, you would have achieved the goal as stated, but it is a total waste of time and electricity. In fact, we have heard of someone who did just this! (We think he left the company soon after.)
Of course, automated tests can be run much more often, and you may want some evidence of the increased test execution. One way to measure this is using equivalent manual test effort (EMTE). For all automated tests, estimate how long it would have taken to run those tests manually (even though you have no intention of doing so). Then each time the test is run automatically, add that EMTE to your running total.
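The EMTE bookkeeping just described amounts to a running sum. A minimal sketch, with invented class, test names, and effort figures purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class EmteTracker:
    """Running total of Equivalent Manual Test Effort (EMTE).

    For each automated test, estimate how long it would take to run
    manually; every automated run of that test adds its estimate to
    the total, even if you would never actually run it by hand.
    """
    manual_estimates_hours: dict = field(default_factory=dict)
    total_emte_hours: float = 0.0

    def register(self, test_id: str, manual_hours: float) -> None:
        self.manual_estimates_hours[test_id] = manual_hours

    def record_run(self, test_id: str) -> None:
        self.total_emte_hours += self.manual_estimates_hours[test_id]

tracker = EmteTracker()
tracker.register("login_regression", 0.5)   # 30 min if run manually
tracker.register("report_generation", 2.0)  # 2 h if run manually
for night in range(5):                      # five unattended overnight runs
    tracker.record_run("login_regression")
    tracker.record_run("report_generation")
print(tracker.total_emte_hours)  # → 12.5
```

Note that EMTE measures execution volume only; as the article stresses, it says nothing about whether the tests run were worth running.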
Better automation objective: Run the most important or most useful tests, employing under-used computer resources when possible. This could be partially measured by the increased use of resources and by EMTE, but should also include a measure of the value of the tests run, for example, the top 25 percent of the current priority list of most important tests (priority determined by the testers for each test cycle).
BETTER SOFTWARE, JULY/AUGUST 2009
Objective 3: Reduce Testing Staff
Good ideas behind this objective:
• We are spending money on the tool, so we should be able to save elsewhere.
• We want to reduce costs overall, and staff costs are high.
This is an objective that seems to be quite popular with managers. Some managers may go even further and think that the tool will do the testing for them, so they don't need the testers—this is just wrong. Perhaps managers also think that a tool won't be as argumentative as a tester!
It is rare that staffing levels are reduced when test automation is introduced; on the contrary, more staff are usually needed, since we now need people with test script development skills in addition to people with testing skills. You wouldn't want to let four testers go and then find that you need eight test automators to maintain their tests!
Automation supports testing activities; it does not usurp them. Tools cannot make intelligent decisions about which tests to run, when, and how often. This is a task for humans able to assess the current situation and make the best use of the available time and resources.
Furthermore, automated testing is not automatic testing. There is much work for people to do in building the automated tests, analyzing the results, and maintaining the testware.
Having tests automated does—or at least should—make life better for testers. The most tedious and boring tasks are the ones that are most amenable for automation, since the computer will happily do repetitive tasks more consistently and without complaining. Automation can make test execution more efficient, but it is the testers who make the tests themselves effective. We have yet to see a tool that can think up tests as well as a human being can!
The objective as stated is a management objective, not an appropriate objective for automation. A better management objective is "Ensure that everyone is performing tasks they are good at." This is not an automation objective either, nor is "Reducing the cost of testing." These could be valid objectives, but they are related to management, not automation.
Better automation objective: The total cost of the automation effort should be significantly less than the total testing effort saved by the automation. This could be partially measured by an increase in tests run or coverage achieved per hour of human effort.
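The cost-versus-savings comparison in this better objective can be sketched as a back-of-the-envelope calculation. The function name and all figures below are hypothetical, for illustration only:

```python
def automation_payback(build_hours: float,
                       maintain_hours_per_cycle: float,
                       manual_hours_per_cycle: float,
                       automated_run_hours_per_cycle: float,
                       cycles: int) -> float:
    """Net effort saved (in hours) over a number of test cycles.

    Positive means the automation has cost less than the manual
    testing effort it replaced; negative means it has not yet
    paid for itself.
    """
    cost = build_hours + maintain_hours_per_cycle * cycles
    saved = (manual_hours_per_cycle - automated_run_hours_per_cycle) * cycles
    return saved - cost

# Hypothetical figures: 120 h to build the suite, 4 h of upkeep per
# cycle, replacing 40 h of manual execution with 2 h of automated runs.
print(automation_payback(120, 4, 40, 2, cycles=2))   # → -52.0 (not yet repaid)
print(automation_payback(120, 4, 40, 2, cycles=10))  # → 220.0 (paid back)
```

The break-even point depends heavily on the maintenance term, which is why the article keeps returning to maintainable testware architecture.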
Objective 4: Reduce Elapsed Time for Testing
Good ideas behind this objective:
• Reduce deadline pressure—any way we can save time is good.
• Testing is a bottleneck, so faster testing will help overall.
• We want to be quicker to market.
This one seems very sensible at first and sometimes it is even quantified—"Reduce elapsed time by X%"—which sounds even more impressive. However, this objective can be dangerous because of confusion between "testing" and "test execution."
The first problem with this objective is that there are much easier ways to achieve it: run fewer tests, omit long tests, or cut regression testing. These are not good ideas, but they would achieve the objective as stated.
The second problem with this objective is its generality. Reducing the elapsed time for "testing" gives the impression we are talking about reducing the elapsed time for testing as a whole. However, test execution automation tools are focused on the execution of the tests (the clue is in the name!) not the whole of testing. The total elapsed time for testing may be reduced only if the test execution time is reduced sufficiently to make an impact on the whole. What typically happens, though, is that the tests are run more frequently or more tests are run. This can result in more bugs being found (a good thing), that take time to fix (a fact of life), and increase the need to run the tests again (an unavoidable consequence).
The third problem is that there are many factors other than execution that contribute to the overall elapsed time for testing: How long does it take to set up the automated run and clear up after it? How long does it take to recognize a test failure and find out what is actually wrong (test fault, software fault, environment problem)? When you are testing manually, you know the context—you know what you have done just before the bug occurs and what you were doing in the previous ten minutes. When a tool identifies a bug, it just tells you about the actual discrepancy at that time. Whoever analyzes the bug has to put together the context for the bug before he or she can really identify the bug.
In figures 1 and 2, the blocks represent the relative effort for the different activities involved in testing. In manual testing, there is time taken for editing tests, maintenance, set up of tests, executing the tests (the largest component of manual testing), analyzing failures, and clearing up after tests have completed. In figure 1, when those same tests are automated, we see the illusion that automating test execution will save us a lot of time, since the relative time for execution is dramatically reduced. However, figure 2 shows us the true picture—total elapsed time for testing may actually increase, even though the time for test execution has been reduced. When test automation is more mature, the total elapsed time for all of the testing activities may decrease below what it was initially for manual testing. Note that this is not to scale; the effects may be greater than we have illustrated.
We now can see that the total elapsed time for testing depends on too many things that are outside the control or influence of the test automator.
The main thing that causes increased testing time is the quality of the software—the number of bugs that are already there. The more bugs there are, the more often a test fails, the more bug reports need to be written up, and the more retesting and regression testing are needed. This has nothing to do with whether the tests are automated or manual, and the quality of the software is the responsibility of the developers, not the testers or the test automators.
Finally, how much time is spent maintaining the automated tests? Depending on the test infrastructure, architecture, or framework, this could add considerably to the elapsed time for testing. Maintenance of the automated tests for later versions of the software can consume a lot of effort that also will detract from the savings made in test execution. This is particularly problematic when the automation is poorly implemented, without thought for maintenance issues when designing the testware architecture. We may achieve our goal with the first release of software, but later versions may fail to repeat the success and may even become worse.
Here is how the automator and tester should work together: The tester may request automated support for things that are difficult or time consuming, for example, a comparison or ensuring that files are in the right place before a test runs. The automator would then provide utilities or ways to do them. But the automator, by observing what the tester is doing, may suggest other things that could be supported and "sell" additional tool support to the tester. The rationale is to make life easier for the tester and to make the testing faster, thus reducing elapsed time.
Figure 1
Figure 2
Better automation objective: Reduce the elapsed time for all tool-supported testing activities. This is an ongoing objective for automation, seeking to improve both manual and existing automated testing. It could be measured by elapsed time for specified testing activities, such as maintenance time or failure analysis time.
Objective 5: Run More Tests
Good ideas behind this objective:
• Testing more of the software gives better coverage.
• Testing is good, so more testing must be better.
More is not better! Good testing is not found in the number of tests run, but in the value of the tests that are run. In fact, the fewer tests for the same value, the better. It is definitely the quality of the tests that counts, not the quantity. Automating a lot of poor tests gives you maintenance overhead with little return. Automating the best tests (however many that is) gives you value for the time and money spent in automating them.
If we do want to run more tests, we need to be careful when choosing which additional tests to run. It may be easier to automate tests for one area of the software than for another. However, if it is more valuable to have automated tests for this second area than the first, then automating a few of the more difficult tests is better than automating many of the easier (and less useful) tests.
A raw count of the number of automated tests is a fairly useless way of gauging the contribution of automation to testing. For example, suppose testers decide there is a particular set of tests that they would like to automate. The real value of automation is not that the tests are automated but the number of times they are run. It is possible that the testers make the wrong choice and end up with a set of automated tests that they hardly ever use. This is not the fault of the automation, but of the testers' choice of which tests to automate.
It is important that automation is responsive, flexible, and able to automate different tests quickly as needed. Although we try to plan which tests to automate and when, we should always start automating the most important tests first. Once we are running the tests, the testers may discover new information that shows that different tests should be automated rather than the ones that had been planned. The automation regime needs to be able to cope with a change of direction without having to start again from the beginning.
During the journey to effective test automation, it may take far longer to automate a test than to run that test manually. Hence, trying to automate may lead, in the short term at least, to running fewer tests, and this may be OK.
Better automation objective: Automate the optimum number of the most useful and valuable tests, as identified by the testers. This could be measured as the number or percentage automated out of the valuable tests identified.
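This measure is a simple ratio over the tester-identified list. A sketch with invented function and test names, to illustrate why it differs from a raw automated-test count:

```python
def automation_coverage(valuable_tests: set, automated: set) -> float:
    """Percentage of the tester-identified valuable tests that are automated.

    Tests that are automated but not on the valuable list do not raise
    the score, which keeps the measure focused on value, not quantity.
    """
    if not valuable_tests:
        return 0.0
    return 100.0 * len(valuable_tests & automated) / len(valuable_tests)

valuable = {"t1", "t2", "t3", "t4"}   # chosen by the testers
automated = {"t1", "t3", "t9"}        # t9 is automated but low-value
print(automation_coverage(valuable, automated))  # → 50.0
```

Here three tests are automated, but only two of the four valuable ones, so the score is 50 percent rather than the misleading raw count of three.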
Objective 6: Automate X% of Testing
Good ideas behind this objective:
• We should measure the progress of our automation effort.
• We should measure the quality of our automation.
This objective is often seen as "Automate 100 percent of testing." In this form, it looks very decisive and macho! The aim of this objective is to ensure that a significant proportion of existing manual tests is automated, but this may not be the best idea.
A more important and fundamental point is to ask about the quality of the tests that you already have, rather than how many of them should be automated. The answer might be none—let's have better tests first! If they are poor tests that don't do anything for you, automating them still doesn't do anything for you (but faster!). As Dorothy Graham has often been quoted, "Automated chaos is just faster chaos."
If the objective is to automate 50 percent of the tests, will the right 50 percent be automated? The answer to this will depend on who is making the decisions and what criteria they apply. Ideally, the decision should be made through negotiation between the testers and the automators. This negotiation should weigh the cost of automating individual tests or sets of tests, and the potential costs of maintaining the tests, against the value of automating those tests. We've heard of one automated test taking two weeks to build when running the test manually took only thirty minutes—and it was only run once a month. It is difficult to see how the cost of automating this test will ever be repaid!
Figure 3
What percentage of tests could be automated? First, eliminate those tests that are actually impossible or totally impractical to automate. For example, a test that consists of assessing whether the screen colors work well together is not a good candidate for automation. Automating 2 percent of your most important and often-repeated tests may give more benefit than automating 50 percent of tests that don't provide much value.
Measuring the percentage of manual tests that have been automated also leaves out a potentially greater benefit of automation—there are tests that can be done automatically that are impossible or totally impractical to do manually. In figure 3 we see that the best automation includes tests that don't make sense as manual tests and does not include tests that make sense only as manual tests.
Automation provides tool support for testing; it should not simply automate tests. For example, a utility could be developed by the automators to make comparing results easier for the testers. This does not automate any tests but may be a great help to the testers, save them a lot of time, and make things much easier for them. This is good automation support.
Better automation objective: Automation should provide valuable support to testing. This could be measured by how often the testers used what was provided by the automators, including automated tests run and utilities and other support. It could also be measured by how useful the testers rated the various types of support provided by the automation team. Another objective could be: the number of additional verifications made that couldn't be checked manually. This could be related to the number of tests, in the form of a ratio that should be increasing.
What are your objectives for test execution automation? Are they good ones? If not, this may seriously impact the success of your automation efforts. Don't confuse objectives for testing with objectives for automation. Choose more appropriate objectives and measure the extent to which you are achieving them, and you will be able to show how your automation efforts benefit your organization. {end}
StickyNotes
For more on the following topics go to www.StickyMinds.com/bettersoftware.
• Dorothy Graham's blog on DDP and test automation
• Software Test Automation