DevFest 14th Dec 2019 Bishkek
- Alan Richardson
https://www.eviltester.com/conference/devfestbishkek2019_conference
- EvilTester.com
- @EvilTester
- CompendiumDev.co.uk
---
Have you ever wondered how other people test applications? Not in theory, but in practice? What thought processes are used? How did they model the application? What tools were used? How did they track the testing? That's what this talk is all about. This talk will be based on a short Case Study of testing an open source web application. Why open source? Because then there is no commercial confidentiality about the process, tools or thought processes.
---
Alan will explain his thought processes, coverage, approaches, tools used, risks identified and results found. And generalise from this into reusable models and principles that can be applied to your testing. This covers the What?, and the Why? of practical exploratory web testing.
Much of the automating we do to support testing involves detecting change. Once our tests pass, they fail when the system changes and the automated execution alerts us to the change. There are other ways that automating can help us.
Secrets and Mysteries of Automated Execution Keynote slides - Alan Richardson
Test Automation, Programming Automation, Automated Execution. This presentation contains some high-level models, abstractions and approaches for effective, non-flaky and maintainable automation.
https://www.eviltester.com
In this talk I'm going to focus on the technical aspects of 'test automation', using examples of approaches from a variety of Agile projects where we automated APIs, and GUIs. You'll learn about the use of abstractions and how to think about modeling the system in code to support automating it. Also how to use these abstractions to support stress testing, exploratory testing, ongoing CI assertions and the testing process in general. I'll also discuss the different styles of coding used to support automating tactically vs automating strategically.
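The "abstractions to model the system in code" idea above can be sketched in a few lines. This is an illustrative example, not code from the talk: the application, class and method names (`FakeTodoApp`, `TodoListPage`, `add_item`) are invented to show how a thin abstraction layer separates test intent from implementation detail.

```python
# A minimal sketch of a layered abstraction for automating, assuming a
# hypothetical "todo list" application. The test states intent; the
# TodoListPage abstraction is the only place that knows "how".

class FakeTodoApp:
    """Stands in for the real system under test."""
    def __init__(self):
        self.items = []

class TodoListPage:
    """Abstraction layer over the system: hides implementation detail."""
    def __init__(self, app):
        self.app = app

    def add_item(self, text):
        self.app.items.append(text)
        return self  # allow fluent chaining in tests

    def visible_items(self):
        return list(self.app.items)

def test_added_item_is_visible():
    todos = TodoListPage(FakeTodoApp())
    todos.add_item("write abstract")
    assert "write abstract" in todos.visible_items()

test_added_item_is_visible()
```

Because the tests only talk to `TodoListPage`, the same abstraction can be reused to drive stress testing or exploratory tooling, and a change in the system's interface is absorbed in one place.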
Test Bash Netherlands - "How to misuse 'Automation' for testing..." - Alan Richardson
We often hear about how ‘test automation’ can go wrong, which is all fine and dandy for the pessimists in the audience, and balancing feel-good positive case studies exist for the optimists. But what about the anarchists? What about the rule breakers? What about the pragmatists? In this talk Alan will explain how to ‘misuse’ the ‘automation’ tools you’ve heard so much about, because you need to get things done. You’ve no doubt heard that ‘Cucumber is not a test tool’, and you’ve no doubt noticed that people use Cucumber during their testing. It’s misuse cases like this that we will celebrate, and as a bonus, you’ll learn what Cucumber ‘really’ is. We’ll look at other tools to find out their true nature and how you can turn it to your personal advantage. To further groom you for success, we’ll explain the mental models which give you guilt-free flexibility in your approach. If you’ve ever wanted an ‘expert’ to quote to give you permission to use the tools how you want, this is the talk for you.
I'm going to be talking about finding the 'essence' of the tool, rather than what everyone 'says' about it, and that can lead to a radical overhaul in your beliefs and usage of the specific tool.
I blogged about my slide creation process for the conference, and there are some sneak peeks of some possible content in there as well.
http://blog.eviltester.com/2016/10/a-case-study-in-creating-conference.html
Automating Tactically vs Strategically - SauceCon 2020 - Alan Richardson
One of the biggest concepts that has made a difference to my programming and automating in recent years is the concept of “Tactical vs. Strategic.” Automating tactically might be for a specific purpose, possibly small, possibly a bit rough around the edges, not necessarily completely robust for everyone, etc. And Strategic automation is more critical to long-term aims, maintained and maintainable, etc.
In this talk, Alan Richardson will provide examples of automating both Strategically and Tactically for activities as diverse as supporting testing, marketing and general life. We will also consider how and when to move from automating tactically to strategically, and how the concept has helped me change my programming style and how to write better code.
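The tactical/strategic distinction can be shown in miniature. This is a hypothetical sketch, not an example from the talk: the log format and the `failing_tests` helper are invented. The tactical version solves today's problem inline; the strategic version extracts a named, documented, reusable function.

```python
# Tactical: a one-off expression to pull failing test names out of a log dump.
log = "PASS login\nFAIL checkout\nPASS search\nFAIL refund"
failures = [line.split()[1]
            for line in log.splitlines()
            if line.startswith("FAIL")]

# Strategic: the same idea, named and parameterised for long-term maintenance.
def failing_tests(log_text, fail_marker="FAIL"):
    """Return the names of tests marked as failing in a PASS/FAIL log."""
    return [line.split(maxsplit=1)[1]
            for line in log_text.splitlines()
            if line.startswith(fail_marker)]

assert failures == failing_tests(log) == ["checkout", "refund"]
```

The tactical form is fine if it runs once; the moment it is run repeatedly, shared, or depended on, refactoring it into the strategic form pays for itself.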
Automating Strategically or Tactically when Testing - Alan Richardson
"Test Automation" can be viewed as strategic or tactical.
This presentation describes reasons for making this distinction and how you know if you are working strategically or tactically when you automate as part of your test approach.
Slides for Automation Guild 2016 Conference
If you want to automate, you learn to code, and you learn to code well.
“Automate” doesn’t mean “Automate Testing” it means “Automate part of your test process”.
You need to learn to code to do that with the most options open to you.
We’ll look at some ‘we do this a lot’ and ‘we want to automate’ activities which we can use tools for. But we’ll also see that we are limited by the tools.
When we code, we can do a lot with minimum code, and gain a lot more flexibility.
Then we’ll cover how to think about learning to code.
- solve a problem quickly (automate tactically)
- solve a problem for the long term (automate strategically)
To work strategically we need to learn:
- to code well
- to understand refactoring
- libraries vs frameworks
- abstractions
- etc.
This talk isn’t just for beginners; we’ll cover material that should make it useful for the experts in the audience.
We’ll cover a lot in 45 mins, with code examples and tool examples, and I’ll make it all pretty practical.
For more details visit:
https://www.compendiumdev.co.uk/page/tag2017
How to Improve Your Technical Test Ability - AADays 2015 Keynote - Alan Richardson
We often work on improving the testability of an application to better support our testing. And what if, in addition to this, we actively improved our "Test Ability"? Because then we can take advantage of the new and existing application features during our testing. Alan will describe the steps he has taken to improve his Test Ability. The main examples will be drawn from his experience of testing web and HTTP based applications. Alan will explain how you can use the inbuilt browser features to help you, and describe add-ons you can use. Also, how you can chain external tools like sniffers and proxies, and why you would want to. Because, and this is more important than the individual tool examples, Alan will describe how he models an application to identify gaps in his knowledge and tooling, and then improves his Technical Test Ability by filling those gaps.
Add More Security To Your Testing and Automating - SauceCon 2021 - Alan Richardson
Presented at SauceCon 2021, April.
More details: https://www.eviltester.com/conference/saucecon2021_conference/
Security Testing is a highly technical set of skills, covering a wide domain of knowledge that can take a long time to learn and gain proficiency. We already have enough to learn with Software Testing and even more when we add in Automating. So are there any simple ways to increase the scope of what we already do, that provide more insight into the security of our application? Answer: Yes. And in this talk we will cover practical steps, dos and don’ts to add some Security focus fast, without spending years learning how to Hack applications.
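One practical step in this spirit is checking for well-known security response headers. The header names below are standard; the `missing_security_headers` helper itself is an illustrative sketch, not code from the talk, and a real check would run against live HTTP responses rather than the stubbed dictionary used here.

```python
# Headers commonly recommended for web responses (a small subset).
EXPECTED_SECURITY_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def missing_security_headers(response_headers):
    """Return the expected security headers absent from a response.

    Comparison is case-insensitive, since HTTP header names are.
    """
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_SECURITY_HEADERS if h.lower() not in present]

# Example with a stubbed response-header dict:
headers = {"content-type": "text/html", "x-content-type-options": "nosniff"}
print(missing_security_headers(headers))
# → ['Content-Security-Policy', 'Strict-Transport-Security']
```

A check like this can sit alongside existing automated execution: the test scope you already have gains a security-flavoured assertion for almost no extra effort.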
# Automating Pragmatically
Testival Meetup 20190604
## Alan Richardson
- EvilTester.com
- @EvilTester
- compendiumdev.co.uk
- digitalonlinetactics.com
---
~~~~~~~~
Title: Automating Pragmatically
The online discussions of automating can leave me confused.
- Should you automate through the GUI?
- Should GUI automating be banned?
- Do all testers need to code? Is automating part of testing or not?
- Do we need to automate to get a job?
In this short session Alan will discuss automating from a pragmatic and contextual position and share how he thinks about automating.
~~~~~~~~
Your Automated Execution Does Not Have to be Flaky - Alan Richardson
This webinar is for anybody who has accepted 'flaky' test automation. Alan believes that to describe and accept your test execution as flaky is merely an excuse. In this webinar he will explore the myths of flakiness, so that you never use those excuses again!
Categories of common problems with suggested solutions.
For more information visit http://eviltester.com/flaky
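One common flakiness category is synchronisation: a fixed `sleep` guesses how long the system needs, while a condition-based wait checks. The helper below is an illustrative sketch of that idea (not code from the webinar); its name and parameters are invented.

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll condition() until it returns a truthy value or the timeout elapses.

    Returns the truthy result; raises TimeoutError if time runs out.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: instead of time.sleep(3) followed by an assertion, wait for the
# actual state. Here the "system" is a stub that is ready immediately.
state = {"ready": True}  # in real use, a query against the system under test
assert wait_until(lambda: state["ready"]) is True
```

Replacing hard-coded sleeps with condition-based waits removes one whole category of "flaky" failures without touching the tests' intent.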
Risk Mitigation Using Exploratory and Technical Testing - QASymphony Webinar - Alan Richardson
A Webinar on Risk Analysis and Management, Exploratory Testing, and Technical Testing.
I want to get across the model that I have for risks, which is that risks are “beliefs” and a result of our beliefs. We believe some things will go wrong more than others. And because our beliefs are limited but the range of risks is not, we need to somehow go beyond our beliefs and look at tools and processes for doing that.
Also we know that risk is important for testing. What I want to do in this talk is present risk as the underpinning and driving force behind everything we do in testing.
You can use risk to justify the stuff that you do as a tester. And you can use risk to derive your test scope as well as your test process.
My aim here is to tell you that I learned to work with Agility rather than work with the Agile Rituals and Definitions. And I learned to trust that working with Agility trumps Rituals and Definitions the hard way. Because sticking to rituals and definitions led to rigidity, rather than agility.
And then "What does testing look like when you adopt that mindset?"
In this presentation you will shortcut your learning on the topic of Agility, so you understand "What does testing look like when you adopt an Agility mindset?". Applying this mindset naturally leads to incorporating exploratory testing, technical testing, automated execution, end-to-end testing and risk. Adopting this mindset allows you to fit into any Agile Software Development project and create a customized testing approach that works.
Keynote at the internal Rabobank Testing Conference on Feb 15th 2018 in Utrecht.
https://www.compendiumdev.co.uk/page/rabobank201802
The slides for the Oredev 2014 talk "Confessions of an Accidental Security Tester" - describing the various approaches and bad habits that I use, which allow me to stumble on to security problems.
Technical and Testing Challenges: Using the "Protect The Square" Game - Alan Richardson
How good are your in-browser Technical Testing and JavaScript skills? Put them to the test with the "Protect The Square" game.
https://www.compendiumdev.co.uk/games/buggygames/protect_the_square/protect_the_square.html
What does Technical Testing mean? For Alan, it means going beyond requirements and using Technical Information about the implementation and an understanding of the technologies used in the building of the system to add to the risk profile and use to help derive test approaches. Using Web Testing as an example we explain how approaching testing from a technical perspective changes how you view the system and how you test. Also explained, how a technical understanding leads to a different use of tooling and automation. This webinar was presented on 1st April 2015 to Tabara De Testare.
Slides for Agile Testers Conference 2018
Technology Based Testing by Alan Richardson
What do you learn if you want to test 'beyond the acceptance criteria'? Technical risk based testing can help. In this case I'm going to use the phrase Technical Testing to cover: "identify technology based risks to drive testing". This thought process can help us make informed decisions about the scope of exploratory testing we will carry out. It also helps focus your studies on the technical knowledge appropriate for the project you are testing.
## Blurb
This requires:
- understanding of the technology
- risk identification
- tools applicable to the technology
This presentation will use a simple example to demonstrate that:
- Even simple technology can pose risk
- Combining simple technology can increase risk
- Understanding technology allows us to evaluate risk
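As one concrete illustration of "even simple technology can pose risk" (this example is mine, not from the slides): URL query strings look trivial, but naive string concatenation of user input silently corrupts the query, while proper encoding preserves it.

```python
from urllib.parse import urlencode, parse_qs, urlsplit

search_term = "cats&dogs=banned"  # user input containing a reserved character

# Naive combination: the '&' in the input is read as a parameter separator,
# so the data is split and partly lost.
naive_url = "https://example.com/search?q=" + search_term
naive_query = parse_qs(urlsplit(naive_url).query)
assert naive_query["q"] == ["cats"]
assert naive_query["dogs"] == ["banned"]

# Proper encoding: the reserved character is escaped and survives intact.
safe_url = "https://example.com/search?" + urlencode({"q": search_term})
safe_query = parse_qs(urlsplit(safe_url).query)
assert safe_query["q"] == ["cats&dogs=banned"]
```

Two "simple" technologies (string concatenation and query strings) combine into a data-corruption risk, and knowing the technology is exactly what lets a tester spot and probe it.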
* http://www.eviltester.com
* http://www.compendiumdev.co.uk
* https://twitter.com/eviltester
Black Ops Testing Workshop from Agile Testing Days 2014 - Alan Richardson
At Agile Testing Days 2014, Steve Green, Tony Bruce and Alan Richardson hosted a double-track Black Ops Testing workshop, with Redmine as the target application.
Find out more about the Black Ops Testing Team: http://blackopstesting.com/page/about.html
Joy of Coding Conference 2019 slides - Alan Richardson
Adventures in Testing, Programming, Teaching, Automating and Marketing
When you already know how to code, it's easy to forget how hard some of that learning was... until you have to teach people. And if all you've ever built are applications, you don't really know the nuances of writing code to automate them. And if you've written the code but never had to market the applications then you've not really experienced the full joy of coding.
In this presentation Alan will revisit many of his past projects to identify lessons learned. Lessons from: writing commercial and open source tools, multi-user adventure games, REST APIs, test automation, automating applications to make them do things they are not supposed to do, and coding for technical marketing.
Some lessons we will learn:
* The 'install' is the hardest part
* Writing frameworks is too much fun and should be banned
* Applications are just "code calling other libraries"
* Writing a Text Adventure is the most fun and educational thing you'll ever code
* The Dangers of knowing how to code
We will also learn the dangers of knowing how to code and discover how our coding skills can give us an edge, in business and online life in general, if we choose to harness our skills to improve our daily experiences.
Software Testing Terms Defined. Answering the FAQ "What is Regression Testing?"
- What is Regression Testing?
- How to do Regression Testing?
- Why do we do Regression Testing?
- How to re-think Regression Testing in terms of Risk?
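One way to make "re-thinking Regression Testing in terms of Risk" concrete (a hypothetical sketch of mine, not material from the slides): score candidate regression tests by the risk of the area they cover, and run the highest-risk subset first. The areas, weights and test names below are invented.

```python
def prioritise(tests, risk_by_area):
    """Order regression tests by the risk score of the area each covers."""
    return sorted(tests,
                  key=lambda t: risk_by_area.get(t["area"], 0),
                  reverse=True)

# Invented risk scores, e.g. from recent change, usage, and failure impact.
risk_by_area = {"payments": 9, "search": 4, "profile": 2}

tests = [
    {"name": "test_profile_edit", "area": "profile"},
    {"name": "test_card_payment", "area": "payments"},
    {"name": "test_keyword_search", "area": "search"},
]

ordered = prioritise(tests, risk_by_area)
assert [t["name"] for t in ordered] == [
    "test_card_payment", "test_keyword_search", "test_profile_edit"]
```

The point is not the scoring formula but the shift in question: from "which tests did we run last time?" to "which risks do we most need information about now?".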
Slides from the Selenium Clinic Tutorial from Eurostar 2012 hosted by Simon Stewart and Alan Richardson. The tutorial was awarded "Best Tutorial" at the conference.
The reference slides were excerpted from Alan Richardson's online WebDriver course hosted at Udemy.
http://www.udemy.com/selenium-2-webdriver-basics-with-java/
Test Automation Day 2015 Keynote - Practical Lessons Learned ... - Alan Richardson
Practical Lessons Learned Automating in Testing.
Let us forget theory for a moment, and concentrate on the practice of automation. Alan will describe lessons learned from both success and failure;
as a tester, an automator, and a manager. But more importantly, you will discover how to apply these lessons and improve your automation.
Learn how to stay focused, how to experiment and still add value, how to manage even if you cannot code, and more...
Re-thinking Test Automation and Test Process Modelling (in pictures) - Alan Richardson
- Why do we talk about Test Automation the way we do?
- Why do we talk about 100% Test Automation?
- How do we model automation as part of our Test Process?
- How does Testing provide information?
- Why was a Waterfall Test Process Different from an Agile Process?
- Why, in reality, both processes are fundamentally the same.
- How we modelled "Test Automation" incorrectly, and an alternative way to model it.
Read the associated blog post at http://blog.eviltester.com/2017/09/rethinking-test-process-automation-modelling.html
New Model Testing: A New Test Process and ToolTEST Huddle
In this webinar, Paul described his experiences of building and using a bot for paired testing and also propose a new test process suitable for both high integrity and agile environments. His bot – codenamed System Surveyor – builds a model of the system as you explore and captures test ideas, risks and questions and generates structured test documentation as a by-product.
Brief introduction to Session-Based Test Management and to how Exploratory Testing is understood and approached under the influence of the Context-Driven Testing movement.
Michael Bolton - Two Futures of Software TestingTEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Two Futures of Software Testing by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
UserTesting 2016 webinar: Research to inform product design in Agile environm...Steve Fadden
Designing in agile environments demands many decisions be made in short periods of time. Informing these decisions with formative research enhances our understanding what we’re building, from the viability of concepts, to the effectiveness of designs, to the ultimate success of our solutions.
Eric Proegler Early Performance Testing from CAST2014Eric Proegler
Development and deployment contexts have changed considerably over the last decade. The discipline of performance testing has had difficulty keeping up with modern testing principles and software development and deployment processes.
Most people still see performance testing as a single experiment, run against a completely assembled, code-frozen, production-resourced system, with the "accuracy" of simulation and environment considered critical to the value of the data the test provides.
But what can we do to provide actionable and timely information about performance and reliability when the software is not complete, when the system is not yet assembled, or when the software will be deployed in more than one environment?
Eric deconstructs “realism” in performance simulation, talks about performance testing more cheaply to test more often, and suggests strategies and techniques to get there. He will share findings from WOPR22, where performance testers from around the world came together in May 2014 to discuss this theme in a peer workshop.
The recording in https://eviltester.com/talks has:
- longer practice session recording
- live recording - local recording better quality
- 8 bonus recordings with an extra hour of material
- will automation take over
- impact of buzzwords
- how to cope with trends
- contextual problem solving
- information about the references
- exercises
- behind the scenes look at how the talk was prepared and tools used
- transcripts
- subtitles
Programming katas for Software Testers - CounterStrings - Alan Richardson
What would be suitable Code Katas for people wanting to learn how to code to support their testing?
CounterStrings
- `*3*5*7*9*12*15*`
A CounterString is a string like `*3*5*7*9*12*15*`, where each `*` marks its own position in the string via the number immediately preceding it. This is a 15 character CounterString.
These are useful because if you paste one into a field and it is truncated, it is easy to see what it was truncated to. It is, as James Bach describes it, self-documenting test data.
https://www.eviltester.com/blog/eviltester/2019-02-27-programming-katas-for-testers/
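The generation algorithm builds the string backwards from the target length, so each `*` records its own position; when the remaining space cannot hold a full "position*" unit, the leading digits are dropped. A minimal sketch in Java (class and method names are my own):

```java
public class CounterString {

    // Build a counterstring of the given length, working backwards
    // from the target length so each '*' marks its own position.
    public static String make(int length) {
        StringBuilder sb = new StringBuilder();
        int remaining = length;
        while (remaining > 0) {
            String unit = remaining + "*";
            if (unit.length() > remaining) {
                // not enough room for the full "position*" unit: keep the tail
                unit = unit.substring(unit.length() - remaining);
            }
            sb.insert(0, unit);
            remaining -= unit.length();
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(make(15)); // *3*5*7*9*12*15*
    }
}
```

Because each loop iteration subtracts exactly the characters it prepends, the result is always precisely the requested length.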
What is Shift Left Testing? Do you need to use that term to improve your Software Testing and Development process? I don't think so.
- why I don't use the term Shift Left
- Explanation of what Shift Left means when people use it
- Explanation of what Shift Left might mean when people hear it
- How to Shift Left incorrectly
- How to improve your test process without using the phrase Shift Left.
Hire me for consultancy and buy my online books and training at:
- https://compendiumdev.co.uk
- http://eviltester.com
- http://seleniumsimplified.com
- http://javafortesters.com
Have you ever wished that you had a worked example of how to test a REST API?
Not just automate the API, but how to interact with it with command line tools, and GUI tools to support your manual interactive testing. And then take your testing forward into automating the API?
That's what this book provides.
Read the 74 page sample and find out more information on the book page.
https://www.compendiumdev.co.uk/page/tracksrestapibook
The full book has over 200 pages of actual hands on case study information that can improve your testing and automating of REST API based applications.
TDD - Test Driven Development - Java JUnit FizzBuzz - Alan Richardson
A short example Test Driven Development session where I code FizzBuzz.
FizzBuzz is often used as a programming interview question and as a Kata for practicing your coding.
The GitHub code repository with the Java code for this exercise is available at:
https://github.com/eviltester/fizzbuzz
Read the blog post for the video:
http://blog.eviltester.com/2018/03/tdd-test-driven-development-java-junit.html
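For reference, the kata's target behaviour can be sketched as follows; this is a minimal version, not necessarily the exact code in the repository:

```java
public class FizzBuzz {

    // Classic kata: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    // of both -> "FizzBuzz", otherwise the number itself.
    public static String convert(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return String.valueOf(n);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 15; i++) {
            System.out.println(convert(i));
        }
    }
}
```

In a TDD session you would arrive at this incrementally: one failing JUnit test per rule, then just enough code to pass it.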
What is Testability vs Automatability? How to improve your Software Testing - Alan Richardson
Testability is different from Automatability.
- Testability - does the application have features that make it easier for a human to test?
- Automatizability (Automatability) - does the application have features that make it easier for another application to control and interrogate?
You will learn:
- What is Testability?
- What is automatability?
- What is automatizability?
- Adding testability features can introduce risk.
- Features that aid automated execution can overlap with features that aid testing, but they are not the same.
A Common Sense Guide to Agile Development and Testing that might just change your Agile approach forever.
Answering the 9 most common questions asked about Agile Testing:
- What is Agile Testing?
- Do we still need testers in Agile?
- What is an Agile Tester?
- What does a Software Tester Actually Do?
- Should we automate our testing?
- What tools should we use for our Agile Testing?
- How Much Should we Automate?
- How can we automate and still finish the sprint?
- How can we finish all our testing in the sprint?
A high quality download of the 9 points as a free "Print out and Keep" Poster is available at http://eviltester.com/agile
The Evil Tester Show - Episode 001 Halloween 2017 - Alan Richardson
## Halloween Special 2017
## Alan Richardson
- Houdini
- Charles Fort
- Ghost Hunting
- Unconventional Influences
http://eviltester.com/show/001-halloween-2017/
---
# _TLDR; The world needs a new Testing Podcast, so I created one_
---
# We are in the Uncertainty Business.
We find and investigate anomalous Phenomena
## Anomalous - "deviating from what is standard, normal, or expected."
We are part of a long tradition of Anomalous Phenomena seekers.
---
# The Podcast
- [Audio]
https://eviltester.podbean.com/e/the-evil-tester-show-episode-001-halloween-special-2017/
- [Video]
https://youtu.be/TLMtOM0FXRA
- [Show Notes]
http://eviltester.com/show/001-halloween-2017/
Simple ways to add and work with a `.jar` file in your local maven setup - Alan Richardson
TL;DR Hack - add as a library in IntelliJ project. Tactic - add as system scope in maven. Tactic/Strategic - install locally to .m2. Strategic - use a repository management tool, publish to maven central
Sometimes you want to work with a jar file that isn't hosted in maven central.
It might be a 3rd party jar, it might be one that you have written.
Regardless.
You have a lot of options for this. The approaches that I have used:
- add .jar files as an IntelliJ project dependency
- install it locally to your .m2 repository
- add it to your project as a system scoped file
- use a repository management tool like Nexus or Archiva
- publish the dependency to maven central
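As a sketch of the two middle tactics: the local-install route uses maven's `install:install-file` goal, and the system-scope route is a `pom.xml` entry like the one below. The coordinates and paths here are placeholders, not from any real project:

```xml
<!-- Tactic: declare the local jar as a system-scoped dependency.
     Alternative tactic: install it to your local .m2 repository with
     mvn install:install-file -Dfile=lib/mylib.jar -DgroupId=com.example
         -DartifactId=mylib -Dversion=1.0 -Dpackaging=jar
-->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>mylib</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/mylib.jar</systemPath>
</dependency>
```

System scope is fine tactically, but the jar does not travel with your built artifact, which is why a repository manager or maven central is the strategic option.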
Learning in Public - A How to Speak in Public Workshop - Alan Richardson
Glossophobia, the fear of public speaking, usually ranks pretty high on surveys of 'what people fear'. And for good reason. We've all attended conferences where the keynote speakers were seriously injured after being hit by a torrent of rolled up feedback forms, or speakers were left bleeding from a rain of plastic name badges thrown Shuriken-like by the Ninja trained attendees.
You can learn to avoid these outcomes, and when you do, you gain a skill that will win you recognition, improve your job prospects and allow you to travel the world talking to fellow testers.
In this workshop Alan will provide hints and tips for improving your public speaking. Sharing, from experience, what works for him, and discuss some conventional wisdom on public speaking. Alan will also share a few secrets, and unconventional exercises that he uses to prepare.
Public speaking is a skill we have to learn in public, but it is a skill, it is learnable, and you can learn it.
Read more in the supporting blog post:
http://blog.eviltester.com/2017/09/overcome-imposter-syndrome-public-speaking.html
How to Practise to Remove Fear of Public Speaking - Alan Richardson
Tips on how to overcome fear of public speaking:
- the 'fear' is a learned response, it is not innate
- recognise that it is not fear, it is excitement
- channel the excitement into energy to boost your talk
- practice with different styles of presentation
- record yourself practicing
- practice out loud, as well as in your head.
Speaking in public is a skill that you can develop if you care enough about the message you want to deliver. It simply takes practice, and you can do that.
FAQ - why does my code throw a null pointer exception - common reason #1 Rede... - Alan Richardson
A common reason for Null Pointer Exceptions in Java is a variable redeclaration instead of instantiation. Learn what that means, how to avoid it, and how to spot it, in this presentation.
Read the full blog post: http://testerhq.com/post/blogs/javafortesters/2017-08-29-faq-null-pointer-exception/
Visit my Java Web Site: http://javafortesters.com
---
# FAQ - why does my code throw a null pointer exception - common reason #1 Redeclaration
- Using `@BeforeClass` or `@Before` can set up data for use in tests
- Any 'variables' we instantiate for shared use need to be 'fields' rather than local variables
- We want to instantiate them in the setup method rather than redeclare them
---
# Example of the Problem
I know I will use an `Adder` in my test so I create it as a field:
~~~~~~~~
public class WhyCodeThrowsNullPointerExceptionTest {
    Adder adder;
~~~~~~~~
I don't want to re-instantiate it each time so I make an `@BeforeClass` method to instantiate it:
~~~~~~~~
@BeforeClass
public static void setupAdder(){
    Adder adder = new Adder();
}
~~~~~~~~
**Warning: Error in the above code**
---
# Semantic Error
I just made a Semantic coding error. This won't be caught by a compiler, but it will cause my `@Test` to fail with a Null Pointer Exception.
In the setup method I really wanted to assign a value to the field; instead I created a new variable with the same name.
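The mistake and its fix can be shown without JUnit; the class and method names below are illustrative, not from the original post:

```java
public class RedeclarationDemo {

    static class Adder {
        int add(int a, int b) { return a + b; }
    }

    // static, because a JUnit @BeforeClass setup method must be static
    static Adder adder;

    static void brokenSetup() {
        // BUG: this declares a NEW local variable; the field above stays null
        Adder adder = new Adder();
    }

    static void fixedSetup() {
        adder = new Adder(); // assigns the field - no redeclaration
    }

    public static void main(String[] args) {
        brokenSetup();
        System.out.println(adder == null);   // true - the NPE waiting to happen
        fixedSetup();
        System.out.println(adder.add(2, 2)); // 4
    }
}
```

The compiler is happy either way, which is why this is a semantic error: only the Null Pointer Exception at test time reveals it.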
# In General
- Try to write one test at a time so that if you have a problem it is easier to identify where the problem is
- Try to write working isolated tests and then refactor to a more general solution when you need it - that way, you know it was working, so you just have to work backwards to find out what went wrong
- Try to use automated IDE refactoring rather than move code around manually
- Use the IDE syntax highlighting to help spot any issues
Devfest 2019-slides
1. A Guide to Testing Web Applications
DevFest 14th Dec 2019 Bishkek
— Alan Richardson
— eviltester.com/conference/
devfestbishkek2019_conference
— EvilTester.com
— @EvilTester
— CompendiumDev.co.uk
@EvilTester 1
2. Have you ever wondered how other people test
applications? Not in theory, but in practice? What
thought processes are used? How did they model
the application? What tools were used? How did
they track the testing? That's what this talk is all
about. This talk will be based on a short Case Study
of testing an open source web application. Why
open source? Because then there is no commercial
confidentiality about the process, tools or thought
processes.
@EvilTester 2
3. Alan will explain his thought processes, coverage,
approaches, tools used, risks identified and results
found. And generalise from this into reusable
models and principles that can be applied to your
testing. This covers the What?, and the Why? of
practical exploratory web testing.
@EvilTester 3
4. A 'micro' case study
— Short Case Study
— about a 'day' of Testing
— Lessons and observations are extrapolated to a
'macro' level
— not all Testing Approaches
— lessons mentioned have context
@EvilTester 4
8. Lesson
— Normally we don't get to choose
— Allocate staff based on skills and experience,
— If we don't understand the technology
— we need to manage that as a risk
@EvilTester 8
11. Knowledge Constraints
— My Tech knowledge of Docker
— Lower Docker knowledge than VM
— Know how to configure and access DB on VM
— Chosen VM https://www.turnkeylinux.org/tracks
— Issue... old version of tracks
— not a concern for this talk
@EvilTester 11
12. Lesson Environment Impacts Testing
— Do staff have permissions?
— to observe, interrogate, manipulate?
— Does environment match deployment?
— Does release process match live?
— Version management
— same as live?
— can do upgrades?
@EvilTester 12
14. Environment Ready
— Installed VM
— Login as Admin
— Is tracks ready?
— Don't know
— No health check page
— no status page
@EvilTester 14
15. Lesson
— Automated Execution helps derisk deployment
— Otherwise first 'test' sessions are 'health check'
— Risk of wasting time and finding out too late
@EvilTester 15
16. Health Check
— CRUD
— Create, Read, Update, Delete
— Main Areas
— Keep it simple, create a todo
@EvilTester 16
19. Lessons
— All testing is exploratory, and relies on observation
— Exploration boosts (and is constrained by)
technical knowledge of the app
— Time Box Testing
— Need screenshot tools
— gather evidence
— Notes Prior (Plan) (AIMs)
— More Notes During (Actual)
@EvilTester 19
20. How do I know what to test?
— Requirements
— Stories
— Defects
— Change Requests
— Commits
@EvilTester 20
21. Planning Session
— Making the decision about what to test
— Planning: exploratory process of weighing up
factors, asking questions and making decisions
— Tracked and planned like any other activity
— do not go too deep
— working with unknowns
— No plan - lose an opportunity
— to effectively target our testing.
@EvilTester 21
22. # Planning session
20191126 - 13:00
- release notes
- commits?
- defects?
- general functionality?
- assume team tested, add value by going 'holistic'
- pick release note item and go beyond acceptance criteria
"You can now change the state of a context to closed"
Questions:
- What is a Context?
- What is Context State?
- What states are there?
- Is it a state machine?
- Are all states equally valid?
- Are there transition rules?
20191126 - 13:10
@EvilTester 22
23. My Chosen Scope
From https://www.getontracks.org/news/
comments/release-2.3.0/
"You can now change the state of a context to
closed"
@EvilTester 23
24. I am going to
— Test Holistically
— System rather than Story
— Assume other testing has been performed
@EvilTester 24
25. When I start testing, I don't know what that means
yet.
I have to explore to find out.
@EvilTester 25
28. Recon Session Findings
— Actions added to context
— context seems to move between states in any
order (TODO: cover state table)
— context constraints
— Not close if open actions
— not add open actions to closed
— But, can amend actions
— to be on closed contexts
@EvilTester 28
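The recon constraints can be captured as a small executable model. This is a hypothetical sketch to illustrate the modelling, with state names assumed; it is not code from Tracks itself:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the constraints observed during recon:
// a context cannot be closed while it has open actions, and open
// actions cannot be added to a closed context.
public class ContextModel {

    enum State { ACTIVE, CLOSED }

    static class Context {
        State state = State.ACTIVE;
        final List<String> openActions = new ArrayList<>();

        boolean addOpenAction(String action) {
            if (state == State.CLOSED) return false; // constraint seen in recon
            openActions.add(action);
            return true;
        }

        boolean close() {
            if (!openActions.isEmpty()) return false; // cannot close with open actions
            state = State.CLOSED;
            return true;
        }
    }

    public static void main(String[] args) {
        Context c = new Context();
        c.addOpenAction("buy milk");
        System.out.println(c.close());                   // false: open action present
        c.openActions.clear();
        System.out.println(c.close());                   // true
        System.out.println(c.addOpenAction("too late")); // false: context closed
    }
}
```

Notice the model has no rule covering "amend an existing action onto a closed context" - exactly the gap the testing later exposed.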
29. Recon Session Findings
— all contexts are shown in auto complete drop
down regardless of state (order controlled by
Organize view)
— seems to be XHR updates
— but need to view traffic to be sure
@EvilTester 29
30. Lessons Learned
— Functional Testing is Functional Testing
— Architecture & technology -> observation ability
— My modelling and assessment of findings is
limited by my ability to observe
@EvilTester 30
31. Tooling Lessons Learned
— Selecting Tooling is an evolving process based on
the needs of the testing and the application.
— I don't 'start' with tools, I iterate towards tool
usage as required.
@EvilTester 31
32. Debrief Session
— Look at what we've done
— Working exploratory means I have to 'step
back' periodically
— Collate Notes
— Identify Issues, Todos, Defects
— Feed into Planning Session
— Refine/Formalise Models
— Modelling Session if 'large'
@EvilTester 32
34. Lessons Learned
— I rely on linear text as much as possible
— text files, markdown, screenshots in /images
— formatting, aesthetics, structure - later
— essential when working on own
— can act as a 'status' report
@EvilTester 34
35. Modelling Session
— "prep a more comprehensive coverage approach
because I didn't 'test' this I did an initial
exploration during recon but the coverage wasn't
documented or thought through well enough."
@EvilTester 35
37. Lessons Learned
— Modelling is a key skill
— If models are ambiguous, we don't understand
— Simple concepts expand into complex models
— Complex models -> high number of combinations
@EvilTester 37
38. Lessons Learned
— Informal modelling can support exploration
— Don't leap to a diagrammer tool
— pen, paper, smartphone camera
— whiteboard -> smartphone camera
— tech solution smartpen -> Evernote
@EvilTester 38
39. Led to a Quick Planning Session
@EvilTester 39
40. Lessons Learned
— Modelling and Planning are related
— Planning is prioritising parts of model coverage
— Plan only as far as you need to to support the next
session
@EvilTester 40
41. Coverage Session
— Actual Testing
— Mindmap in advance to expand the plan
— kept notes as tested
— missing items from the 'plan'
— added them into my notes
— minimise additional exploration
— focus on 'coverage'
— follow later
@EvilTester 41
45. Lessons Learned
— Gathered more evidence, than during recon
— Expand plans as required
— FreePlane because of built-in scripting
— Plan will not be complete
— awareness of new opportunities
— follow (inscope), or defer
@EvilTester 45
46. (Technical) Exploratory Session
"Aim: Investigate the traffic mechanisms for creating
actions and changing state on context - how far can
we push this?"
@EvilTester 46
47. Technical Exploratory Session Difference From
Exploratory Session
— Observation at a technology level
— e.g. traffic, params, headers
— Manipulation at a technical level
— e.g. resend requests, DOM manipulation, bypass GUI
— Tools mandatory
— Technical observation -> new ideas
@EvilTester 47
50. Basic Process
— Work in GUI. Observe impact via HTTP (I can see
if JS or Server validation used)
— Interrogate HTTP Traffic, source of new ideas
— Manipulate (resend) HTTP Traffic
— Repeat HTTP results via DOM manipulation
— keep an eye on time and scope
@EvilTester 50
54. What I Found
— ZAProxy - session only saveable to Documents,
not sub folder
— Tracks no error message when incorrect status
used for context, value ignored, 200 OK
— suggests Backend 'trusts' the GUI
— investigate error handling
@EvilTester 54
55. What I Found
— Tracks XHR message responses which are error
messages are 200 status rather than a 4xx status
— 'interesting things about tracks' investigate later
— _method=put field differs from HTTP verb,
— HTML and JS response to XHR
— "default context" field
@EvilTester 55
56. What I Found
— API vs GUI mismatch? a forms API and a JSON
API
— limited observation - did not include DB tools, had
to use System exports to check data.
@EvilTester 56
57. Tooling Benefits
— dev tools for DOM Inspection and manipulation
— automatically record HTTP session
— proxy HTTP Observation, Inspection &
Manipulation
— easy replay for requests
— easier traffic inspection
@EvilTester 57
58. Lessons Learned
— Tools can impact testing - lost time to tool setup
— More information -> more risk of going off charter
or off time
— Test Ideas that are not possible without tooling
— e.g. duplicate params,
— re-order params
@EvilTester 58
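For example, "duplicate params" is trivial with a proxy but awkward by hand. A hypothetical helper shows the idea; the helper and the Rails-style parameter names are illustrative, not from the project:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class ParamTamper {

    // Build a query string with a chosen parameter duplicated -
    // the kind of request manipulation a proxy tool makes easy.
    public static String duplicateParam(Map<String, String> params, String name) {
        StringJoiner query = new StringJoiner("&");
        for (Map.Entry<String, String> e : params.entrySet()) {
            query.add(e.getKey() + "=" + e.getValue());
            if (e.getKey().equals(name)) {
                query.add(e.getKey() + "=" + e.getValue()); // send it twice
            }
        }
        return query.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("context[name]", "home");
        params.put("context[state]", "closed");
        System.out.println(duplicateParam(params, "context[state]"));
        // context[name]=home&context[state]=closed&context[state]=closed
    }
}
```

Sending such a request probes which copy of the parameter the server honours - a test idea that simply does not exist without tooling.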
59. Lessons Learned
— Automated record keeping is a 'backup'; I still
copy HTTP requests and URLs into my notes
@EvilTester 59
60. What Did I Not Do/Show? - Automating
— likely factored into "Story" Testing
— start tactical, make it work, refactor to
abstraction layers, become more strategic as
required
— use default tools
— get value out of them quickly
— new tools as required
@EvilTester 60
61. What would I do next (for current scope)?
— API
— repeat GUI and HTTP at API
— assumption - different routing code
— possibly different back end code
— DataBase
— observation and interrogation
@EvilTester 61
64. Testing
— 241 minutes (4 hours) spent Testing
— 93 minutes (1.5 hours) hands on
— 2-3+ hours additional admin if 'real' project
Much of Testing goes unnoticed and untracked.
Note taking
- key to gathering evidence
- and making testing visible.
@EvilTester 64
65. Tooling comes from:
— need
— observation
— interrogation
— manipulation
— admin, evidence
— technology
— can aid/hinder tooling
@EvilTester 65
66. We explore at all points in our testing
— The more we are focussed on coverage
— the more we constrain our exploration to the
Coverage.
— The more we are focussed on Exploration,
— the more our coverage has to be reverse
engineered from:
— our logs,
— evidence, and notes.
@EvilTester 66
67. About Alan Richardson
— EvilTester.com
— CompendiumDev.co.uk
— Talotics.com
— @EvilTester
books, youtube, online
training, patreon,
blog, etc.
@EvilTester 67
69. Section: Edited for length
The slides that follow were not used in the main
presentation due to timing constraints.
@EvilTester 69
70. Recon Session Findings
— open screens do not update with state unless
refreshed e.g. home: when open, and a context is
moved to a new state, it does not show the new
state - can this be used to abuse state? e.g. drag a
todo as a sub todo onto a todo on a closed context
that has not refreshed yet?
@EvilTester 70
71. If this was a real project: Planning
— add this as notes in a Jira task
— add this to kanban board
— discuss with team
— possibly expand further and write up, if not doing
immediately and model is complex
@EvilTester 71
72. Tooling Lessons Learned
— tooling support for observation, interrogation,
manipulation appropriate for scope and aim
i.e. I didn't use any extra tools for testing, but I did
for planning
@EvilTester 72
74. In-sprint testing
— constrained to 'work done' scope
— artificially constrained - no error handling, no
security etc.
— need to track these 'omissions' as coverage gaps
for later testing
— often testing 'bits' of stories
@EvilTester 74
75. Stories
— Complete chunk of work
— Sometimes span sprints
— 'not complete' but still requiring testing
— Acceptance Criteria testing needs to be revisited
— Automated Execution to cover criteria can help
— Going 'beyond' the story often needs to wait till
story is 'complete'
— Stories often tested in isolation - Automated
@EvilTester 75
76. System Testing is a vague term
— I'm meaning 'system as a whole'
— Stories in combination i.e. functional flows that
span stories
— Stories in different orders
— Going beyond the story Acceptance Criteria
— Often 'neglected' in Agile Projects
@EvilTester 76
79. Is it a real issue?
I cannot create an Action on a closed context. And I
cannot change the state of a Context to closed when
it has open Actions. But I can amend an Action to
assign it to a closed context.
— Don't know
— On a real project I would not have pursued it so
far, without finding that out first.
— It is the type of issue that would be found via
@EvilTester 79
80. If it is an issue, when would it be found?
— Structured/Traditional testing
— hopefully during a Q&A, clarification when
writing 'cases/conditions' from the
requirements
— but it might take days/weeks to get to that
point depending on process e.g.
— analyse all test conditions and ideas against
requirements and remove ambiguity - might
@EvilTester 80
81. If it is an issue, when would it be found?
— Agile
— hopefully during a story understanding or
development session
— if not then, would it be found from following
acceptance criteria?
— probably not, it would be found in the
exploratory testing 'around' the story
Note: questioning and clarifying during analysis is an
@EvilTester 81
82. Projects make decisions about defects
In this case it seems like a mismatch in states. The
constraint is too easy to bypass if it is an important
constraint e.g.
Instead of creating the Action on a closed context, I
create it on an Active context, then amend the
action to be on a closed context.
But, the impact in terms of system reliability,
@EvilTester 82
83. What Drives Tooling?
— Observation
— Interrogation
— Manipulation
— Admin
— process support, Evidence Gathering, Note
Taking, etc.
@EvilTester 83
84. Observation
— seeing in real time
— possibility to catch issues quickly
— supports monitoring e.g. alerts on errors in logs
@EvilTester 84
85. Interrogation
— deep dive 'after'
— could be 'seconds' after, but still 'after'
— more data than can be absorbed quickly
@EvilTester 85
86. Manipulation
— ability to change the system - defaults, static data
— repeat/amend requests
— use the API
@EvilTester 86
89. For Web
I can test 'blind' and trust the GUI or I can add
tooling
— Dev Tools
— Proxy Tools
Both allow observation and interrogation. Proxy tools
offer scope for easier longer term interrogation, and
manipulation for replay.
@EvilTester 89
90. For Technology
Find tools that allow:
— Observation
— Interrogation
— Manipulation
— Support scripting, extension, APIs
— Record Evidence
@EvilTester 90
91. What Did I Not Do/Show? - Admin
— Admin Sessions
— Raising Defects
— Use existing logs and evidence where possible.
— Use logs to help recreate
— Clarifying Questions
— Communication
— Uploading Logs to Time Tracker
@EvilTester 91
92. Bugs
Testing is not all about bugs.
— The most visible output is bugs.
— Many projects only care about bugs.
— Testing is a process of deliberate exploration and
coverage.
— Finding bugs is a side-effect of this process.
@EvilTester 92
93. What do I test?
— Requirements, Stories
— What a User wants to achieve
— Why they want it
— How it has been implemented
— Functional Description
— Technical Risk
— Decisions made during development
— Priorities and Agreed Risk Areas
@EvilTester 93
94. Taking a System and Beyond View
— assume that testing has been done on the stories
and all stories complete in the sprint and release.
— take a System view and ignore specific 'sprint',
'story' or 'release' requirements. View
functionality as a whole.
— will not review automated execution. Risk of
duplicated effort and scope.
@EvilTester 94
95. Q: How does a manager
add value on an Agile
Project?
@EvilTester 95
96. A: By looking at the project holistically.
— Teams are focussed on the functionality that they
are working on.
— Teams often don't have time to look beyond.
— Teams are often not encouraged to think beyond.
Process risk, can be used to drive testing. Look
where others don't.
@EvilTester 96