The document outlines 7 thinking tools to help with rapid testing:
1. Landscaper - Do a survey to understand the big picture
2. Persona map - Map out who uses what
3. Scope map - Map out user expectations
4. Interaction map - Map what may affect what
5. Environment map - Map test environments
6. Scenario creator - Create test scenarios
7. Dashboard - Stop, analyze, and refine
These tools are part of an immersive session testing approach that uses reconnaissance, exploration, and rest/recovery phases to facilitate rapid yet thorough scientific exploration. A related SaaS tool called doSmartQA will offer these tools; interested users can email the founder for more information.
The recording in https://eviltester.com/talks has:
- longer practice session recording
- live recording (the local recording is better quality)
- 8 bonus recordings with an extra hour of material
- will automation take over
- impact of buzzwords
- how to cope with trends
- contextual problem solving
- information about the references
- exercises
- behind the scenes look at how the talk was prepared and tools used
- transcripts
- subtitles
My aim here is to tell you that I learned to work with Agility rather than with the Agile Rituals and Definitions. And I learned the hard way to trust that working with Agility trumps Rituals and Definitions, because sticking to rituals and definitions led to rigidity rather than agility.
And then "What does testing look like when you adopt that mindset?"
In this presentation you will shortcut your learning on the topic of Agility, so you understand "What does testing look like when you adopt an Agility mindset?". Applying this mindset naturally leads to incorporating exploratory testing, technical testing, automated execution, end-to-end testing and risk. Adopting this mindset allows you to fit into any Agile Software Development project and create a customized testing approach that works.
Keynote at the internal Rabobank Testing Conference on Feb 15th 2018 in Utrecht.
https://www.compendiumdev.co.uk/page/rabobank201802
Your Automated Execution Does Not Have to be Flaky - Alan Richardson
This webinar is for anybody who has accepted 'flaky' test automation. Alan believes that to describe and accept your test execution as flaky is merely an excuse. In this webinar he will explore the myths of flakiness, so that you never use those excuses again!
Categories of common problems with suggested solutions.
For more information visit http://eviltester.com/flaky
Software Testing Terms Defined. Answering the FAQ "What is Regression Testing?"
- What is Regression Testing?
- How to do Regression Testing?
- Why do we do Regression Testing?
- How to re-think Regression Testing in terms of Risk?
Open-Source Software's Responsibility to Science - Joel Nothman
Presented as an invited talk to the Workshop for Natural Language Processing Open Source Software (NLP-OSS), co-located with ACL 2018 (Melbourne, Australia, July 2018).
Chat bots have been popping up everywhere for silly things, but what if they could help us make the world safer and more secure? The work of designing secure systems often involves iterating over designs with a team, but what if you don't have a team? What if you could iterate over system design and analysis in a chat window and have a design document with safety constraints as the end product? This talk will present an original chat bot that does just that.
What is Shift Left Testing? Do you need to use that term to improve your Software Testing and Development process? I don't think so.
- why I don't use the term Shift Left
- Explanation of what Shift Left means when people use it
- Explanation of what Shift Left might mean when people hear it
- How to Shift Left incorrectly
- How to improve your test process without using the phrase Shift Left.
Hire me for consultancy and buy my online books and training at:
- https://compendiumdev.co.uk
- http://eviltester.com
- http://seleniumsimplified.com
- http://javafortesters.com
Human-centric Software Development Tools - Gail Murphy
What characterizes research into software development tools? This talk explores how research can help us understand why some tools are effective and some are not, and can help drive the development of more effective tools for software developers.
Rita Bush - How to Use Games to Mitigate Cognitive Bias in Analysis - SeriousGamesAssoc
Rita Bush, Office of the Director of National Intelligence
This presentation was given at the 2017 Serious Play Conference, hosted by the George Mason University - Virginia Serious Play Institute.
This presentation describes a four-year, multi-team experimental research program designed to study the effectiveness of games as a training tool for teaching about and mitigating cognitive bias. Game designs were iterated over multiple development cycles, informed by the results of both playtesting and formal experiments. The research showed that it is possible to reduce biased decision-making both immediately and long-term.
Expert System Lecture Notes Chapters 1, 2, 3, 4, 5 - Dr. J. VijiPriya - VijiPriya Jeyamani
Chapter 1 Introduction to AI
Chapter 2 Introduction to Expert Systems
Chapter 3 Knowledge Representation
Chapter 4 Inference Methods and Reasoning
Chapter 5 Expert System Design and Pattern Matching
Introduction to usability and usability testing as a discipline, followed by how to do guerilla usability testing. Presented at Duke Tech Expo April 13, 2018 with co-author Lauren Hirsh, with content from a prior collaborative presentation of hers.
UX Activities for Pet Wearable iOS Mobile App - Nicole Warner
Mobile app product development for a pet wearable device. The product tracks fitness and health stats, and also includes a tracking service and remote access to a dog door.
Presentation for SMU UX certification class.
Haystack London - Search Quality Evaluation, Tools and Techniques - Andrea Gazzarini
Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. The talk describes the Rated Ranking Evaluator from a developer perspective. RRE is an open-source search quality evaluation tool that can be used to produce a set of deliverable reports and can be integrated into a continuous integration infrastructure.
Presentation for Harvard's ABCD Technology in Education group:
The Institute for Quantitative Social Science (IQSS) is a unique entity at Harvard - it combines research, software development, and specialized services to provide innovative solutions to research and scholarship problems at Harvard and beyond. I will talk about the software projects that IQSS is currently working on (Dataverse, Zelig, Consilience, and OpenScholar), including the research and development processes, the benefits provided to the Harvard community, and the impacts on research and scholarship.
A Preliminary Field Study of Game Programming on Mobile Devices - Tao Xie
Eric Anderson, Sihan Li, and Tao Xie. A Preliminary Field Study of Game Programming on Mobile Devices. Presented in Workshop on Programming for Mobile and Touch (PROMOTO 2013), Indianapolis, IN, October 2013.
This outlines FIVE key application scenarios of validation using doSmartQA, a smart probing assistant to test deeply and rapidly. It facilitates rapid testing in short sessions of Recon, Explore & Recoup, based on HyBIST - 'Hypothesis Based Immersive Session Testing', an intellectual practice of probing.
“Despite all the testing we do, field issues do not seem to abate. Sometimes it is a few serious issues that cause us to react intensely, sometimes it is a bunch of simple issues that make us consume bandwidth. Clearly the backlog is building up, with debts to be serviced, straining capacity to deliver new ideas.”
This is what I hear from senior engineering managers of product companies. How do you go about fixing this? Well, I have seen a flurry of activity to identify root cause(s) and address them. They help to set focus, but fizzle out.
Analysing the 'quality of technical debt' to understand the types of issues that leak enables practical actions, rather than jumping into the 'reason why' (root cause). Smart QA it is, to do failure analytics differently, to 'tighten the purse'.
Technical debt is indeed a serious drain on engineering capacity, forcing one to fix issues at the expense of building revenue yielding new features. Smart failure analytics visualises problems well, enabling clear actions to strengthen practice and reduce debt significantly.
If you are “choked by technical debt”, then you may find our SmartQA consulting (stagsoftware.com/smartqa) interesting, where we unshackle your practice so that you can exploit technology.
"We track a lot of metrics related to progress of development and quality every sprint, like backlogs, technical debt, velocity, task status etc. What is not very evident is the 'quality of movement' i.e. how well done, so that we create less debt as we move. How can I get a better insight of the quality of tests done and a more objective measure of product quality?"
Extrinsic metrics are easier to measure and give visibility of direction, progress, speed and external feel of product quality. Intrinsic metrics are deeper, harder to measure but can give greater insight into the quality of work. Measuring this requires a good structure and organisation of test artefacts. The benefit - a greater insight into effectiveness of outcome and therefore lower technical debt & greater acceleration, don't you think?
Metrics can be classified as measuring work progress, work quality, product quality and practice quality. Except for the first one, work progress, where we have many measures facilitated by project and test management tools, the others depend on test organisation and clarity about the types of issues to uncover. 'Quality Levels' based on HBT (Hypothesis Based Testing) provide a strong foundation for these, enabling you to assess potential test effectiveness, judge product quality objectively and fine-tune practice quality.
If you are keen on "insightful quality metrics", then you may find our SmartQA consulting (stagsoftware.com/smartqa) interesting, where we unshackle your practice so that you can see clearly and do far better.
“As we embrace faster release cycles, testing has become a bottleneck. Yes, we have embraced automation as the way forward. We have a huge regression suite and therefore a big backlog for automation, a tough balance to speed up and yet maintain the fast paced release rhythm. What can I do?”
Automated tests are great to monitor a system’s health. Rather than just use regression as the candidate for automation, key flows that signify the pulse of a system's health are superior, don’t you think? And, this won’t create a huge backlog for automation, right?
Most often I have seen automation embraced as the solution to speed up testing. Conceptually correct it is; the problem is: what makes it worth the while to automate? Automated tests have to be kept in sync with the product and are therefore not a one-time effort.
Choosing the right ones implies they need to be at the level of user flows, and be clear indicators of health. Unless test scenarios are well structured and organised, choosing the right ones will turn out to be difficult, and ultimately weigh you down. It then becomes a pursuit of catching up with automation rather than making it work for you.
The goal is not 100% automation; it really is no leakage of defects. Automated tests are really 'checks' that assess key paths for good health (correctness), while intelligent human tests are focused on finding issues (robustness). A harmonious balance between these two enables clean code to be delivered without being weighed down by automation.
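The idea of automating a small set of key flows as health "checks", rather than the whole regression suite, can be sketched as follows. This is a minimal illustration, not the author's tooling; the flow names and check functions are hypothetical stubs.

```python
def check_login_flow():
    # In a real suite this would drive the application end to end;
    # here the outcome is stubbed so the sketch is self-contained.
    return True

def check_checkout_flow():
    return True

# Key user flows that signify the pulse of the system's health.
HEALTH_CHECKS = {
    "login": check_login_flow,
    "checkout": check_checkout_flow,
}

def system_pulse():
    """Run every key-flow check and report overall health."""
    results = {name: check() for name, check in HEALTH_CHECKS.items()}
    return all(results.values()), results
```

Run on every build, such a pulse suite stays small and in sync with the product, leaving deep issue-hunting to exploratory human testing.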
If you are “weighed down by automation“, then you may find our SmartQA consulting (stagsoftware.com/smartqa) interesting, where we unshackle your practice so that you can exploit technology.
Inspired by how the world is handling Covid19, this slideshare lists actions taken and criteria met to contain the pandemic and correlate this to how we can deliver clean code for large scale software systems. This article focuses on the process flow and criteria for delivering clean code.
Agile and automation have been great enablers to doing tests faster. How we can accelerate further to accomplish more by doing less is the objective of this webinar.
“Left-shifting” by smart decomposition of dev testing aided by smart lightweight aids to perform rapid dev testing will be the takeaways of this webinar.
Three ideas to regression test smarter and outline THREE AIDS to do this.
AID #1: Fault propagation analyser - figure out what-to-retest by doing a smarter impact analysis, using a scientific approach to understanding fault propagation due to change.
AID #2: Automation analyser - ensure scenarios are fit-to-automate so that they are easily scriptable and easily maintainable.
AID #3: Yield analyser - figure out how much not to regress by analysing defect yields over time to understand what parts of the system have been hardened.
Well, automation is an obvious choice; ensure that the scenarios are "fit enough for automation" so that you don't end up spending much effort keeping the scripts in sync with every change.
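AID #3, the yield analyser, can be sketched as a small computation. This is an illustrative reading of the idea, not the actual tool: the function name, the yield representation (defects found per tests run, per cycle) and the threshold are assumptions.

```python
def hardened_components(defect_yields, threshold=0.05, window=3):
    """Flag components whose recent defect yield has fallen below a threshold.

    defect_yields maps a component name to a list of per-cycle yields
    (defects found / tests run). A component whose average yield over the
    last `window` cycles is below `threshold` is considered 'hardened',
    i.e. a candidate to regress less.
    """
    hardened = []
    for component, yields in defect_yields.items():
        recent = yields[-window:]
        if recent and sum(recent) / len(recent) < threshold:
            hardened.append(component)
    return sorted(hardened)
```

For example, a component whose yield has trailed off to near zero over the last three cycles would be flagged, while one still producing defects every cycle would stay in the regression scope.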
Drawing inspiration from Atul Gawande's book "The Checklist Manifesto", T Ashok, CEO, STAG Software, explores how we can exploit the power of checklists to deliver good quality code.
This is the webinar recording on the topic 'Test Case Immunity' - optimize testing. In this webinar we convey an interesting idea of measuring "test case immunity" to logically assess which test cases to drop so that we can 'do none'.
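One way to read "test case immunity" is as the fraction of past runs in which a test found nothing; tests that have become fully immune are candidates to drop. The sketch below is a hypothetical interpretation of that measure, not the webinar's actual formula.

```python
def immunity(run_history):
    """Fraction of runs in which the test found nothing.

    run_history is a list of booleans, True meaning the test caught a
    defect on that run. A test that never catches anything has
    immunity 1.0 and is a candidate to drop.
    """
    if not run_history:
        return 1.0
    return 1 - sum(run_history) / len(run_history)

def drop_candidates(tests, cutoff=0.95):
    """Return test names whose immunity is at or above the cutoff."""
    return sorted(name for name, hist in tests.items()
                  if immunity(hist) >= cutoff)
```

A real measure would likely also weight recency and the risk of the covered area, but even this simple ratio makes the drop decision explicit rather than gut-feel.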
This slide share contains the webinar, slides and the transcribed audio. The discussion outlines the entities to be considered for design, level based design, the optimal approach (think & prove/execute & evaluate) and finally design techniques.
Part #2 of a tri-webinar series consisting of three webinars, commencing with 'How-to question to understand a user story and identify gaps', moving on to 'How-to set a clear baseline' to ensure an effective strategy, and finally culminating with 'How-to design test scenarios/cases' using a scientific and disciplined approach.
Part #1 of a tri-webinar series consisting of three webinars, commencing with 'How-to question to understand a user story and identify gaps', moving on to 'How-to set a clear baseline' to ensure an effective strategy, and finally culminating with 'How-to design test scenarios/cases' using a scientific and disciplined approach.
"Language shapes the way you think" was the topic of the talk presented by T Ashok, CEO STAG Software, to a group of test professionals at a Pune-based IT services and solutions provider on June 16, 2014.
HBT Innovation Series webinar presented by T Ashok, Architect-HBT and Founder & CEO, STAG Software on the topic - Deliver Superior Outcomes Using HBT Visualization Tool - on Feb 26, 2014.
This presentation on Hypothesis Based Testing (HBT) was delivered by Mr Satvik Kini, Associate Quality Manager, Suite Test Centre, SAP Labs India Pvt. Ltd at STeP-IN Forum webinar on Dec 19, 2013.
This presentation was part of the talk delivered by T Ashok Founder & CEO STAG Software at the HSTC 2013: "Think Testing" Conference on Nov 21 & 22 at Hyderabad.
STAG Software presented a webinar on Aug 21, 2013 on the topic - Improving Defect Yield - a three step approach". The webinar was hosted by T Ashok, Founder & CEO, STAG Software and Architect of HBT.
STAG's assessment of test case potency for a cloud-based trading software helps reduce regression test cases by 28% and regression cycle time by 40% for an award-winning B2B e-commerce company.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Enhancing Project Management Efficiency: Leveraging AI Tools like ChatGPT - Jay Das
With the advent of artificial intelligence (AI) tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT and Bard, organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... - Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge of how to organize and improve your code review process.
Navigating the Metaverse: A Journey into Virtual Evolution - Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution | ... - informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus... - Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Quarkus Hidden and Forbidden Extensions - Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR - Tier1 app
Even though at surface level 'java.lang.OutOfMemoryError' appears as one single error, underneath there are 9 types of OutOfMemoryError. Each type has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
May Marketo Masterclass, London MUG May 22 2024 - Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
In current times, speed is everything, right?
What can we do to test quickly?
Use tools. Automate. Right?
Wait a minute! This is about execution, right?
What about prior activities?
to answer, let us ask the basic question:
what is testing after all?
how can we do scientific exploration rapidly?
by using tools that help us think better and do faster
we all know about a variety of test tools that help us automate
system setup, execution, static analysis, reporting, management…
at different phases of the lifecycle, to “do faster”
let’s examine tools that help us “think better”
what is the first thing you do before you embark on an exploration?
do a survey - “reconnaissance”
Tool #1: Landscaper
do a survey, understand the big picture

persona: who are the end users?
entities: what do you want to test? (components, features, requirements, flows)
attributes: what do you want to test for?
environment: where do you want to test on?

e.g. eLearning system:
Persona: Administrator, Student, Supervisor
Feature: Create User; Upload content
Requirement: Go through lessons in courses; Take final assessment
Flow: Complete course, by taking it and doing the final assessment
Migration: All course info of 2.5, 2.7, 3.0 to be ‘migrate-able’
Performance: Video streaming should commence in a max of 2s with 500 concurrent users.
Environment: OS (Mac, Windows, Linux); Browser (Firefox, Chrome, IE11); Database (Mongo, MySQL, PostgreSQL); Mobile OS (Android, iOS); Device (Laptop, Tablet, Mobile)
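One way to hold the survey output is a plain data structure you can query and count. A minimal sketch in Python, using the eLearning examples above; the shape of the structure and the `survey_summary` helper are illustrative, not part of the method itself:

```python
# Landscaper survey captured as plain dictionaries: one key per question.
# Values are the eLearning examples from the slide (illustrative subset).
landscape = {
    "persona": ["Administrator", "Student", "Supervisor"],
    "entities": {
        "features": ["Create User", "Upload content"],
        "requirements": ["Go through lessons in courses", "Take final assessment"],
        "flows": ["Complete course by taking it and doing the final assessment"],
    },
    "attributes": {
        "Migration": "All course info of 2.5, 2.7, 3.0 to be migrate-able",
        "Performance": "Video streaming commences within 2s at 500 concurrent users",
    },
    "environment": {
        "OS": ["Mac", "Windows", "Linux"],
        "Browser": ["Firefox", "Chrome", "IE11"],
    },
}

def survey_summary(ls):
    """Count what the survey uncovered, per question."""
    return {
        "personas": len(ls["persona"]),
        "entities": sum(len(v) for v in ls["entities"].values()),
        "attributes": len(ls["attributes"]),
        "environments": sum(len(v) for v in ls["environment"].values()),
    }

print(survey_summary(landscape))
```

A structure like this makes the later maps cheap to build, since each map is a cross of two of these keys.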
now that you have done the survey, what next?
create maps, to guide you and chalk out routes
Tool #2: Persona map
map out who uses what

Cross the personas (who are the end users) with the entities (what do you want to test: components, features, requirements, flows) to see who uses what.
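A persona map is just a persona-to-entities mapping that can be inverted to answer "who uses this?". A small sketch with made-up eLearning data; the `who_uses` helper is an illustration, not from the deck:

```python
# Persona map sketch: persona -> entities they use (illustrative data).
persona_map = {
    "Administrator": ["Create User", "Upload content"],
    "Student": ["Go through lessons in courses", "Take final assessment"],
    "Supervisor": ["Go through lessons in courses"],
}

def who_uses(entity):
    """Invert the map: which personas touch a given entity?"""
    return sorted(p for p, ents in persona_map.items() if entity in ents)

print(who_uses("Go through lessons in courses"))
```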
Tool #3: Scope map
map out the user’s expectations

Cross the entities (what do you want to test: components, features, requirements, flows) with the attributes (what do you want to test for), i.e. what-to-test-for-what.

e.g. attributes: Migration, Security, Performance, Load
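The what-to-test-for-what cross can be sketched as an entity-to-attributes mapping; entity names and pairings below are invented for illustration:

```python
# Scope map sketch: entity -> attributes to test it for (illustrative).
scope_map = {
    "Upload content": ["Security", "Load"],
    "Video streaming": ["Performance", "Load"],
    "Course data": ["Migration"],
}

def entities_needing(attribute):
    """Which entities must be tested for a given attribute?"""
    return sorted(e for e, attrs in scope_map.items() if attribute in attrs)

print(entities_needing("Load"))
```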
Tool #4: Interaction map
map out what may affect what, to intelligently regress

Over the entities (components, features, requirements, flows), map the interactions,
e.g. F1 —> F2, F1 —> Flow3
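An interaction map is a directed graph, and "intelligently regress" amounts to taking everything transitively reachable from what changed. A minimal sketch; the extra F2 edge and the traversal code are illustrative additions:

```python
# Interaction map sketch: directed edges "X may affect Y", used to pick
# a regression set when an entity changes.
affects = {
    "F1": ["F2", "Flow3"],   # from the slide: F1 -> F2, F1 -> Flow3
    "F2": ["Flow3"],         # illustrative extra edge
}

def regression_set(changed):
    """Everything transitively reachable from the changed entity."""
    seen, stack = set(), [changed]
    while stack:
        node = stack.pop()
        for nxt in affects.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return sorted(seen)

print(regression_set("F1"))
```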
Tool #5: Environment map
map out the environments to test on (where do you want to test on?)
Env #1, Env #2, …
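Candidate environments can be enumerated as the cross product of the dimensions found by the Landscaper survey. A sketch over a subset of the slide's dimensions; note the full product usually needs pruning (e.g. pairwise selection), which this sketch does not attempt:

```python
import itertools

# Environment map sketch: enumerate candidate environments from two of
# the dimensions in the eLearning example (illustrative subset).
dimensions = {
    "OS": ["Mac", "Windows", "Linux"],
    "Browser": ["Firefox", "Chrome", "IE11"],
}

# Cross product of all dimension values -> one dict per environment.
envs = [dict(zip(dimensions, combo))
        for combo in itertools.product(*dimensions.values())]

print(len(envs))  # 3 OSes x 3 browsers = 9 candidate environments
```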
now that we have the maps, what do we do next?
chalk out the routes, i.e. “test design”: come up with scenarios.
Now you are ready to explore.
Tool #6: Scenario creator
create test scenarios

Robust Test Design powered by HBT (Hypothesis Based Testing): create component, feature, requirement and flow test scenarios level-wise, across nine levels:
L1: Input correctness
L2: Interface correctness
L3: Structural integrity
L4: Behaviour correctness
L5: Flow correctness
L6: Environment correctness
L7: Attribute correctness
L8: Deployment correctness
L9: End user value

use techniques, use experience, be creative, learn and revise, use smart checklists.

e.g.
(L4) Functional behaviour: Close an unresolved question; Re-open a closed question; Reply to an open unresolved question; Edit reply for a question
(L1) Input validation: Null inputs; Beyond boundaries; Duplicate values
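Level-wise scenario creation with a smart checklist can be sketched as pairing an entity with each (level, checklist item). The checklist entries come from the slide's example; the pairing logic and the "Q&A feature" entity are illustrative, not HBT itself:

```python
# Scenario creator sketch: one scenario stub per (level, checklist item)
# for a given entity. Checklist items are from the slide's example.
checklist = {
    "L1: Input correctness": ["Null inputs", "Beyond boundaries",
                              "Duplicate values"],
    "L4: Behaviour correctness": ["Close an unresolved question",
                                  "Re-open a closed question"],
}

def create_scenarios(entity, checklist):
    """Generate scenario stubs level-wise from a checklist."""
    return [f"{entity} :: {level} :: {item}"
            for level, items in checklist.items()
            for item in items]

scenarios = create_scenarios("Q&A feature", checklist)
print(len(scenarios))
```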
Tool #7: Dashboard
stop, analyse and refine

adequacy: are the scenarios good enough?
Inputs: 1. Attributes considered? 2. Environments considered? 3. Scenarios at all levels? 4. +/- distribution ok? 5. All personas covered?
Use Maps+Routes

progress: are we on track?
Activities (plan vs. actual): 1. wrt attributes 2. wrt attributes 3. wrt entities 4. wrt interactions 5. wrt persona
Use Maps+Routes+Exec Info

quality: how good is the system?
Outcomes: 1. wrt attributes 2. wrt attributes 3. wrt entities 4. wrt interactions 5. wrt persona
Use Maps+Routes+Exec Info
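The three dashboard views can be sketched as simple functions over the session's data. The metric definitions below (existence of scenarios, executed/planned, passed/executed) are illustrative stand-ins, not the HBT dashboard's actual measures:

```python
# Dashboard sketch: adequacy, progress and quality as simple metrics
# over planned/executed/passed scenario lists (illustrative definitions).
def dashboard(planned, executed, passed):
    """adequacy: do scenarios exist; progress: executed/planned;
    quality: passed/executed (both as fractions)."""
    return {
        "adequacy": len(planned) > 0,
        "progress": len(executed) / len(planned) if planned else 0.0,
        "quality": len(passed) / len(executed) if executed else 0.0,
    }

planned = ["S1", "S2", "S3", "S4"]
executed = ["S1", "S2"]
passed = ["S1"]
print(dashboard(planned, executed, passed))
```

Stopping to compute even crude numbers like these is what lets the session be analysed and refined rather than run open-loop.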
Recap:
Reconnaissance: do survey, make maps
Tool #1: Landscaper - do survey, understand the big picture
Tool #2: Persona map - map out who uses what
Tool #3: Scope map - map out user’s expectations
Tool #4: Interaction map - map out what may affect what
Tool #5: Environment map - map out environments to test on
Exploration: observe, search, learn, refine
Tool #6: Scenario creator - create test scenarios
Rest and recover: stop, analyse and refine
Tool #7: Dashboard - stop, analyse and refine
by using a session-based approach based on Hypothesis Based Testing:
IMMERSIVE SESSION TESTING “IST”, from STAG Software
consists of THREE phases: Reconnaissance, Exploration, Rest & Recover
done in sessions of 60-90 mins (S1, S2, S3, S4, S5, …)
the tools outlined here, and more, will shortly be available as a SaaS tool, doSmartQA.
we are keen to trial this with select users; email me if you are interested:
‘ash at stagsoftware dot com’
testing is scientific exploration
7 Thinking Tools to Test Rapidly
SmartQA Series Webinar
doSmartQA SaaS tool: email me if you are interested, at ‘ash at stagsoftware dot com’