Presented at https://www.onlinetestconf.com/program-spring-otc-2020/
Sometimes you’re asked to start testing in a context that is not ideal: you’ve only just joined the project, the test environment is broken, the product is migrating to a new stack, the developer has left, no-one seems quite sure what’s being done or why, and there is not much time.
Knowing where to begin and what to focus on can be difficult and so in this talk I’ll describe how I try to meet that challenge.
I’ll share a definition of testing which helps me to navigate uncertainty across contexts and decide on a starting point. I’ll catalogue tools that I use regularly such as conversation, modelling, and drawing; the rule of three, heuristics, and background knowledge; mission-setting, hypothesis generation, and comparison. I’ll show how they’ve helped me in my testing, and how I iterate over different approaches regularly to focus my testing.
The takeaways from this talk will be a distillation of hard-won, hands-on experience that has given me
* an expansive, iterative view of testing
* a comprehensive catalogue of testing tools
* the confidence to start testing anything from anywhere
When considering how automation can be useful in testing, many people think only about identifying differences from some expected behaviour, for example with unit tests.
Much of my testing is about exploration, about learning, about finding things that might matter that we haven't already thought about. In this talk, I'll discuss how I use automation to help me to do that. In particular I'll demo some ways in which I ask questions of the product and use automation to find expedient ways to get answers.
Testers are said to be advocates for the customer, but when do most testers come face to face with a real-life customer? I don’t mean internal stakeholders, but the people at the sharp end of things, the ones actually using the software. Rarely, I find. Which is why it can be a SHOCK! to be asked to participate in a customer support call. It’s an unusual situation: there’s pressure, the customer is watching, something needs fixing, and there’s a deadline ... of yesterday. Gulp. But don’t worry! You’re on the call because a colleague values your input. Perhaps you’re great at analysis, or lateral thinking, or problem-solving. Maybe you have deep knowledge of your product, or the whole ecosystem, or the historical angle. You could be there for questions, or answers, or honesty when you don’t have either. These kinds of tools from your testing toolbox are valuable on support calls, and in this talk I’ll say how and why. I’ll also give an intro to customer support, talk about how to prepare for calls, what to do during and after them, and, importantly, what you can take away personally, for your product, and for your team.
From SoftTest 2018, http://softtest.ie/wp-content/uploads/2018/08/SoftTest_Programme_2018_20180829.pdf
Elisabeth Hendrickson’s book, Explore It!, contains this definition: “Tested = Checked + Explored”. When I read it, I was fascinated. “What does that mean?”, I asked myself, “what does it /really/ mean?”
This talk described the journey I undertook to understand it, and other definitions of testing that I found along the way, and then to come up with a new definition that filled the gaps I saw in the others, without losing the aspects of them that I felt were valid and useful.
Essentially, I formalised what testing is for me. And, now that I have my definition, I can ask myself in any given situation whether my actions are consistent with the way I believe I want to behave.
Testing All the Way Down, and Other Directions - James Thomas
Slides from my talk at CEWT #3, http://qahiccupps.blogspot.co.uk/2016/11/testing-all-way-down-and-other.html
The idea that testing is or can be a recursive activity - or even fractal - has some currency. In that view, a test or experiment generates some data, which suggests new experiments, which generate some data, which suggest new experiments and so on. The kinds of activities being done at each stage will be self-similar and testing is used as a kind of microscope to focus in on some aspect of the system under test. Testing all the way down.
In this talk, I'll instead view testing as a number of different instruments that can be used in an arbitrary number of dimensions. Further, I'll suggest that testing can be applied not only to a system, but to descriptions of that system, to models of that system, to abstractions of that system, to a system which is testing that system, and to a system which is testing the system which is testing that system. And so on. It's testing all the way round.
I'll finish by proposing a definition of testing that I think might capture this wide applicability.
Break to build - the mindset of the modern day tester - Viktor Slavchev
I have spent the last couple of years performing, talking, writing, and listening about software testing.
But what is software testing? I am told my job is to “break software”. But why break it? It looks good! I like the programmers; they are my friends. And, as Michael Bolton says, “We don’t break software, it was already broken when we got it”.
I certainly don’t break software for a living, but I do something way better and much more satisfying: I break clichés about software testing.
So, my job as your guide on your journey into testing will be to break some clichés from the past in order to build the mindset of the modern tester.
Automation vs. intelligence - "follow me if you want to live" - Viktor Slavchev
Have you ever heard the story that your job is automatable, that all the human testers will be replaced by machines or automated tests and you will lose your job? Or, even worse, that machines and artificial intelligence will take over our craft and our lives and we will be totally useless? Do you buy this? Are you afraid?
“Come with me, if you want to live” – this was the famous line that many members of the Human resistance in the Terminator franchise used when offering their help in the war against Skynet.
So, come with me (and John Connor), and join the testing resistance to fight on the side of intellect against the evil machine army. I am willing to contest the “I” part of AI by focusing on a few key topics:
Can we translate testing into machine language? Polymorphic and mimeomorphic actions – what are these?
Do we really know what the benefits of human testing are? What are human testers irreplaceable for?
Do we really have empirical evidence that computers are capable of doing professional testing? Do we have evidence of “intelligence” at all?
Last year at RTC ‘17 I was asked – “Is AI the answer to all test automation problems?”. My answer is “No, it’s not!”. And this talk is my explanation why.
A/B Testing and the Infinite Monkey Theory - UseItBetter
Surveys show that, on average, only 1 out of 7 A/B tests run by e-commerce companies turns out to be successful. Lukasz Twardowski, the CEO of UseItBetter, explains how some of the most successful online businesses master this process, turning it into an iterative, evidence-led, at-scale experimentation programme.
Building a Testing Playbook by Andrew Richardson - Delphic Digital
A testing playbook combines the best practices of testing and optimization with communication strategies, education, and gaining buy-in from your client. Andrew Richardson, Senior Director of Analytics at Delphic Digital, provides a peek behind the curtain to reveal how Delphic prioritizes tests, recruits/trains/staffs-up for a testing practice, and moved from A/B to multivariate testing. Come with an open mind; walk away with a Testing Playbook Template you can put to use at once.
Software Testing’s Future—According to Lee Copeland - TechWell
The original IEEE 829 Test Documentation standard is thirty years old this year. Boris Beizer’s first book on software testing, Software Testing Techniques, also passed thirty. Testing Computer Software, the best-selling book on software testing, is more than twenty five. During the past three decades, hardware platforms have evolved from mainframes to minis to desktops to laptops to smartphones to tablets. Development paradigms have shifted from waterfall to agile. Consumers expect more functionality, demand higher quality, and are less loyal to brands. The world has changed dramatically—and testing must change to match it. Testing processes that helped us succeed in the past may prevent our success in the future. Lee Copeland shares his insights into the future of testing, including his views in the areas of technology, organization, test processes, test plans, and automation. Join Lee for a thought-provoking look at creating a better testing future.
Agile Testers: Becoming a key asset for your team - gojkoadzic
Slides for a presentation titled "Agile Testers: Becoming a Key Asset for your team" given at the Next Generation Testing Executive Briefing on 19 May 2010 in London
Join Julian Harty as he discusses how to use Polychrome Testing and emotions to significantly improve how you communicate and how you test software in future.
Things Could Get Worse: Ideas About Regression Testing - TechWell
Michael Bolton, DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.
Check This - Test Automation, A Development Manager’s View - Stephen Janaway
Test automation belongs to the testers and as testers we care about quality more than the rest of the development team do, right? It’s easy to think this. I know, I’ve been there, as a Tester and a Test Manager.
But now I manage the whole development team and can see how the whole team should use test automation, and how much more efficient we could get as a team when we all become responsible for quality.
Demand for software testers has grown manifold in recent years. Here is a list of reasons why it is a great career option for young IT enthusiasts.
What I Learned About Software Marketing and Growth After 2 Years in Venture C... - Kyle Lacy
This deck is being used for a presentation at High Alpha's Marketing Forum in Indianapolis on 1/2/17. I've spent the past two years working at OpenView, a venture capital firm based in Boston.
The lessons I learned watching the partners, my colleagues, portfolio CEOs, countless pitch meetings, and hundreds of conversations between leadership teams were invaluable.
Here are a few.
These are the slides from the keynote I did at Romanian Testing Conference 2018.
Please feel free to ask me if you have any questions since most of the slides are images.
Testing the unknown: the art and science of working with hypothesis - Ardita Karaj
Testing what we know, or have a clear understanding of, is relatively straightforward, as is making decisions based on the expected result. But today’s world presents us with the Unknown and the Ambiguous, which can only be approached by hypothesizing and experimenting - a lot! This requires intentional thinking, and a different strategy for observing in context.
This session will uncover how testers are helping their teams and product owners, by basing their testing on the science behind creating hypotheses and running experiments. A testing mindset and probing the context around use cases are some of the most valuable competencies testers bring to the team in order to enable decisions based on data.
Why You Don't Want to be a Tester; an agile discussion - Brett Tramposh
"Why You Don't Want to be a Tester" focuses on a common discussion we are having among Quality Assurance and Software Testing professionals, especially as it relates to operating as part of an agile team.
In a recent discussion at the Software QA User Group in Portland Oregon, Brett used these slides to foster conversation and to promote the idea that each person should be proactive in their approach to not allow their role to simply become a tester. Solid QA practices are needed more today than ever as we move fast and raise the bar on quality and continually add to our tool belt!
Testing is Not a 9 to 5 Job - talk by industry executive Mike Lyles - Applitools
** FULL WEBINAR RECORDING: https://youtu.be/IC6ul_-PLj8 **
Find your hire power: Learn what managers look for when hiring (and firing...) testers.
Being an expert tester is no different. While the art and craft of testing and being a thinking tester is something that is built within you, simply going to work every day and being a tester is not always enough.
Each of us can become “gold medal testers” by practicing, studying, refining our skills, and building our craft.
In this webinar, we will evaluate extracurricular activities and practices that will enable you to grow from a good tester to a great tester.
Listen to this webinar, and enjoy these key takeaways:
** Inputs from testing experts on how they improve their skills
** Suggestions for online training and materials, which should be studied
** How to leverage social media to interact with the testing community
** Contributions you can make to the testing community to build your name as a leading test engineer
Test Estimation Hacks: Tips, Tricks and Tools Webinar - QASymphony
In this webinar, Matt Heusser explains not only how to deal with tough questions, but how to prepare and defend estimates that stand up to scrutiny. The conversation includes six estimating models - comparison, functional decomposition, timeboxed, and prediction, along with the Guru Method and, perhaps, a little on #NoEstimates.
Don’t miss this opportunity to learn:
* The common mistakes in software test estimation
* How testing differs from linear tasks like development (and how to talk about it)
* What goes wrong in discussions about schedule
* Ways to estimate for test: by comparison, functional decomposition, timeboxing, prediction, and the Guru Method
* How to recognize when you are actually in a test negotiation, not a test estimation ... and what to do about it
Matt Heusser discusses these topics and much, much more! Watch now: http://pi.qasymphony.com/test-estimation-hacks-webinar-lp057
Bad Experiments: The 18 Ways Your A/B Tests Are Going Wrong - Martijn Scheijbeler
Learn from the 18+ mistakes that I’ve made while running experimentation programs & how to overcome them. I presented this deck at Growth Marketing Conference and LAUNCH Scale, both in San Francisco.
A test strategy brings teams together to establish a foundational set of quality principles for their apps. Teams vary, and products and applications vary, so how do we build a testing strategy, tailored to your needs, that helps teams build quality products?
In this session, we will look at the role of automation within a test strategy. I’ll provide heuristics to help guide you in identifying how many tests you need and what types to automate. Then we will investigate the personas involved in automation projects: the creators, executors, and consumers of automation, to help you tune your strategy to fit your team.
Finally, I’ll share a model that teams can use to guide them in crafting and discussing their own test strategy: a strategy that sets a foundation for quality, identifies where the app needs support, and shows how that risk is being managed through the software development lifecycle.
Marketing is Dead. Only Moments Matter - UserTesting Roadshow - 10/5/2016 - Kyle Lacy
Another adjustment to my Marketing is Dead deck which covers how to evolve in the digital environment. The only thing that matters or should matter to digital marketing is the experience the consumer is having with your brand.
How to Increase Your Testing Success by Combining Qualitative and Quantitativ... - Optimizely
Hiten Shah, President and Co-Founder, KISSmetrics and Crazy Egg
The majority of A/B tests that you run end up failing. Wouldn't it be great if you could increase your chance of success?
In this session Hiten Shah, President and Co-Founder of KISSmetrics and Crazy Egg provides a framework and examples of how to increase your success rate by using both qualitative and quantitative tactics. Learn how to design great experiments by understanding more about your visitors, users and customers.
What is A/B testing for marketing? A scientific method of proof.
Where do you start testing? The don’ts:
* Don’t just guess.
* Don’t sub-optimize: button colors, small copy changes, small layout changes, fonts, font size.
* Watch out for buzz: don’t just test what the gurus say; what works for others may not work for you.
* Avoid premature testing: testing without traffic, or before you know what and why you are testing.
Where do you start testing? The dos:
* Do your homework. You have to put in the work to get great results: learn as much as you can about A/B testing, study the great people out there, and look at their process and insights.
A framework for success: the basis of a successful testing program.
1. Research: know what to test and see the greatest improvements. Dive into your data, find your biggest opportunities, and investigate each opportunity to understand it.
2. Form a hypothesis: by doing A, B will happen, because of C.
3. Build the test.
4. Launch the test: control vs. variant. (Example result: a +24.01% increase in registrations at 100% significance.)
5. Document and share your findings. Documenting all your tests and results increases company transparency, helps others know where tests are running, makes it easier to call tests, and makes learnings available to everyone in the company.
6. Repeat the process as often as possible.
“Only 1 out of 8 A/B tests have driven significant change.” - Noah Kagan, AppSumo*
“The goal of a test is to get a learning, not a lift. With enough learnings, you can get the real lift.” - Dr. Flint McGlaughlin, MECLABS
Rules of thumb for testing - things to remember when we are testing:
* Minimum 2,000 visitors
* Minimum a few hundred conversions
* At least one full cycle, > 2 weeks
* > 99% statistical significance
* Keep a backlog of test hypotheses
* Build the next tests while the current one is running
*More info at http://kiss.ly/1vF4wUc and http://kiss.ly/1Q4vTm0
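The significance threshold in these rules of thumb is something you can check mechanically. As a minimal sketch (not tied to any particular testing platform), here is a standard two-proportion z-test in Python; the visitor and conversion counts are hypothetical, chosen to be in line with the rules of thumb above and to mirror the +24% registration example.

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test.

    conv_a/n_a: conversions and visitors for the control;
    conv_b/n_b: the same for the variant.
    Returns (lift, confidence), where confidence is the one-sided
    probability that the variant genuinely beats the control.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided confidence via the standard normal CDF, Phi(z).
    confidence = 0.5 * (1 + erf(z / sqrt(2)))
    lift = (p_b - p_a) / p_a
    return lift, confidence

# Hypothetical numbers: a few thousand visitors and a few hundred
# conversions per arm, as the rules of thumb suggest.
lift, conf = ab_significance(300, 2500, 372, 2500)
print(f"lift: {lift:+.2%}, confidence: {conf:.1%}")
# -> lift: +24.00%, confidence: 99.9%
```

Note that "100% significance" on a results screen is really a rounded confidence figure; a test like this one caps out just below 100%, which is one reason to insist on the minimum sample sizes above.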
Tooltips and burgers: how I like my testing burger - tooltips for your testing framework.
Tool, company, and why I like it:
* Analytics - Kissmetrics: shows me my actual customers and their behavior.
* Surveys - Qualaroo and/or SurveyMonkey: easy to use, with deep qualitative insights.
* Project management - Trello: manage my time and communicate with collaborators.
* Company wiki - Confluence: document initial thoughts for everyone.
* Testing platform* - Optimizely, Pardot, AdWords: easily build tests and get data.
* Analytics - Kissmetrics: lets me confirm and verify down my funnel.
* Company wiki - Confluence: documentation and learnings for everyone.
*Includes actual testing platforms and tools that have testing capabilities.
Not everyone likes their burger the same way. How do you take yours?
Trust the process. Share your learnings. Stack your results. A/B testing for success.
Optimizely & Photobox - DON'T PANIC: The No-Confusion Experimentation Startup... - Optimizely
How do you know where to start with experimentation? What if you don’t have enough information, or simply too much to decide where to begin and where to invest your time/effort/money?
In this breakout session we will cover how to cut the BS by treating experimentation as an “internal services startup”, where the customers are the teams in your business: commercial, trading, marketing, product, SEO etc.
You wouldn’t start a startup by hiring a bunch of people without a tool or an idea to work on, or buy an office or an expensive work management solution for a startup of 3 people without developing a product and taking it to market first. So why treat experimentation that way?
Three related coverage risks stood out when I began to test a chatbot API for a medical symptom checker. With an infinite space of possible chats, how could we:
• look for unintended consequences of changes as we built the API
• discover some of the edge and corner case bugs that would surely exist
• exercise the API to any significant extent after each iteration
To help mitigate these risks I built a client which would randomly walk through dialogs, unattended, and report on what it had found.
In this talk, I'll describe how I implemented that client by iteratively adding functionality that I hoped would facilitate my exploration of changes and fixes to the emerging API. I'll give examples of features that worked well (such as configuration of probabilities for different types of answers) and those that did not (such as checking for specific classes of medical outcome), explain how I built on top of the client to make a load testing tool, and think about what I'd do differently next time.
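To give a flavour of what such a random-walk client can look like: the sketch below is illustrative only. The API shape (start_chat/reply) and the answer types are hypothetical, and a stub stands in for the real symptom-checker service; the client described in the talk will differ. It shows the two ideas mentioned above: configurable probabilities for different types of answers, and an unattended walk that logs what it did.

```python
import random

# Configurable probabilities for the different answer types
# (hypothetical names; a real client would use the API's own vocabulary).
ANSWER_WEIGHTS = {"yes": 0.4, "no": 0.4, "dont_know": 0.2}

def random_walk(api, max_steps=50, seed=None):
    """Walk one dialog unattended, answering at random; return the transcript."""
    rng = random.Random(seed)  # seeding makes an interesting walk reproducible
    transcript = []
    question = api.start_chat()
    while question is not None and len(transcript) < max_steps:
        answer = rng.choices(list(ANSWER_WEIGHTS),
                             weights=list(ANSWER_WEIGHTS.values()))[0]
        transcript.append((question, answer))
        question = api.reply(answer)  # None signals the dialog reached an outcome
    return transcript

class FakeChatAPI:
    """Stand-in for the real chatbot API so the sketch runs on its own."""
    QUESTIONS = ["Do you have a fever?", "Any headache?", "Is it severe?"]

    def start_chat(self):
        self.step = 0
        return self.QUESTIONS[0]

    def reply(self, answer):
        self.step += 1
        if self.step < len(self.QUESTIONS):
            return self.QUESTIONS[self.step]
        return None  # dialog finished

for question, answer in random_walk(FakeChatAPI(), seed=1):
    print(question, "->", answer)
```

Running many seeded walks and reporting each transcript is what makes the surprises reviewable afterwards: when a walk hits something odd, the seed replays it exactly.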
There are terms in our domain, terms that are fundamental to our work, terms like quality, bug, and even testing itself, that many testers would struggle to define. I’d say it’s an open secret within testing, but would it surprise our colleagues?
From CEWT #7, https://cewtblog.blogspot.com/search/label/CEWT%237
My lightning talk at the Cambridge Agile Exchange on how traditional management books have lots to offer us, even in a modern, agile, working environment. In particular, management done well is all about people.
In this talk I'll assume that we know what good testing is (for our context, at this time) and wonder how we can judge, during recruitment, that a person being interviewed for a role at our company could do that good testing for us.
There is a perceived tension between theory and practice, and between theorists and practitioners. In this talk, we will propose and illustrate using a practical example that practice generates data and theory is the data which we care about. Rather than focusing on theory over practice or practice over theory, a choice of theory, practice, or both is driven by the data needed for a particular task and contextual factors.
I consider whether we as testers can be too closed-minded in our attitudes: whether there are schools of thought or approaches that, even if we care deeply about context, we are very unlikely even to consider, and whether we sometimes favour our reputation over giving ourselves the chance to do the best job that we can.
From CEWT#2, http://cewtblog.blogspot.co.uk/2016/02/cewt-2-abstracts.html
My presentation from EuroSTAR 2015:
Your testing is a joke. Or, rather, some parts of some of the testing done by some people reading this will be somewhat analogous to some subset of what some people would accept as jokes. Sometimes. Language can be tricky like that. And that’s one of the things that makes it such a productive tool for jokes, and such a flawed tool for specification.
Edward de Bono, in his Lateral Thinking books, makes a strong connection between humour and creativity. Creativity is a key to testing, but jokes? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection. Jokes can make you think and then laugh. But they don’t always work. Does that sound familiar?
At Linguamatics we have a weekly caption competition. I wondered what my process for creating entries was and, as I spent more time thinking about it, I started to notice parallels with the way that I think about how I test. For instance, I might take each of the key entities in the picture and “factor” them – generate a list of features, related concepts, synonyms and so on. In testing I might then look for overlapping factors for potentially interesting test ideas; in the quest for a caption I might try to use the same approach to find an ambiguity and hence a joke.
In this talk I’ll take a genuine joke-making process and de-construct it to make comparisons between aspects of joking and concepts from testing such as the difference between a fault and a failure, oracles, heuristics, factoring, modelling testing as the exploration of a space of possibilities, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another.
Yes, there will be some jokes in the session. And I’ll try to explain why the groans you’ll hear are a good sign too.
My talk delivered at the UK Test Management Forum on 2015-07-29. http://uktmf.com/?q=node/5283
As a test manager I don't test as much as I'd like to so I try to find ways to stay loose and ready for those occasions where I get the chance.
In this talk I'll describe one activity based on joking that I think can fit the bill. How? Well, the punchline for a joke could be a violation of some expectation, the exposure of some ambiguity, an observation that no one else has made, or just making a surprising connection. Jokes can make you think and then laugh. But they don't always work. Does that sound familiar?
It started with the weekly caption competition at Linguamatics where I noticed parallels in my approach to it and testing. For instance, I might take each of the key entities in the picture and "factor" them - generate a list of features, related concepts, synonyms and so on. In testing I might then look for overlapping factors for potentially interesting test ideas; in the quest for a caption I might try to use the same approach to find an ambiguity and hence a joke. Doing this, I've found analogies between joking and concepts from testing such as oracles, heuristics, factoring, stopping strategies, bug advocacy and the possibility that a bug, today, in this context might not be one tomorrow or in another.
I'm interested to find out from the audience what things they "just do" that they feel helps them.
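The "factoring" step described above can even be mimicked mechanically: list some factors for each key entity, then cross the lists and see which combinations prompt a question worth asking. A minimal sketch, with entities and factors invented purely for illustration:

```python
from itertools import product

# Hypothetical sketch of "factoring": list features, related concepts,
# and synonyms for each key entity, then cross the lists to surface
# overlaps that might suggest a test idea (or a caption).

factors = {
    "file upload": ["size limit", "file name", "encoding", "cancel"],
    "network":     ["timeout", "retry", "offline", "slow"],
}

def cross_factors(factors):
    """Yield every pairing of factors across the two entities as a candidate idea."""
    (e1, f1), (e2, f2) = factors.items()
    for a, b in product(f1, f2):
        yield f"{e1}/{a} x {e2}/{b}"

ideas = list(cross_factors(factors))
# e.g. "file upload/size limit x network/timeout" prompts the question:
# what happens when a maximum-size upload meets a network timeout?
```

Most of the generated pairings will be dull, just as most candidate captions aren't funny; the value is in cheaply surfacing the few overlaps you wouldn't have thought of unprompted.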
This presentation talks about the use of analogy as a device for generating ideas at multiple levels of testing including test activity, methodology and reporting. In it, I associate analogy with lateral thinking and give an example of a specific analogy that I'm interested in at the moment.
DEFINITIONS
Executing a program with the intent of finding errors – Myers (1976)
Finds information … that informs critical decisions – Kaner, Bach, Pettichord (2002)
Interact with SUT, observe behaviour, compare to expectations – Hendrickson (2013)
@qahiccupps
WHAT IS TESTING?
Testing is simple: you understand what is important and then you test it.
– Rikard Edgren, EuroSTAR 2015
Testing a sub-sub-feature
Testing a sub-feature
Testing a feature
Testing the model
Testing the PO’s view
Testing the links between features
Testing another feature
Testing the PO’s expression of their view
Testing the way I’m talking to the PO
Testing whether the PO is the best person to talk to
Testing the end user need
Testing the feature testing
Testing the reason for testing
WHAT IS TESTING TO ME?
Testing is the pursuit of relevant incongruity
SO WHAT?
It keeps me focussed on what matters
I can tell whether I’m testing or not
I can decide whether to test or not
WHAT IS TESTING?
Intentional, directed, responsive
About more than the software
Interpretation and choices made in context
WHAT IS TESTING? - TESTING TOOLS - STARTING TESTING
COMPARED TO WHAT?
Our competitor's software is fast. Fast ... compared to what?
We must export to a good range of image formats. Good ... compared to what?
The layout must be clean. Clean ... compared to what?
TESTING TOOLS
ORACLE
Something to compare to
Helps decide whether there’s an interesting difference
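A minimal sketch of what "something to compare to" can look like in code, applied to the earlier "fast ... compared to what?" question. The function name, the baseline, and the tolerance are all invented for illustration:

```python
# Hypothetical sketch: an oracle is just something to compare against,
# plus a judgement about which differences are interesting.

def speed_oracle(observed_ms, baseline_ms, tolerance=0.20):
    """Flag the observation if it is more than `tolerance` slower than
    the comparison point (a competitor, a previous build, a spec, ...).

    Both the baseline and the tolerance are choices someone made:
    this oracle is a heuristic, not an arbiter of truth.
    """
    return observed_ms > baseline_ms * (1 + tolerance)

# "Fast ... compared to what?" Here: compared to the last release, within 20%.
speed_oracle(observed_ms=130, baseline_ms=100)   # True: worth a closer look
speed_oracle(observed_ms=110, baseline_ms=100)   # False: within tolerance
```

Note that a True from this function is only a prompt for investigation; whether it is a bug depends on whether the baseline and tolerance were reasonable comparisons in the first place, which is exactly the critical-thinking step on the next slide.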
CRITICAL THINKING
I see X, the oracle says Y. Must be a bug!
I see X, the oracle says Y. Is this a reasonable oracle?
I see X, the oracle says Y. Did I compare correctly?
All models are wrong but some are useful.
– George Box
Models are oracles.
Oracles are heuristic.
A problem is a difference between things as desired and things as perceived.
– Donald C. Gause and Gerald M. Weinberg
For any abstract X, X is X to some person, at some time.
– Michael Bolton
WHAT AND WHEN IS REALLY THE PROBLEM HERE?
The thing
The perception of that thing
The desires for that thing
The person(s) desiring or perceiving
The context(s) in which the desiring or perceiving is taking place
If you can't think of at least three things that might be wrong with your understanding of the problem, you don't understand the problem.
– Donald C. Gause and Gerald M. Weinberg
Do what you think represents best¹ practice² at the time you must decide.
– Billy Vaughn Koen
1. best in context
2. practices are heuristic
STARTING CAN BE HARD
But starting testing is starting learning
STARTING TESTING
Begin where you are.
– Keri Smith
Testing is simple: you understand what is important and then you test it.
– Rikard Edgren, EuroSTAR 2015
FOR EXAMPLE…
Improve your technical understanding
Look for areas of highest risk (of what, to whom)
Build rapport with your team and stakeholders
EXPLORE X USING Y TO Z
Outcome focus
Open approach
Timebox
Explore the new component using the command line to look for incompatible options.
– Elisabeth Hendrickson
STARTING TESTING
Begin where you are
Know your context
Find your mission
Pairing README
• I will pair on any task on any subject
• I will try any format of pairing
• We’ll agree a mission and debrief afterwards
• If deep prep is needed, tell me …
• … if not, I'll just drop into it cold
SOMETHING TO TEST
HOW I TEST ANYTHING
My definition of testing
Some of my testing tools
How I choose where to start