A/B Testing and the Infinite Monkey Theory – UseItBetter
Surveys show that, on average, only 1 out of 7 A/B tests run by e-commerce companies ends up successful. Lukasz Twardowski, the CEO of UseItBetter, explains how some of the most successful online businesses master this process, turning it into an iterative, evidence-led, at-scale experimentation programme.
1. Practice and experimentation are important for building expertise, whether for athletes, musicians, pilots, doctors, or scientists. Pre- and post-analysis helps identify areas for improvement.
2. Simulations that mimic real-world stressors are critical for pilots, doctors, and firefighters to practice their skills and prevent mistakes. Checklists can help ensure important steps are not missed.
3. When experimenting scientifically, experimental design and accounting for biases are important. Replicates, controls, and representative samples help establish confidence in results.
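The experimental-design points above can be illustrated with a minimal significance check for an A/B test; the conversion counts below are hypothetical, and this is a sketch of a standard two-proportion z-test, not UseItBetter's actual methodology:

```python
from math import sqrt

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test; returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    return (p_b - p_a) / se

# Hypothetical data: control converts 200/10000, variant 260/10000.
z = ab_z_test(200, 10000, 260, 10000)
print(abs(z) > 1.96)  # True: significant at roughly the 5% level (two-sided)
```

A check like this is one reason only a minority of tests "win": without adequate sample sizes and controls, apparent uplifts often fail to clear the significance bar.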
Adaptive Change Cycle Applied to Agile Methods – Naresh Jain
Why do we see a new process/methodology/movement every 10 years or so? Here I explain the rationale behind this behavior using the concept of the Adaptive Change Cycle.
Test automation has some bitter truths that are often overlooked. While automation can help with confirmation testing of deterministic scenarios, many critical testing tasks like exploration and qualitative evaluation are not easily automated. Automation also does not necessarily decrease the costs of testing when development, maintenance, and debugging of automation code is considered. It is more accurate to consider test automation as programmatic testing rather than assuming full automation is possible. Both automated and manual testing are misleading terms, and the focus should be on using tools like automation to extend testing rather than replace human testing.
The document discusses "worst practices" of software testing according to The Testing Troll. It provides 5 "worst practices" and alternatives suggested by The Testing Troll. The first is to learn about real testing oracles rather than relying only on requirements. The second is to focus regression testing on tests that reveal new information rather than repetitive testing. The third is to use automation as a tool to extend abilities rather than replace manual testing. The fourth is to provide information about risks through risk-based testing rather than just assuring quality. And the fifth is to be always alert to your context rather than following best practices blindly. The Testing Troll advocates thinking critically and focusing on exploration and human aspects of testing over fixed processes.
Automation vs. intelligence - "follow me if you want to live" – Viktor Slavchev
Have you ever heard the story that your job is automatable, that all the human testers will be replaced by machines or automated tests and you will lose your job? Or even worse, that machines and artificial intelligence will take over our craft and our life and we will be totally useless. Do you buy these? Are you afraid?
“Come with me, if you want to live” – this was the famous line that many members of the Human resistance in the Terminator franchise used, when offering their help in the war against Skynet.
So, come with me (and John Connor), and join the testing resistance to fight on the side of intellect against the evil machine army. I am willing to challenge the "I" part of AI by focusing on a few key topics:
Can we translate testing into machine language? Polymorphic and mimeomorphic actions – what are these?
Do we really know what the benefits of human testing are? What makes human testers irreplaceable?
Do we really have empirical evidence that computers are capable of doing professional testing? Do we have evidence of “intelligence” at all?
Last year at RTC ‘17 I was asked – “Is AI the answer to all test automation problems?”. My answer is “No, it’s not!”. And this talk is my explanation why.
Athletes, firemen and doctors train every day to be the best at their chosen profession. As engineers, we spend much of our time getting stuff to production and making sure our infrastructure doesn't burn down outright. In this talk, we'll discuss the need for, and the options for, creating a game day culture, where we as engineers not only write, maintain and operate our software platforms but actively pursue ways to learn and predict their (non-functional) behavior. We'll look at tools like Toxiproxy and the Simian Army for ways to prepare teams to tweak their testing and monitoring setup and work instructions so they can quickly observe, react to and resolve problems.
Building a Testing Playbook by Andrew Richardson – Delphic Digital
A testing playbook combines the best practices of testing and optimization with communication strategies, education, and gaining buy-in from your client. Andrew Richardson, Senior Director of Analytics at Delphic Digital, provides a peek behind the curtain to reveal how Delphic prioritizes tests, recruits/trains/staffs-up for a testing practice, and moved from A/B to multivariate testing. Come with an open mind; walk away with a Testing Playbook Template you can put to use at once.
Means different things to different people.
Whole team approach
It doesn't just mean testing on agile projects.
It's not a synonym for working extra hours, creating chaos, or putting pressure.
Test each increment of coding as soon as it's finished.
Quick feedback system enabler.
Collaboration is mightier than anything else.
One size doesn’t fit all.
Context is King.
Break to build - the mindset of the modern day tester – Viktor Slavchev
I spent the last couple of years performing, talking, writing and listening about software testing.
But what is software testing? I am told my job is to “break software”. But why break it, it looks good?! I like the programmers, they are my friends. And, as Michael Bolton says, “We don’t break software, it was already broken when we got it”.
I sure don't break software for a living, but I do something way better and much more satisfying - I break clichés about software testing.
So, my job as your guide in your journey in testing will be to break some clichés from the past in order to build the mindset of the modern tester.
Worst practices in software testing by the Testing Troll – Viktor Slavchev
The document discusses best and worst practices in software testing according to a mythical testing creature called "The Testing Troll".
Some worst practices presented include relying solely on documentation, focusing only on repetitive regression testing of old tests, viewing automation as a replacement for human testers, and strictly following best practices without consideration of context.
The best practices emphasized thinking beyond requirements and oracles, prioritizing regression tests that reveal new information, using automation as a tool to enhance testing abilities rather than replace testers, providing information about potential risks, and being aware of testing context in different situations. The conclusion is that there are no absolute best practices and testers must be skeptical professionals who consider context.
The Snail Entrepreneur: The 7-year-old kid every startup should learn from – Claudio Perrone
Matteo faced a seemingly impossible problem but didn't give up. He used daddy's #PopcornFlow and pivoted. 17 options and 5 experiments later, he converged on success.
PopcornFlow is impacting businesses (large and small) but also families and kids.
If you like this story, please contribute to Matteo's cause.
This document provides lessons learned about automating API testing. It discusses what an API is and reasons for automating API tests, including getting clear responses close to the client experience. The document outlines different types of tests to automate, including status code checks, structure checks, and scenario checks. It also discusses cognitive barriers to testing, test setup and environment, using an automation framework, and splitting test logic from application logic. The goal is not to provide a how-to guide but rather discuss lessons learned about effective API testing.
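The three kinds of checks listed above can be sketched as small assertions over a response. The response shapes and field names here are hypothetical, and real calls are replaced with stubbed dictionaries so the test logic stays separate from any HTTP client, echoing the talk's point about splitting test logic from application logic:

```python
def check_status(response, expected=200):
    """Status-code check: did the call succeed at the HTTP level?"""
    return response["status"] == expected

def check_structure(body, required_fields):
    """Structure check: does the payload have the fields clients rely on?"""
    return all(field in body for field in required_fields)

def check_scenario(create_resp, fetch_resp):
    """Scenario check: a created resource can be read back unchanged."""
    return create_resp["body"]["id"] == fetch_resp["body"]["id"]

# Stubbed responses standing in for real HTTP calls.
created = {"status": 201, "body": {"id": 7, "name": "widget"}}
fetched = {"status": 200, "body": {"id": 7, "name": "widget"}}

print(check_status(created, 201))                        # True
print(check_structure(created["body"], ["id", "name"]))  # True
print(check_scenario(created, fetched))                  # True
```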
The 5-Whys technique is used to determine the root cause of a problem by repeatedly asking "Why?" It originated in Japan in the 1920s at Toyota. One starts with a problem statement, asks "Why did this happen?", and then asks "Why?" four more times to get to the underlying root cause. Examples show tracing a server being down to the root cause being that the CEO was on holiday. The technique involves bringing people together, writing down results, and considering multiple possibilities if there is no agreement on the root cause. Addressing the root cause provides more benefit than just treating symptoms.
Elisabeth Hendrickson’s book, Explore It!, contains this definition: “Tested = Checked + Explored”. When I read it, I was fascinated. “What does that mean?”, I asked myself, “what does it /really/ mean?”
This talk described the journey I undertook to understand it, and other definitions of testing that I found along the way, and then to come up with a new definition that filled the gaps I saw in the others, without losing the aspects of them that I felt were valid and useful.
Essentially, I formalised what testing is for me. And, now that I have my definition, I can ask myself in any given situation whether my actions are consistent with the way I believe I want to behave.
In this webinar, Kevin looks at 10 simple improvements in the way we work which mean that we’re now either catching bugs before they get to the test environment, or, even better, preventing them from happening in the first place.
View webinar recording - https://testhuddle.com/resource/striving-zero-bugs-test-environment/
This document discusses various cognitive biases that can affect testing, including anchoring bias, relativity bias, decoy effect, endowment effect, IKEA effect, confirmation bias, negativity bias, sunk costs fallacy, paradox of choice, and procrastination. It provides examples of how each bias could influence test planning, design, advocacy, estimation, negotiation, and other aspects of the testing process. The document concludes by acknowledging references used to research cognitive biases.
This is a summary of the blogs by Eric Ries on the Five Whys at http://startuplessonslearned.blogspot.com/2008/11/five-whys.html. It was used for an internal presentation at Cogent Consulting. If Eric or anyone else thinks this should not be public I will take it down, but I hope I'll drive (a little) more traffic to his blog :-)
The document introduces Ady Stokes, who has nearly 2 decades of experience in software testing, quality assurance, and risk management. It provides details about Ady's current role managing QA and risk at HML, a highly regulated financial organization with over 42 billion in assets under management. The document then discusses various aspects of software testing such as the importance of having a testing mindset, different types of tests, and qualities of good requirements.
Things Could Get Worse: Ideas About Regression Testing – TechWell
Michael Bolton, DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.
The document provides an overview of software testing fundamentals including:
1. It discusses key testing concepts like error, fault, failure and how testing helps build confidence and reduce costs. Testing aims to find faults and prove software meets requirements.
2. Testing challenges are discussed, like the impossibility of exhaustive testing due to the huge number of input combinations. Prioritization is important given limited time.
3. Principles of testing are covered such as defects clustering, absence of errors fallacy, and how early testing avoids fault multiplication. Testing must be context dependent.
Fantastic Tests - The Crimes of Bad Test Design – Winston Laoh
The document discusses why testing is important and what causes bad tests. Testing provides comfort and confidence by knowing if new code breaks existing functionality. Bad tests have a wide scope so it's unclear what they test, depend on specific orders or system states, are too similar to the product without the same fragility, or are not similar enough to the user experience. The key is for tests to be independent of order or system state and balance similarity to the product with enough differences to reliably fail when bugs are present.
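One of the "crimes" above, dependence on run order or system state, is typically avoided by having each test build its own state. A minimal sketch, with a hypothetical `Cart` class standing in for the system under test:

```python
class Cart:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def total(self):
        return len(self.items)

def fresh_cart_with(*items):
    """Fixture: every test constructs its own cart instead of sharing one."""
    cart = Cart()
    for item in items:
        cart.add(item)
    return cart

def test_total_counts_items():
    cart = fresh_cart_with("apple", "pear")   # no reliance on a prior test
    assert cart.total() == 2

def test_empty_cart_total_is_zero():
    cart = fresh_cart_with()                  # passes in any run order
    assert cart.total() == 0

test_total_counts_items()
test_empty_cart_total_is_zero()
```

Because neither test reads state left behind by the other, they fail only when the product is actually broken, which is exactly the reliability the talk asks for.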
[CXL Live 16] The Grand Unified Theory of Conversion Optimization by John Ekman – CXL
We have come to earth to save humans from bad conversion rates and websites that don't convert visitors. The document discusses the grand unified theory of conversion rate optimization and introduces concepts like the optimization wheel and testing frameworks. It emphasizes testing hypotheses with real data and running many small tests per year to continuously improve conversions through experimentation.
The document summarizes a presentation on myths and dogmas related to usability testing. It discusses several commonly held beliefs and presents data from previous studies to evaluate whether they are truths or myths. Some of the myths addressed include the idea that five users are enough to find most usability problems, that expert reviews are as reliable as usability tests, and that usability testing can be conducted by anyone. The presentation aims to dispel unsupported notions and emphasize the importance of using empirical data to inform best practices in usability work.
Beyond Agile Testing to Lean Development — Rakuten Technology Conference – James Coplien
The document discusses moving beyond traditional agile testing approaches to lean testing. It argues that most unit tests are unnecessary and test scenarios that will never occur. It promotes exploratory and experience-based testing over unit testing alone. The document also advocates for shipping tests with code to catch bugs in the field, using assertions to make code more readable, and taking a lean approach where fixing testing processes is prioritized over fixing individual bugs.
Testing is necessary to identify correctness, completeness, and quality of software. There are various techniques for testing software such as white box testing which tests internal logic and structure, and black box testing which tests without knowledge of internal design. Equivalence partitioning divides inputs into classes to minimize the number of test cases needed. Exhaustive testing all possible combinations is not feasible due to the large number of tests required. Risk-based testing prioritizes testing based on risk to the system to determine what to test first and most thoroughly.
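Equivalence partitioning, mentioned above, amounts to picking one representative input per class instead of testing every value. The age-validation rule below is invented purely for illustration:

```python
def is_valid_age(age):
    """Hypothetical rule under test: ages 18-65 inclusive are accepted."""
    return 18 <= age <= 65

# Three equivalence classes: below range, in range, above range.
# One representative per class (plus the boundary values) covers the
# behaviour without testing every integer.
representatives = {
    "below": 10,    # expect rejected
    "inside": 40,   # expect accepted
    "above": 80,    # expect rejected
}
boundaries = [17, 18, 65, 66]

print([is_valid_age(v) for v in representatives.values()])  # [False, True, False]
print([is_valid_age(v) for v in boundaries])                # [False, True, True, False]
```

Seven test values replace the effectively unbounded input space, which is the point: partitioning makes the "exhaustive testing is infeasible" problem tractable.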
A refactoring of my earlier presentation targeted more towards Solr, and less on testing. I'm basing my presentation for Lucene EuroCon on this. Would love some more feedback!
The Optimisation Grand Unified Theory @ ConversionXL Live – Conversionista
This document provides an overview of conversion optimization techniques and best practices. It discusses the importance of running targeted experiments to test hypotheses and get the right data needed. Specific techniques are presented for increasing relevance, trust, value, ease of use, and assurance at different stages of a visitor's journey. Testing velocity and strength of hypotheses impact compound uplift potential. Regular, well-designed testing allows optimizing multiple small improvements each year for significant gains over time.
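The compounding effect of testing velocity mentioned above is easy to quantify: small multiplicative wins stack up to more than their sum. The uplift figures here are illustrative, not from the talk:

```python
def compound_uplift(wins_per_year, avg_uplift):
    """Total yearly gain from several small multiplicative improvements."""
    return (1 + avg_uplift) ** wins_per_year - 1

# Illustrative: twelve winning tests a year, each worth a 2% lift,
# compound to more than the naive 12 * 2% = 24%.
total = compound_uplift(12, 0.02)
print(round(total * 100, 1))  # 26.8 (percent)
```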
Projects fail because they don’t test. Some fail because they test the wrong things. Others fail because they test too much. In this session, an enterprise consultant turned startup entrepreneur will share project case studies in testing atrocities and what can be learned from them. You’ll come away questioning your own testing. Check your dogma and let’s build better software.
Pragmatic Not Dogmatic TDD Agile2012 by Joseph Yoder and Rebecca Wirfs-BrockJoseph Yoder
This presentation challenges the "norm" for TDD. Testing should be an integral part of your daily programming practice. But you don’t always need to derive your code via many test-code-revise-retest cycles to be test-driven. Some find it more natural to outline a related set of tests first, and use those test scenarios to guide them as they write code. Once they’ve completed a “good enough” implementation that supports the test scenarios, they then write those tests and incrementally fix any bugs as they go. As long as you don’t write hundreds of lines of code without any testing, there isn’t a single best way to be Test Driven. There’s a lot to becoming proficient at TDD. Developing automated test suites, refactoring and reworking tests to eliminate duplication, and testing for exceptional conditions, are just a few. Additionally, acceptance tests, smoke tests, integration, performance and load tests support incremental development as well. If all this testing sounds like too much work, well…let’s be practical. Testing shouldn’t be done just for testing’s sake. Instead, the tests you write should give you leverage to confidently change and evolve your code base and validate the requirements of the system. That’s why it is important to know what to test, what not to test, and when to stop testing.
This document discusses rethinking test-driven development (TDD) in a pragmatic way. It questions some common beliefs around TDD, such as whether tests always need to be written first or if they guarantee good design. While tests can help focus development and allow safer changes, they may also constrain refactoring and evolution. A pragmatic approach to TDD uses tests to enhance practices but considers more techniques than just unit tests to validate requirements are met. It emphasizes combining techniques and avoiding dogmatic views of TDD.
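The test-scenario-first style described above, where scenarios are outlined up front and filled in once a "good enough" implementation exists, can be sketched like this; the cart example, names, and skipped scenario are hypothetical illustrations, not from the talk:

```python
import unittest


class ShoppingCartTotal(unittest.TestCase):
    """Scenarios outlined up front to guide the implementation;
    each is filled in once a 'good enough' implementation exists."""

    def test_empty_cart_totals_zero(self):
        self.assertEqual(cart_total([]), 0)

    def test_sums_item_prices(self):
        self.assertEqual(cart_total([3, 7]), 10)

    @unittest.skip("scenario outlined, not yet implemented")
    def test_applies_bulk_discount(self):
        ...


def cart_total(prices):
    """Minimal implementation driven by the scenarios above."""
    return sum(prices)
```

The skipped test acts as a to-do item: the scenario guided the design even though its assertion comes later.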
Testing is necessary to identify the correctness, completeness, and quality of software. There are various testing techniques, including white box testing, which exercises internal logic and structure based on code coverage, and black box testing, which tests functionality against requirements without knowledge of the internal design. Equivalence partitioning is a black box technique that divides inputs into equivalence classes of data expected to produce the same outputs, helping minimize the number of test cases needed. How much testing is enough depends on risk factors such as the potential cost of failures. Exhaustively testing all combinations is not feasible due to the vast number of possibilities.
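As a sketch of equivalence partitioning, consider a hypothetical function that classifies ages; the partitions and representative values below are illustrative assumptions:

```python
def classify_age(age):
    """Example system under test: partitions ages into classes."""
    if age < 0:
        raise ValueError("invalid age")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"


# Equivalence partitioning: one representative value per class is enough,
# because every value in a class is expected to behave the same way.
assert classify_age(10) == "minor"    # covers 0..17
assert classify_age(30) == "adult"    # covers 18..64
assert classify_age(70) == "senior"   # covers 65+
# Negative ages form a fourth (invalid) partition that raises ValueError.
```

Three valid-input tests plus one invalid-input test cover the whole input space, instead of one test per possible age.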
Webinar: Experimentation & Product Management by Indeed Product LeadProduct School
Main Takeaways:
- Why should I run experiments as a Product Manager?
- How long should I run experiments?
- How do I interpret Experiment results and take low-risk decisions?
Fact or Fiction? What Software Analytics Can Do For UsAndy Zaidman
This document summarizes findings from software analytics research on developer testing practices. It finds that developers overestimate how much time they spend on testing: estimates hover around 50%, while the measured share of time spent on test code ranges from 25% to 75%. Tests are executed in the IDE far less often than developers believe, and a substantial share of those executions fail. Most projects contain test code, but over half of the developers studied did not interact with tests at all. Testing is crucial for continuous integration, with 98% of projects failing their builds when tests fail. The research helps developers understand their own behavior and identifies challenges for improving tools and education.
Introduction to Usability Testing: The DIY Approach - GA, London January 13th...Evgenia (Jenny) Grinblo
The slides from my General Assembly workshop on January 13th, 2013 (https://generalassemb.ly/education/introduction-to-usability-testing-the-diy-approach)
ABOUT THIS WORKSHOP
Usability testing can quickly uncover areas of an interface that frustrate users and hurt business goals, but many teams put it off due to budget, time, or training concerns.
This workshop will take you through a do-it-yourself approach to usability testing. We'll cover the basics (benefits, recruiting, and how to plan a test), learn how to facilitate a test to get reliable results, and see how to use the results to move usability improvements forward. You'll walk away with the tools to run a complete usability test right away.
TAKEAWAYS
Learn why and when to hold usability testing
Learn practical tools and methods to overcome time, budget or training concerns that block user testing from happening
Shift the conversation from opinions and hunches to proven usability problems that your team can solve together
The document describes the author's journey as a software developer and how they learned to release high quality software frequently through adopting test-driven development and other practices. It starts with the author developing software without tests, which led to bugs and maintenance issues. They then learned about unit testing and test-first development, which improved code quality and reduced bugs. Later, they added integration, UI, and behavior-driven tests. Adopting continuous integration and continuous delivery allowed for automated testing and frequent releases. This approach helped catch bugs, improve communication, and deliver working software more efficiently.
[QE 2018] Paul Gerrard – Automating Assurance: Tools, Collaboration and DevOpsFuture Processing
Paul Gerrard discusses the future of testing and automation in an environment focused on digital transformation and continuous delivery. He argues that traditional testing models are no longer relevant and proposes a new model focused on exploration, judgment, and building test models from various sources of knowledge. Under this new model, all testing is seen as exploratory in nature. Gerrard also emphasizes the importance of shifting testing activities left in the development process through early collaboration, to help address issues with requirements. Automation is framed as only one part of the overall testing process, and trust in automation must be earned by proactively addressing the sources of doubt identified earlier in development.
Automating The New York Times Crossword by Phil WellsSauce Labs
The New York Times crossword grid is made up of hundreds of individual web elements. Automating game logic via the puzzle interface is a daunting technical (and logical) task. Find out how the New York Times Games team uses Webdriver.io, cheerio.js, event listeners, and Sauce Labs to deliver quality crosswords while continuously improving.
Similar to Testing in a Continuous Delivery World - LondonCD Meetup - May 2014
12. What were we doing?
- Checking the result
- Testing the result
13. Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, by Elisabeth Hendrickson
14. Some Myths about Automation
- You can automate 100% of your system testing
- Automation will guarantee your system keeps working; after all, you are testing literally everything
15. Does it work?
- You can never test everything to prove it is working
- It takes just one test to prove that a system is broken
16. What is a bug?
- Not working according to requirements
- Doesn't meet the user's expectations
- Difficult to use
- Inconsistent
- Doesn't scale
- Can't be maintained
- Difficult to test
- …
17. The (old) Songkick Way (pipeline diagram; stage labels: Commit, Automated Test and build, Manual Test, Feature & Release, Queued for commit, Deploy)
28. Shared ownership of Automated tests
- Test at the right level
- Everyone understands what is being tested and why
- Everyone has a chance to influence what should be tested
- Fast feedback
29. Risk Assess Everything
- What are we hoping to achieve with this change?
- Identify risks
- Agree on how to mitigate risks
30. The Songkick Way, low risk (pipeline diagram; stages: Commit, Automated Test and build, Automated Regression, Deploy)
31. The Songkick Way, medium risk (pipeline diagram; stages: Commit, Automated Test and build, Manual Test on Dev env, Automated Regression, Deploy)
32. The Songkick Way, high risk (pipeline diagram; stages: Commit, Automated Test and build, Manual Test on Dev env, Automated Regression, Deploy, Manual Test on Prod env)
41. But this ended up being about more than just releases…
- Everyone cares about testing
- Fast and maintained automated tests
- Bug fixes often take just minutes
42. In Summary
- You must automate
- But don't try to automate everything
- The automation you have doesn't necessarily make you safe
- Use problems to drive positive change
- Don't neglect the human aspect