Workshop delivered at Test Academy Barcelona on 30th January 2020. Including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
TDD vs. ATDD - What, Why, Which, When & Where (Daniel Davis)
A slide deck for a discussion about Test Driven Development (TDD) and Acceptance Test Driven Development (ATDD), exploring the differences between them. Get some insight into why we use them, the advantages and disadvantages of each, and a better understanding of which should be used where and when. By the end of the session you should be well along the path to TDD vs. ATDD enlightenment.
Jarian van de Laar - Test Policy - Test Strategy (TEST Huddle)
EuroSTAR Software Testing Conference 2009 presentation on Test Policy - Test Strategy by Jarian van de Laar. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
End-to-End Test Automation for Both Horizontal and Vertical Scale (Erdem YILDIRIM)
Slides from my talk at Selenium Camp Test Automation Conference - 2017
https://seleniumcamp.com/talk/end-to-end-test-automation-for-both-horizontal-and-vertical-scale/
Test automation (TA) has become critical work for guaranteeing the quality of a system under test (SUT) by driving test and development effort effectively. To bring this efficiency to projects, companies are investing in TA with growing motivation. The question is how to design an automation strategy that handles complex TA projects effectively. One answer is automating test scenarios end to end (E2E). Vertical E2E TA consists of automating the test data preparation phase and the unit, integration and UI tests. In horizontal E2E TA, the automated UI and integration test cases are designed as integrated, real user scenarios. I will talk about the prerequisites, principles and key factors for E2E automated tests, and share hands-on experience from E2E test automation projects in which Selenium was the key tool.
REST Assured is a Java library that provides a domain-specific language (DSL) for writing powerful, maintainable tests for RESTful APIs.
The library behaves like a headless client for accessing REST web services.
REST Assured is a 100% Java-based, BDD-style test library that you can use for testing REST APIs in Java projects. These are the slides from the presentation and demo I gave at the 2017 #JBCNConf Java conference in Barcelona.
Created for internal company training (as an introduction), this material summarizes the process the team currently works with - a medical software product.
- Dev & Test Process(V-Model)
1) Review / Planning
2) Test Execution
3) Other types of Testing
4) Release / Operation
Testability is Everyone's Responsibility (Ash Winter)
Testability is a first class concern for all disciplines within software development. There, I said it. No hedging, no nebulous phrasing, maybes or it depends.
Too often we labour under systems that are hard to test, which manifests itself in frantic searches for more testers, lengthy acceptance test runs, and fearful regression testing with a hopeful release at the end. Worst of all, it usually ends up with a project manager sat on the tester's desk asking 'when will testing be done?' It's never done, it can only stop, just so you know.
Throughout my career, the testability of a system has often been deemed to be the tester's concern. If something was hard to test, then it was the tester's problem. However, the causes of low testability affect the activities of all disciplines, whether it be speed of feedback to developers or flow of value-generating features for product managers.
During the talk, we will cover:
* How testability is a key advantage in building systems of ever increasing complexity.
* Why it's important for developers and operational stakeholders to build inherently testable systems.
* What testers can do to be catalysts for testability improvements.
The activity of testing is rarely the bottleneck; how testable your system is, is your problem. Poor testability cannot be remedied by one discipline alone. It's for all of us to care about.
Alexander Podelko - Context-Driven Performance Testing (Neotys)
Since its beginning, the Performance Advisory Council has aimed to promote engagement between various experts from around the world, to create relevant, value-added content sharing between members, and to strengthen Neotys' position as a thought leader in load and performance testing. During this event, 12 participants convened in Chamonix (France), exploring several topics on the minds of today's performance tester, such as DevOps, Shift Left/Right, Test Automation, Blockchain and Artificial Intelligence.
Learn about the benefits of writing unit tests. You will spend less time fixing bugs and you will get a better design for your software. Some of the questions answered are:
Why should I, as a developer, write tests?
How can I improve the software design by writing tests?
How can I save time, by spending time writing tests?
When should I write unit tests and when should I write system tests?
Talk at the Carmudi GmbH office on unit testing basics and advanced concepts, like the Arrange-Act-Assert rule, unit test anatomy, etc. It ends with a small overview of Test Driven Development.
Why Automated Testing Matters To DevOps (Paul Merrill)
“Automated testing is a pain in my ear! Why can’t QA get it right? Why do the tests keep breaking? And for Pete’s sake, stop blaming the infrastructure!”
…Ok, maybe you chose a different word than “ear”.
How often do you have thoughts like this? Daily?
Let’s talk about these frustrations, why they exist and how we can use them to improve our systems!
In this talk, Paul Merrill, founder and Principal Automation Engineer at Beaufort Fairmont explores why automated testing matters to DevOps. Join us to learn how automated testing can be a useful tool in the creation and release of your systems!
New Model Testing: A New Test Process and Tool (TEST Huddle)
In this webinar, Paul described his experiences of building and using a bot for paired testing, and proposed a new test process suitable for both high-integrity and agile environments. His bot, codenamed System Surveyor, builds a model of the system as you explore, captures test ideas, risks and questions, and generates structured test documentation as a by-product.
A coaching aid for those who want to help others achieve greater testability within their development team and wider organisation. It can additionally be used to track your own journey.
Testability can make our testing lives so much better. But we need to sell it to those who can pay for the changes needed. Find out what they need (delivery, flow, stability, resilience) and how it can be measured, then use the handy examples below!
A Tester's Guide to the Illusions of Unit Testing (Ash Winter)
One area that testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. Being able to influence this type of testing in a positive manner is a skill that testers will need to get to grips with, as more companies start to embrace a model of lone testers in cross functional teams. The shift of focus from primarily the testing that testers do, to the testing that the team does, is a key shift in thinking and behaviour.
To facilitate this shift, I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers and help us realise what other layers of testing can cover most effectively. The last point is pertinent here, as knowing and guiding unit testing brings the role of integration, acceptance and exploratory testing into sharp focus.
This is a topic that has always intrigued me, having predominantly worked as a single tester on a team for the last five or so years. I reached out to the community with the question "What do testers believe about unit testing?" and received a lot of engagement. The good users of Twitter added another 50 or so illusions that testers might have about this layer of testing. I figured that, based on that level of engagement, maybe this would make an interesting talk! It wasn't only testers who responded, either, suggesting that there might be some shared illusions about unit testing that are cross-disciplinary.
The list alone is interesting but now I would like to share my analysis of it with you, focusing on:
* Recurring themes within the list and how to address them as a tester or developer.
* Particular illusions to look out for with examples from my recent past.
* A guide for developers to engage with testers on unit testing, and testers with developers.
Lightning talk based on the 10 P's of Testability by Robert Meaney, talk designed by Ash Winter. Make your testing life better by embracing testability as a team.
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but no answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and entered battle with wider organisations' career models.
This is a call to action. On our cross-functional teams and during our DevOps transformations we talk about how testing is for the whole team. Quality is everyone's responsibility. How much are we really doing to make this happen? Often we are working on systems that are hard to test for many reasons, but if we simply do more testing and write more automation, we are neglecting what should be our main mission: advocating for increasing levels of testability, to truly get everyone involved in testing. We all have stories about how something is difficult to test, often never being tested, or certainly left with the tester to figure it out. It doesn't have to be this way.
During my talk, I want to introduce a set of principles for testability engineering, a new way to approach our work as testers. These principles will tackle how we make our systems more observable and controllable, how we share knowledge across teams, and how we improve the testability of our dependencies. I believe it is time to create a new focus on testability, as it affects everything we do, what our teams do, and beyond into how value is delivered for customers.
I want you to take away from the talk:
* Why a focus on testability can multiply your effectiveness as a tester
* What the principles of testability engineering are and how to advocate for them
* How you can make iterative changes to what you do in order to embrace testability
New technology and complexity are rendering many software development techniques and paradigms obsolete at an increasing rate. We already exist in a space where an infinite number of tests of an array of different types can be performed. A new mission is needed, one that leverages the varied talents of all kinds of testers and culminates in a new focus on the exponential benefits that testability brings.
A Tester's Guide to the Myths, Legends and Tales of Unit Testing (Ash Winter)
How often have you found a problem with your application which is directly related to the infrastructure it is deployed upon? How often have you found such a problem and not known? Testing is beginning to reach new depths. In my experience, saying that infrastructure needed to be tested used to trigger disbelieving looks on Ops teams' faces, but less so now. The lines between infrastructure and code are blurring, so let's update our skills and outlook accordingly.
* What I mean by infrastructure and why it is important to apply a testing mindset to this area
* Questions to determine what to test, how to organise those thoughts and techniques that might be applied
* A selection of tools to both explore and programmatically check your infrastructure pre-deployment
Delivered at NWEWT 3 in Liverpool, using a testability focus to solve testing related problems at their root rather than their symptoms. Focusing on two metrics, "time to start testing" and "unplanned downtime."
Nobody /really/ likes change, it's human nature. Testers have a special relationship with changing tools and techniques: they change, and we tend to flounder a little and end up very nervous about our place in the new world. Continuous delivery is one such circumstance; I see and speak to many testers really struggling. However, with a significant shift in outlook and a chunk of personal development, testers can excel in environments such as these. It's time to start to get out in front of a changing world, rather than always battling to catch up.
I want to share my experience of adding value as a tester in a continuous delivery environment: what new technologies and techniques I've learned, using your production environment as an oracle, advocating testability and, most crucially, not overestimating what our testing can achieve. Testing is not the only form of feedback; it's time to let go of some of the aspects of testing we cling to.
Continuous delivery adds richness and variety to our role as testers. To me, it is a facilitator for the autonomy and respect that testers have craved for a long time, so let’s get involved...
No one paying attention to your test strategy? It's too long. More crucially, it has no scrolls. Here's a template for the agile testing quadrants, but with scrolls for extra unforgettability.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
2. • Distracted by local optima…
• Where is the biggest gain to be made in the system?
• Your architecture, silly!
Testers. Always chasing the feature squirrel…
@northern_tester
3. @northern_tester
If the architectural testability of your system is poor, it means…
…it won't get tested
…the tester will test it
…it won't work
…it will be hard to operate
…it will be frustrating to use
…it will make your team sad
…it will create bottlenecks
…it will make less money
4. What we will cover…
@northern_tester
• Team Testability Test
• Your testing smells
• CODS model
• Improving your world
5. Testability is just a vague concept
@northern_tester
Testability is only a system property
10. Software testability is the degree to which a software artefact supports testing in a given test context. If the testability of the software artefact is high, then finding faults in the system by means of testing is easier.
@northern_tester
18. Testing your team for testability…
@northern_tester
Without them suspecting a thing…
19. @northern_tester
Who is this…
• It's Joel Spolsky
• Writer of blogs at joelonsoftware.com
• FogBugz
• Trello
• Co-founder of Stack Overflow
• Most importantly though, The Joel Test!
20. @northern_tester
WTF…
• The Joel Test
• 12 Steps to Better Code
• Not all patterns but teams, working conditions etc.
• Insight without the maturity model…
21. @northern_tester
Test and Testability
• Your testing tells you about your testability
• You can cover a lot of ground
• While not really talking about testability
• Just talking about the testing you do already
23. @northern_tester
The Team Test
• Yes or No
• Fast
• Social aspects
• Technical aspects
• Retrospective
• Team similarities & differences
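The fast, yes-or-no format described above can be sketched as a tiny scoring helper. This is a minimal illustration only: the question wording and the social/technical split below are placeholders, not the actual Team Test questions.

```python
# Minimal sketch of a yes/no team testability check.
# The questions are illustrative placeholders, not the real Team Test.
QUESTIONS = [
    ("social", "Does the team discuss testability when planning work?"),
    ("social", "Can anyone on the team run the tests?"),
    ("technical", "Can you deploy a test environment on demand?"),
    ("technical", "Can you set the system into its important states?"),
]

def score(answers):
    """answers: list of booleans, one per question, in order."""
    total = sum(answers)
    by_aspect = {}
    for (aspect, _), yes in zip(QUESTIONS, answers):
        by_aspect[aspect] = by_aspect.get(aspect, 0) + int(yes)
    return total, by_aspect

total, by_aspect = score([True, False, True, True])
print(total)      # 3
print(by_aspect)  # {'social': 1, 'technical': 2}
```

Keeping the answers to a plain yes or no is what makes the exercise fast enough to run inside an existing retrospective.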
24. Learning
@northern_tester
• Testability challenges manifest in our testing but testing may not be the root
• Talking about testability too early can cause engagement problems
• Use existing ceremonies to gather testability information
• Might expose 'easy wins' such as access problems
26. Architectural testability allows your team to create tests that provide valuable feedback within minutes, if not seconds. Each team member can run a set of tests every time they make a change. This feedback provides a safety net that encourages your team members to make changes frequently without fear. Confidence in testing encourages refactoring and continuous improvement of the code.
@northern_tester
28. What does good look like?
@northern_tester
• Embraces change
• Decoupled but cohesive
• Minimizes waste
• Sustainable
• Relationships
29. Good memories…
@northern_tester
• Right-size microservices, responsible for just enough
• Each service has a mock for early testing by consumers
• Code instrumented with structured, contextual events
• Consistent API docs expressed in Swagger
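The "a mock per service" idea can be sketched with nothing but the standard library: a stub HTTP server that returns canned responses, so consumer teams can test against the contract before the real service exists. The `/orders/42` endpoint and its payload here are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses consumers can test against early; the endpoint
# and payload are illustrative, not from a real service.
CANNED = {"/orders/42": {"id": 42, "status": "DISPATCHED"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "not stubbed"})).encode()
        self.send_response(200 if self.path in CANNED else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/orders/42"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
print(payload)  # {'id': 42, 'status': 'DISPATCHED'}
server.shutdown()
```

A stub like this is deliberately dumb; its value is that consumers get feedback in seconds rather than waiting on a shared integrated environment.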
30. What does bad look like?
@northern_tester
• Slows as complexity grows
• Technical debt pooling
• Untested areas
• Side effects
• Persistent team siloing
31. Painful memories…
@northern_tester
• Three types of consumer, SOAP, batch, web
• Mixed technologies - LAMP but with Microsoft SQL Server
• Two minutes for new application test environment, two
weeks for the database
• Different logging and monitoring tooling per
layer, administered by different teams
32.
Retrofitting
• Is really, really hard
• Architectures act as
anchors
• Have you ever tried to
change an architecture
for testing?
• Tiny changes to move
the needle
34. Controllability…
• You can set the system
into its important states
• Identify and control
important variables, e.g. configuration
• Control the environment
the system resides in, e.g. test data
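The bullets above can be made concrete with a small sketch. This is a hypothetical example, not from the talk: the discount function takes its "important variables" (the clock and a feature flag) as injected parameters, so a test can set the system into any state directly.

```python
# Hypothetical controllability sketch: apply_discount and the
# "weekend_discount" flag are invented names for illustration.

from datetime import datetime

def apply_discount(price: float, now: datetime, flags: dict) -> float:
    """Weekend discount, guarded by a feature flag."""
    if flags.get("weekend_discount") and now.weekday() >= 5:
        return round(price * 0.9, 2)
    return price

# Because the date and flag are controllable inputs, a test can
# exercise both paths without waiting for a real weekend.
saturday = datetime(2020, 1, 25)   # a Saturday
monday = datetime(2020, 1, 27)     # a Monday
print(apply_discount(100.0, saturday, {"weekend_discount": True}))  # 90.0
print(apply_discount(100.0, monday, {"weekend_discount": True}))   # 100.0
```

The same idea scales up: had the function read the system clock or a global config internally, putting it into its "important states" would require manipulating the whole environment instead of two arguments.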
35. Observability…
• Infer internal state from
external outputs
• Logs – a message
• Metrics – a measure
• Events – a collection for
a unit of work
• Instrument for the
information we want
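The "events – a collection for a unit of work" idea can be sketched as follows. This is an illustrative example, not from the slides: one structured event accumulates everything about a request and is emitted once, instead of scattering loose log lines. The field names (`unit_of_work`, `order_id`, etc.) are invented.

```python
# Hypothetical sketch of structured, contextual events: gather context
# as the unit of work proceeds, emit a single event at the end.

import json
import time

def handle_request(order_id: str) -> str:
    event = {"unit_of_work": "place_order", "order_id": order_id}
    start = time.monotonic()
    try:
        event["items"] = 3              # context gathered as we go
        event["outcome"] = "success"
        return "ok"
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 1)
        print(json.dumps(event))        # one queryable event per request

handle_request("ord-42")
```

A single wide event like this lets you infer internal state (what happened, to which order, how long it took) from external output alone, which is the definition of observability given above.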
37. Simplicity…
• How easy the system is to
understand
• Consumers, inputs,
outputs, interactions,
technologies, protocols,
configurations
• Transparency
• Cognitive load again
43.
My mobile testing story…
Pull a feature branch, build a development environment, configure
relevant external feeds, find a device, join a specific Wi-Fi network,
change DNS, and configure a local proxy to intercept and change certain
headers and URIs.
45. What smells do you recognize?
• Release Management Theatre
• Mono strategies
• Fear of change
• Teams looking for more testers
• Too many user interface tests
• Valuable scenarios not tested
• Lack of resilience testing
• “Sunny” days only
• Cluttered logging with no insights
• Excessive test repetition
• Issues hard to isolate and debug
• Tests that don’t die
• Lengthy build times
• Too many persistent environments
• Environments no one cares about
• Inanimate documentation
• Customer hand holding
• Poor relations with Ops
• Long lists of deferred bugs
• Hard-to-test dependencies
• Wrangling over scale
• 95th percentile?
• Tester turnover
47. Experiencing
• High maintenance automation due to dependencies
• Hard to debug and isolate problems
• Mock two key external services
• Unique tracing id throughout their logging
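A unique tracing id threaded through logging, as that team adopted, might look like the sketch below. This is a hypothetical illustration using Python's standard `logging` and `contextvars` modules; the logger name and `handle` function are invented.

```python
# Hypothetical sketch: every log line within a unit of work carries
# the same tracing id, making problems far easier to isolate and debug.

import contextvars
import logging
import uuid

trace_id = contextvars.ContextVar("trace_id", default="-")

class TraceFilter(logging.Filter):
    """Stamp the current tracing id onto every log record."""
    def filter(self, record):
        record.trace_id = trace_id.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(trace_id)s %(message)s"))
handler.addFilter(TraceFilter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

def handle(request_name: str) -> str:
    tid = uuid.uuid4().hex[:8]      # new id per unit of work
    trace_id.set(tid)
    log.info("started %s", request_name)
    log.info("finished %s", request_name)
    return tid                      # the same id prefixes both lines

handle("checkout")
```

In a real system the id would also be forwarded to downstream services (for example in an HTTP header) so one id follows the request across the whole architecture.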
48. Doing
• Rate each smell for
your current context
based on the handout
• If you have friends here
from your company, get
together.
49. Learning
• Causes are not obvious
• The whole architecture is the target
area, not just the app.
• Looking for smells can yield testability
improvements
• Common smells and unique smells…
62. Learning
• Smells give us clues on where to focus
• CODS provides a framework for
change
• Visualizing is great for advocacy
• Choose changes carefully…
Controllability determines the depth and breadth of our testing efforts - how deep you can go while still knowing what breadth you have covered. Without it, you can go down the rabbit hole and miss the bigger picture. Without control, testing is pushed later and later.
Controllability determines what scenarios we can exercise - whether that is setting test data to the right state or ensuring a dependency returns a specific response.
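The second point above - ensuring a dependency returns a specific response - can be sketched with a stub. This is a hypothetical example; the payment-gateway names are invented for illustration.

```python
# Hypothetical sketch: substitute a stub so the dependency returns
# exactly the response the scenario needs. Names are illustrative.

class DeclinedStub:
    """Stand-in payment gateway that always declines."""
    def charge(self, amount_cents: int) -> dict:
        return {"status": "declined", "reason": "insufficient_funds"}

def place_order(gateway, amount_cents: int) -> str:
    result = gateway.charge(amount_cents)
    return "confirmed" if result["status"] == "approved" else "rejected"

# The decline path becomes a controllable scenario rather than
# something that only happens by accident against a real gateway.
print(place_order(DeclinedStub(), 1999))  # rejected
```

Without this seam, exercising the decline scenario would mean coaxing a real payment provider into failing on demand, which is exactly the kind of lost control that pushes testing later and later.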
2000 bugs in 2 years
Communicated through tickets
Long test cycles against builds on long-lived environments
Mastered weirdly named tooling: “Quality Centre”
Left with a weird feeling - we did tons of testing, but we never got any faster, and no one got what they wanted…