This is a call to action. On our cross-functional teams and during our DevOps transformations we say that testing is for the whole team and that quality is everyone's responsibility. But how much are we really doing to make this happen? We often work on systems that are hard to test for many reasons, yet if we simply do more testing and write more automation, we neglect what should be our main mission: advocating for increasing levels of testability, so that everyone truly gets involved in testing. We all have stories about something that was difficult to test, was often never tested at all, or was simply left for the tester to figure out. It doesn't have to be this way.
During my talk I want to introduce a set of principles for testability engineering: a new way to approach our work as testers. These principles tackle how we make our systems more observable and controllable, how we share knowledge across teams, and how we improve the testability of our dependencies. I believe it is time to create a new focus on testability, as it affects everything we do, everything our teams do, and, beyond that, how value is delivered to customers.
I want you to take away from the talk:
* Why a focus on testability can multiply your effectiveness as a tester
* What the principles of testability engineering are and how to advocate for them
* How you can make iterative changes to what you do in order to embrace testability
New technology and complexity are rendering many software development techniques and paradigms obsolete at an increasing rate. We already exist in a space where an infinite number of tests, of an array of different types, could be performed. A new mission is needed: one that leverages the varied talents of all kinds of testers and culminates in a new focus on the exponential benefits that testability brings.
The document discusses the concept of a "walking skeleton" which is a minimal implementation of a system that links the main architectural components and proves the chosen architecture through a small end-to-end functionality. It should use the final architecture and allow validating the architecture early in the development process. A walking skeleton example for a video sharing service is described that would implement basic functionality for uploading, storing, and viewing videos to prove the suitability of the chosen technologies.
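The walking skeleton described above can be sketched in a few lines. This is a minimal, hedged illustration: the video service, its store, and all names are invented, and real blob storage and databases are faked in-process so the thin upload-store-view slice still crosses every architectural layer.

```python
# A toy "walking skeleton" for a hypothetical video sharing service:
# the thinnest end-to-end slice (upload -> store -> view) that touches
# every layer, with storage faked in memory for illustration.

class VideoStore:
    """Storage layer: stands in for blob storage plus a metadata DB."""
    def __init__(self):
        self._blobs = {}

    def put(self, video_id, data):
        self._blobs[video_id] = data

    def get(self, video_id):
        return self._blobs[video_id]


class VideoService:
    """Application layer: upload and view, nothing else yet."""
    def __init__(self, store):
        self._store = store

    def upload(self, video_id, data):
        self._store.put(video_id, data)
        return video_id

    def view(self, video_id):
        return self._store.get(video_id)


def test_walking_skeleton_end_to_end():
    # One end-to-end check proves the slices are wired together.
    service = VideoService(VideoStore())
    vid = service.upload("cat-video", b"fake-bytes")
    assert service.view(vid) == b"fake-bytes"
```

The value is not in the fake layers themselves but in the seams between them: swapping `VideoStore` for a real storage client later should not change the end-to-end test.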
This document outlines the agenda and objectives for a DevOps transformation workshop. The workshop will cover DevOps foundations, including value stream mapping exercises. It will define DevOps and discuss how to map the current software delivery lifecycle. Key aspects like cycle time, bottlenecks, wait times and processing times will be examined. The workshop aims to help organizations identify inefficiencies and develop future state solutions to reduce cycle times and implement DevOps best practices.
Test-Driven Development: Why, How and Smells - Prowareness
These are the slides used in our Mastering Agile Development session in September 2012. They give some insights into the why, the how, and the smells of doing Test-Driven Development.
Presentation from the World Conference on Next Generation Testing 2015, conducted in Bangalore, India, on 'Anti-Patterns of Testing for Continuous Delivery Adoption'. This presentation covered the anti-patterns of testing during continuous delivery adoption: collaboration, test automation architecture, test suite organization, lack of a test strategy for parallel execution in the build pipeline, test code issues, and others. It also highlighted how such challenges make it difficult for any team to adopt continuous delivery in the true sense and to move further into continuous deployment and DevOps practices.
This document discusses the concept of a "walking skeleton", which is a minimal implementation of a software system that links together the main architectural components and allows the architecture and functionality to evolve together. It provides examples of challenges in implementing a walking skeleton for a claims revamp project at a company. The walking skeleton started as more of a spike but helped identify technical challenges and created a foundation to resume work later.
This document discusses how organizations can prepare for disasters by making failures routine through practices like chaos engineering and active monitoring. It recommends:
1) Practicing failure scenarios as a team to improve disaster response procedures and checklists.
2) Introducing variables like server crashes and network issues in production environments using chaos engineering to test resiliency.
3) Monitoring application health over time to detect performance trends that could indicate future issues.
4) Treating failures as routine events by having established procedures and checklists to minimize panic and focus response efforts.
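Point 2 above, introducing faults deliberately to test resiliency, can be sketched as a toy chaos experiment. Everything here is illustrative rather than a real chaos tool: a wrapper injects `ConnectionError`s into a fake dependency at a chosen rate, and the check is that the caller's retry/fallback logic still produces sane answers.

```python
# Toy chaos experiment: inject faults into a dependency at a given rate,
# then verify the retry/fallback path keeps the system's answers sane.
# All functions and rates are invented for illustration.
import random

def flaky(func, failure_rate, rng):
    """Wrap func so a fraction of calls raise an injected fault."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)
    return wrapper

def fetch_price(item):
    # Stand-in for a network call to a pricing service.
    return {"widget": 10}[item]

def resilient_fetch_price(fetch, item, retries=8, fallback=0):
    # The behaviour under test: retry on failure, degrade to a fallback.
    for _ in range(retries):
        try:
            return fetch(item)
        except ConnectionError:
            continue
    return fallback

rng = random.Random(42)  # seeded so the experiment is repeatable
chaotic = flaky(fetch_price, failure_rate=0.3, rng=rng)
results = [resilient_fetch_price(chaotic, "widget") for _ in range(100)]
# Every result is either the real price or the declared fallback,
# even though roughly a third of the underlying calls failed.
assert set(results) <= {10, 0}
```

In production-grade chaos engineering the faults land in real infrastructure rather than a wrapper, but the experimental shape is the same: a steady-state hypothesis, an injected variable, and a check that the system held.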
Performance Metrics for your Delivery Pipeline - Wolfgang Gottesheim - JAX London 2014
The document discusses the importance of measuring performance metrics throughout the software development lifecycle from testing to production. It argues that performance should be a quality gate for software releases. Key points made include that unit and integration tests could measure performance metrics; acceptance and load tests are important for performance but are later in the cycle; and performance should be incorporated into continuous delivery processes with automated collection and analysis of metrics across builds.
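The claim that unit tests could measure performance metrics can be made concrete with a small sketch. This is a hedged illustration, not the talk's own approach: the function under test and its time budget are invented, and the test fails the build when the budget is exceeded, turning performance into a quality gate at the cheapest point in the cycle.

```python
# Sketch of a unit-level performance quality gate: fail the build when a
# hot function exceeds its time budget. Function and budget are invented.
import time

def parse_order_line(line):
    """A hypothetical hot-path function: parse 'SKU, qty' order lines."""
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

def test_parse_order_line_meets_budget():
    # Budget: 10,000 parses in well under a second on any CI box.
    start = time.perf_counter()
    for _ in range(10_000):
        parse_order_line("SKU-123, 4")
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"perf budget exceeded: {elapsed:.3f}s"
```

Budgets this generous catch order-of-magnitude regressions without making the build flaky; tighter budgets belong in dedicated load tests later in the pipeline, as the summary above notes.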
This document discusses implementing team wide testing to improve quality and reduce bugs. It describes current problems like developers feeling pressure to deliver code quickly without proper testing. This leads to bugs found later by testers, wasting time on rework. The team analyzed root causes like lack of test automation and testers. They decided to break down silos between developers and testers. The new process involves test-driven development, continuous testing, and demos at quality gates. While not all user stories were completed, the delivered stories had no bugs found by clients, showing the new process improved quality.
This document discusses test automation challenges at an investment bank and lessons learned. It outlines problems with lengthy manual regression testing. An attempt was made to use Jameleon for test automation but it caused issues. They identified needs for metrics, definitions of done, and separating test connections. Recommendations include using tools like Selenium and SoapUI with a Jenkins/JIRA setup. While quick wins are possible, separating test connections and fully defining requirements are important for successful test automation.
This document discusses dependencies between agile teams and how to manage them. It defines dependencies as blockers that can kill agility. Dependencies are created both externally from other teams and internally from things like technical debt or processes. The document recommends testing your own components independently through unit testing to avoid dependencies. It also discusses different types of automated tests including unit, integration, acceptance, and exploratory tests and when each is appropriate. Continuous integration is recommended to catch issues early. Simulators and mocks can be used to reduce dependencies when promoting code to production.
Maybe developer testing is more trouble than it's worth. Teams have found writing tests to be a hindrance, so they reduce testing in order to deliver "faster". They feel that tests actually make changing the code and fixing bugs harder. They might be right.
see also: https://www.linkedin.com/pulse/testing-cable-chain-mark-windholtz/
Agile Testing in Enterprise: Way to Transform - SQA Days 2014 - Andrey Rebrov
This document discusses problems that can occur with traditional testing approaches and how to transition to agile testing practices. It provides two examples of organizations that struggled with long regression cycles, missed estimates, low quality and stress. The root causes are identified as document-based collaboration, lack of testing knowledge by developers, and infrastructure management chaos. Recommendations are made to use Kanban, collaborate on requirements, implement smart metrics, test automation, and a DevOps approach. Specific practices that were implemented include risk management, specification by example, test-driven development, continuous integration, configuration automation, and test automation. The results were increased delivery rates up to 5 times, zero bugs in production, no overtime, and more enjoyable work.
PuppetConf 2017: Test Driving Your Infrastructure with Jesus Alvarez & Jesse ... - Puppet
What's the best part of starting on something new? You get the opportunity to take everything you've learned and make something better. As a system grows over time, one of the biggest challenges we face is ensuring it's as stable tomorrow as it was yesterday. As part of a move from physical data centers to AWS over the past year, we were given the opportunity to bring together a group of people with varied experience writing software and designing complex infrastructures and systems to solve this problem. Our collective experiences led us to make testing and automation central to everything we did. We've achieved this using a variety of tools such as Terraform, Puppet, Test Kitchen, Serverspec, Jenkins, Docker, and even some custom-written libraries to fill in the gaps. There are a lot of moving pieces to track when deploying and configuring infrastructure and applications. This talk will focus on our solution. If you'd like to hear about how we've orchestrated standing up and maintaining an environment, ensuring its stability by testing along the way, this talk is for you. We will discuss the tools we've used and what problems they've allowed us to solve as we've moved through this migration. We'll discuss what has been great as we've progressed, and what could have gone better. We'll talk about what other tools exist on the market, and why we made the decisions we did along the way.
We are entering a world where everything must be done quicker. You must deliver code faster. You must deploy faster. How can you deliver and deploy faster without compromising your professionalism? How can you be sure you are delivering what your client has asked you?
In short, testing is the only way to be sure you're delivering what someone asked of you. Often we use BDD tools such as FitNesse, which has gained popularity over recent years.
There are a number of integration / BDD test tools out there that help you deliver high-quality software through tests. It's easy to pick up any tool from its tutorials and start writing tests. But as I found out the hard way, this can quickly spiral into a state where the tests are giving you and your team hell and are costing more than the value they deliver.
Using FitNesse and JUnit as examples, I will share things that I have learnt working on large enterprise and vendor systems, and help you avoid your own path to hell.
How Engineering Practices Help Business - Andrey Rebrov
This document provides advice on how to introduce new engineering practices and technologies to a team or business. It discusses several examples of proposed new practices and technologies such as test automation, continuous integration, refactoring, and DevOps. For each, it advises how to demonstrate the benefits through examples and metrics, how to gain buy-in from various stakeholders, and pitfalls to avoid such as claiming a practice is necessary just because a famous person recommends it. The overall message is that new practices must provide clear value and be introduced through demonstration and collaboration rather than dictates.
Slides from Jesper Ottosen's 2017 Fall OnlineTestConf session – Shifting is more than shift left.
Change is happening to the testing activities. Shift-left automates and codifies the testing activities. Shift-right does it for production.
This session was about a couple of other trends, changes, and shifts that are happening to testers and test managers.
– Shift-Coach, where it's more about coaching teams.
– Shift-SME, where it's more about business savvy.
– Shift-Deliver, where it's more about the road to production.
www.onlinetestconf.com
This presentation shares our experience of forming an integrated Development/QA team on Perficient projects applying Scrum, along with some of our best practices for securing high quality.
Solving MLOps from First Principles - Dean Pleban, DagsHub - DevOpsDays Tel Aviv
One of the hardest challenges data teams face today is selecting which tools to use in their workflow. Marketing messages are vague, and you continuously hear of new buzzwords you “just have to have in your stack”. There is a constant stream of new tools, open-source and proprietary that make buyer’s remorse especially bad. I call it “MLOps Fatigue”.
This talk will not discuss a specific MLOps tool, but instead present guidelines and mental models for how to think about the problems you and your team are facing, and how to select the best tools for the task. We will review a few example problems, analyze them, and suggest Open Source solutions for them. We will provide a mental framework that will help tackle future problems you might face and extract the concrete value each tool provides.
What you’ll learn
You’ll learn what signals to watch for to notice you might have MLOps fatigue. How to define the challenge you’re facing and which questions to ask in order to build a “decision tree” for selecting the best-suited tools for the task. A few examples for using this framework in practice on challenges involving data management and automating training/pipeline tasks
About two years ago we faced a crisis in our DevOps consulting company: market demand was higher than we could supply. The traditional recruiting process, depending on CVs and artificial credentials, was not working. So we came up with an alternative solution, and since then we have been growing exponentially and diversely. In this talk we will show the practical tools we deployed in order to increase our capacity, and how these tools overcome the inherent bias in the process.
Automated Testing with Logic Apps and Specflow - BizTalk360
At Integration Monday, we have had feedback from the audience that people are struggling with understanding how to do automated testing with Logic Apps. Back in the day Mike Stephenson wrote a lot of guidance about automated testing & unit testing for BizTalk. So he took up the challenge of trying to help out on this one.
In this session we will discuss some of the challenges around testing Logic Apps, then work through some examples of how testing can be performed, and finally look at an approach that should put us in a solid place to test Logic Apps both as an individual developer and via an automated build.
How MS Does DevOps - Developer Developer Developer 2018 - tspascoal
This is NOT a session about MS DevOps tools. This is the story of how the VSTS team transformed from shipping an on-premises server product every couple of years to shipping a cloud service multiple times a day. In the process, almost everything about how this team of 800 people works has changed. We had to figure out how to do agile at scale and how to transform into a microservice cloud architecture; we completely restructured teams and roles, threw out a suite of tens of thousands of tests and started over, and went from almost zero telemetry to 8+ TB/day while figuring out how to do anything meaningful with all that data. Many mistakes were made along the way, and I'll be sharing the lessons learned.
The document discusses various aspects of automating software testing. It suggests automating the detection of flaky tests, determining the severity of test failures, converting tests to more isolated unit tests, and using usage data to determine what to test next. It emphasizes that while automation can improve testing efficiency, human oversight is still needed, and code reviews serve as the link between automated and manual processes.
Github Copilot and tools that help us code better are cool. But I’m lucky if I spend 90 minutes a day writing code. We really need to optimize the hours we spend reviewing code, updating tickets and tracing where our code is deployed. Learn how I save an hour a day streamlining non-coding tasks.
This talk is unique because 99% of developer productivity tools and hacks are about coding faster, better, smarter. And yet the vast majority of our time is spent doing all of this other stuff. After I started focusing on optimizing the 10 hours I spend every day on non-coding tasks, I found my productivity went up and my frustration at annoying stuff went way down. I cover how to save time by reducing cognitive load and by cutting the menial, non-coding tasks that we have to perform 10-50 times every day. For example:
A bug or hotfix comes through and you want to start working on it right away, so you create a branch and start fixing. What you don't do is create a Jira ticket, and later your boss/PM/CSM yells at you due to lack of visibility. I share how I automated ticket creation in Slack by correlating GitHub to Jira.
You have 20 minutes until your next meeting, so you open a pull request and start a review. But you get pulled away halfway through, and when you come back the next day you've forgotten everything and have to start over. Huge waste of time. I share an ML job I wrote that tells me how long a review will take, so I can pick PRs that fit the amount of time I have.
You build it. You ship it. You own it. Great. But after I merge my code I never know where it actually is. Did the CI job fail? Is it released under a feature flag? Did it just go GA to everyone? I share a bot I wrote that tells me where my code is in the pipeline after it leaves my hands, so I can actually take full ownership without spending tons of time figuring out what code is in what release.
DevQAOps - Surviving in a DevOps World - Winston Laoh
Talk given by Winston Laoh at the QA/LA meetup hosted at Q Los Angeles. The goal of the presentation was to inform and persuade test related engineers on how to integrate into DevOps organizations.
TDD on Android. Why and How? (Coding Serbia 2019) - Danny Preussler
The document discusses test-driven development (TDD) on Android. It covers:
- The history and principles of TDD, including writing failing tests first and then only producing code to pass those tests.
- How TDD works in practice using the "red-green-refactor" process of writing a failing test, passing code, then refactoring.
- Benefits of TDD like fewer bugs, easier refactoring, and faster long-term development.
- Considerations for testing Android code, such as using mockable classes and avoiding direct testing of activities/fragments.
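The red-green-refactor cycle described above fits in a few lines. The kata and names below are my own illustration, not from the talk (which is Android-focused); the shape of the cycle is the same in any language: the assertion is written before the code it exercises.

```python
# Red-green-refactor in miniature, using a roman-numeral kata as an
# invented example. Step 1 (red): write a failing test first. Step 2
# (green): the simplest code that passes. Step 3: refactor with the
# tests as a safety net.

def to_roman(n):
    # The "green" version was nested ifs; this is the refactored,
    # table-driven form - behaviour unchanged, tests still pass.
    table = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in table:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def test_to_roman():
    # These assertions existed before to_roman did - the "red" step.
    assert to_roman(1) == "I"
    assert to_roman(4) == "IV"
    assert to_roman(9) == "IX"
    assert to_roman(14) == "XIV"
```

The refactor step is only safe because the tests pin the behaviour down first; that is the long-term payoff the bullet on "easier refactoring" refers to.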
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Spec By Example, or How to Teach People to Talk to Each Other - Andrey Rebrov
This document introduces an approach called "Spec By Example" to improve communication between developers, QA analysts, and clients. It involves impact mapping to focus on user stories, QA-and-analyst pairing to create examples that describe requirements, and diverge-and-merge sessions for the team to collaboratively build out examples. The examples are then optimized by compressing tables and introducing parameters before being linked to automated tests through a behavior-driven development approach. This unified process gives requirements, test cases, and code a single source of truth, makes it easy to trace work back to business needs, improves estimation and demos, and reduces rework and issues.
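The "compressed example table linked to automated tests" idea can be sketched directly. The shipping-cost rule and table below are invented for illustration, not from the talk: the point is that the same table the team agrees on during a workshop also drives the automated check, so there is one source of truth.

```python
# Spec-by-example sketch: a requirement captured as a compressed example
# table, with the same table driving the automated check. The domain
# rule and values are invented for illustration.

def shipping_cost(order_total, is_member):
    """Rule under discussion: members and orders of 50+ ship free."""
    if is_member or order_total >= 50:
        return 0
    return 5

# The workshop's compressed table, kept readable as data:
# | order_total | is_member | expected_cost |
EXAMPLES = [
    (49, False, 5),   # just under the free-shipping threshold
    (50, False, 0),   # boundary: exactly at the threshold
    (10, True,  0),   # members always ship free
]

def test_shipping_cost_examples():
    for order_total, is_member, expected in EXAMPLES:
        assert shipping_cost(order_total, is_member) == expected
```

In a BDD toolchain the table would live in a feature file or wiki page instead of a Python list, but the traceability property is the same: change the agreed examples and the automated check changes with them.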
Giving Automated Tests the Love They Deserve at Listings - Jordi Pradel
The document summarizes the speaker's approach to automated testing at their company Listings. Their key points are:
1. They focus on short feedback loops in testing and builds to help developers identify and fix bugs quickly. Tests and builds should be fast.
2. Their "test pyramid" is non-traditional, with fast broad stack tests using in-memory implementations of side-effecting components like databases.
3. They invest in tooling to provide really good test failure feedback.
4. They only run tests affected by a code change to keep the testing process fast. They modularize their codebase with testing in mind.
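Point 2 above, broad stack tests backed by in-memory implementations of side-effecting components, can be sketched as follows. The repository and service here are invented stand-ins, not the Listings codebase: the real service logic runs unchanged, only the database is swapped for an in-memory fake, which is what keeps the feedback loop short.

```python
# Sketch of a fast "broad stack" test: real service logic exercised
# against an in-memory stand-in for the database, so the test crosses
# the full stack without I/O. All interfaces here are invented.

class InMemoryListingRepo:
    """In-memory implementation of a hypothetical listing repository."""
    def __init__(self):
        self._rows = {}

    def save(self, listing_id, listing):
        self._rows[listing_id] = listing

    def find(self, listing_id):
        return self._rows.get(listing_id)


class ListingService:
    """The production logic under test, unaware of the fake beneath it."""
    def __init__(self, repo):
        self._repo = repo

    def publish(self, listing_id, title):
        self._repo.save(listing_id, {"title": title, "status": "published"})

    def status_of(self, listing_id):
        row = self._repo.find(listing_id)
        return row["status"] if row else "unknown"


def test_publish_broad_stack():
    service = ListingService(InMemoryListingRepo())
    service.publish("l1", "Cosy flat")
    assert service.status_of("l1") == "published"
    assert service.status_of("l2") == "unknown"
```

The trade-off is that the fake must faithfully mirror the real component's contract; a thin suite of contract tests run against both implementations is the usual way to keep them honest.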
Scrum teams use burn-down charts to represent and track iteration progress, and the most common burn-down chart is the time-based one. But when doing that, our team ran into problems: a time-based burn-down does not accurately represent true velocity and feature completion. We experienced a situation where team velocity looked pretty good, meaning the team could "burn" enough hours, while we didn't DELIVER as many features as the burn-down suggested. This topic is a case study based on what we did to resolve those problems.
Architectural Testability Workshop for Test Academy Barcelona - Ash Winter
Workshop delivered at Test Academy Barcelona on 30th January 2020. Including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
Principles Before Practices: Transform Your Testing by Understanding Key Conc... - TechWell
It’s one thing to be exposed to new techniques from conferences and training courses, but it’s quite another thing to apply them in real life. A major reason is that people tend to focus on learning the technique without first grasping the underlying principles. Basic testing principles, such as the pesticide paradox of software defects and defect clustering, have been known for many years. Other principles, such as “Test automation is not automatic” and “Not every software failure is a defect,” are learned by experience. Once you grasp the principle, particular techniques become more applicable and extensible. However, principles take time to learn and much practice to apply well. Randy Rice explains why true learning and application are not instant and what it takes to really absorb what we learn. Randy shows how two specific techniques—pairwise testing and risk-based testing—can be misapplied unless the key concepts are first understood. Leave knowing how to build your own set of software testing principles that can be applied in many contexts
This document discusses test automation challenges at an investment bank and lessons learned. It outlines problems with lengthy manual regression testing. An attempt was made to use Jameleon for test automation but it caused issues. They identified needs for metrics, definitions of done, and separating test connections. Recommendations include using tools like Selenium and SoapUI with a Jenkins/JIRA setup. While quick wins are possible, separating test connections and fully defining requirements are important for successful test automation.
This document discusses dependencies between agile teams and how to manage them. It defines dependencies as blockers that can kill agility. Dependencies are created both externally from other teams and internally from things like technical debt or processes. The document recommends testing your own components independently through unit testing to avoid dependencies. It also discusses different types of automated tests including unit, integration, acceptance, and exploratory tests and when each is appropriate. Continuous integration is recommended to catch issues early. Simulators and mocks can be used to reduce dependencies when promoting code to production.
Maybe developer testing is more trouble than it's worth. Teams have found writing tests to be a hinderance. Then they reduce testing in order to deliver "faster". They feel that tests actually make changing the code and fixing bugs harder. They might be right.
see also: https://www.linkedin.com/pulse/testing-cable-chain-mark-windholtz/
Agile Testing in Enterprise: Way to transform - SQA Days 2014Andrey Rebrov
This document discusses problems that can occur with traditional testing approaches and how to transition to agile testing practices. It provides two examples of organizations that struggled with long regression cycles, missed estimates, low quality and stress. The root causes are identified as document-based collaboration, lack of testing knowledge by developers, and infrastructure management chaos. Recommendations are made to use Kanban, collaborate on requirements, implement smart metrics, test automation, and a DevOps approach. Specific practices that were implemented include risk management, specification by example, test-driven development, continuous integration, configuration automation, and test automation. The results were increased delivery rates up to 5 times, zero bugs in production, no overtime, and more enjoyable work.
PuppetConf 2017: Test Driving Your Infrastructure with Jesus Alvarez & Jesse ...Puppet
What's the best part of starting on something new? You get the opportunity to take everything you've learned, and make something better. As a system grows over time, one of the biggest challenges we face is ensuring it’s as stable tomorrow as it was yesterday. As part of a move from physical data centers to AWS over the past year we were given the opportunity to bring together a group of people with varied experience writing software and designing complex infrastructures and systems to solve this problem. Our collective experiences led us to make testing and automation central to everything we did. We’ve achieved this using a variety of tools to such as Terraform, Puppet, Test Kitchen, Serverspec, Jenkins, Docker, and even some custom written libraries to fill in the gaps. There are a lot of moving pieces to track when deploying and configuring infrastructure and applications. This talk will focus on our solution. If you’d like to hear about how we’ve orchestrated standing up, and maintaining an environment, ensuring its stability by testing along the way, this talk will be for you. We will discuss the tools we’ve used, and what problems they’ve allowed us to solve as we’ve moved through this migration. We’ll discuss what has been great as we’ve progressed, and what could have gone better. We’ll talk about what other tools exist on the market, and why we made the decisions we did along the way.
We are entering a world where everything must be done quicker. You must deliver code faster. You must deploy faster. How can you deliver and deploy faster without compromising your professionalism? How can you be sure you are delivering what your client has asked you?
In short, testing is the only way to be sure you’re delivering what someone asked you to. Often we use BDD Tools such as FitNesse which gained popularity over the recent years
There are a number of integration / BDD test tools out there that help you deliver a high quality software through tests. Its easy to pick up any tool from just their tutorials and start writing tests. But as I found out the hard way, this can quickly spiral into a state where the tests are giving you and your team hell and are worth less than the value the tests are delivering.
Using FitNesse and Junit as examples, I will share things that I have learnt working on large enterprise and vendor systems and help you avoid your own path to hell.
How engineering practices help business – Andrey Rebrov
This document provides advice on how to introduce new engineering practices and technologies to a team or business. It discusses several examples of proposed new practices and technologies such as test automation, continuous integration, refactoring, and DevOps. For each, it advises how to demonstrate the benefits through examples and metrics, how to gain buy-in from various stakeholders, and pitfalls to avoid such as claiming a practice is necessary just because a famous person recommends it. The overall message is that new practices must provide clear value and be introduced through demonstration and collaboration rather than dictates.
Slides from Jesper Ottosen's 2017 Fall OnlineTestConf session – Shifting is more than shift left.
Change is happening to the testing activities. Shift-left automates and codifies the testing activities. Shift-right does it for production.
This session was about a couple of other trends, changes, and shifts that are happening to testers and test managers.
– Shift-Coach, where it’s more about coaching teams.
– Shift-SME, where it’s more about business savvy.
– Shift-Deliver, where it’s more about the road to production.
www.onlinetestconf.com
This presentation shares our experience forming an integrated Development/QA team on Perficient projects applying Scrum, and some of our best practices for securing high quality.
SOLVING MLOPS FROM FIRST PRINCIPLES, DEAN PLEBAN, DagsHub – DevOpsDays Tel Aviv
One of the hardest challenges data teams face today is selecting which tools to use in their workflow. Marketing messages are vague, and you continuously hear of new buzzwords you “just have to have in your stack”. There is a constant stream of new tools, open-source and proprietary that make buyer’s remorse especially bad. I call it “MLOps Fatigue”.
This talk will not discuss a specific MLOps tool, but instead present guidelines and mental models for how to think about the problems you and your team are facing, and how to select the best tools for the task. We will review a few example problems, analyze them, and suggest Open Source solutions for them. We will provide a mental framework that will help tackle future problems you might face and extract the concrete value each tool provides.
What you’ll learn
You’ll learn what signals to watch for to notice you might have MLOps fatigue, how to define the challenge you’re facing and which questions to ask in order to build a “decision tree” for selecting the best-suited tools for the task, and a few examples of using this framework in practice on challenges involving data management and automating training/pipeline tasks.
About 2 years ago we faced a crisis in our DevOps consulting company - the market demand was higher than we could supply. The traditional recruiting process, depending on CVs and artificial credentials, was not working. So we came up with an alternative solution, and since then we have been growing exponentially and diversely. In this talk we will show the practical tools we deployed in order to increase our capacity, and how these tools overcome the inherent bias in the process.
Automated Testing with Logic Apps and Specflow – BizTalk360
At Integration Monday, we have had feedback from the audience that people are struggling with understanding how to do automated testing with Logic Apps. Back in the day Mike Stephenson wrote a lot of guidance about automated testing & unit testing for BizTalk. So he took up the challenge of trying to help out on this one.
In this session, we will discuss some of the challenges around testing Logic Apps, then work through some examples of how testing can be performed, and finally look at an approach which should put us in a solid place to be able to test Logic Apps both as an individual developer and via an automated build.
How MS Does Devops - Developer Developer Developer 2018 – tspascoal
This is NOT a session about MS DevOps tools. This is the story of how the VSTS team transformed from shipping an on-premise server product every couple of years, to shipping a cloud service multiple times a day. In the process, almost everything about how this team of 800 people works has changed. We had to figure out how to do agile at scale, how to transform into a microservice cloud architecture, complete a restructure of teams and roles, threw out a suite of tens of thousands of tests and started over, and went from almost 0 telemetry to 8+TB/day while figuring out how to do anything meaningful with all that data. Many mistakes were made along the way, and lessons were learned that I’ll be sharing.
The document discusses various aspects of automating software testing. It suggests automating the detection of flaky tests, determining the severity of test failures, converting tests to more isolated unit tests, and using usage data to determine what to test next. It emphasizes that while automation can improve testing efficiency, human oversight is still needed, and code reviews serve as the link between automated and manual processes.
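The first of those ideas, automatically detecting flaky tests, can be sketched as a simple re-run loop. This is a hypothetical illustration, not tied to any specific framework; `is_flaky` and the sample tests are invented names:

```python
import itertools

def is_flaky(test_fn, runs=10):
    """Re-run a test and flag it as flaky if its outcomes are inconsistent."""
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    # Flaky means it passed on some runs and failed on others.
    return len(outcomes) > 1

def stable_test():
    assert 1 + 1 == 2

# Deterministic stand-in for a nondeterministic test: alternates pass/fail.
_flip = itertools.cycle([True, False])
def flaky_test():
    assert next(_flip)

print(is_flaky(stable_test))  # False
print(is_flaky(flaky_test))   # True
```

A real implementation would hook into the test runner and persist outcome history across builds, but the core signal is the same: inconsistent results for identical code.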
Github Copilot and tools that help us code better are cool. But I’m lucky if I spend 90 minutes a day writing code. We really need to optimize the hours we spend reviewing code, updating tickets and tracing where our code is deployed. Learn how I save an hour a day streamlining non-coding tasks.
This talk is unique because 99% of developer productivity tools and hacks are about coding faster, better, smarter. And yet the vast majority of our time is spent doing all of this other stuff. After I started focusing on optimizing the 10 hours I spend every day on non-coding tasks, I found my productivity went up and my frustration at annoying stuff went way down. I cover how to save time by reducing cognitive load and by cutting menial, non-coding tasks that we have to perform 10-50 times every day. For example:
A bug or hotfix comes through and you want to start working on it right away, so you create a branch and start fixing. What you don’t do is create a Jira ticket, but then later your boss/PM/CSM yells at you due to lack of visibility. I share how I automated ticket creation in Slack by correlating Github to Jira.
You have 20 minutes until your next meeting, so you open a pull request and start a review. But you get pulled away halfway through, and when you come back the next day you've forgotten everything and have to start over. Huge waste of time. I share an ML job I wrote that tells me how long a review will take so I can pick PRs that fit the amount of time I have.
You build. You ship it. You own it. Great. But after I merge my code I never know where it actually is. Did the CI job fail? Is it released under a feature flag? Did it just go GA to everyone? I share a bot I wrote that personally tells me where my code is in the pipeline after it leaves my hands, so I can actually take full ownership without spending tons of time figuring out what code is in what release.
DevQAOps - Surviving in a DevOps World – Winston Laoh
Talk given by Winston Laoh at the QA/LA meetup hosted at Q Los Angeles. The goal of the presentation was to inform and persuade test related engineers on how to integrate into DevOps organizations.
TDD on android. Why and How? (Coding Serbia 2019) – Danny Preussler
The document discusses test-driven development (TDD) on Android. It covers:
- The history and principles of TDD, including writing failing tests first and then only producing code to pass those tests.
- How TDD works in practice using the "red-green-refactor" process of writing a failing test, passing code, then refactoring.
- Benefits of TDD like fewer bugs, easier refactoring, and faster long-term development.
- Considerations for testing Android code, such as using mockable classes and avoiding direct testing of activities/fragments.
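The red-green-refactor loop summarised above can be sketched in plain Python (the `word_count` function is a hypothetical example, unrelated to the Android material in the talk):

```python
import unittest

# Red: write the failing tests first, describing the behaviour we want.
class WordCountTest(unittest.TestCase):
    def test_counts_words_separated_by_spaces(self):
        self.assertEqual(word_count("red green refactor"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

# Green: write just enough production code to make the tests pass.
# Refactor: clean this up afterwards, re-running the tests after every change.
def word_count(text: str) -> int:
    return len(text.split())

# Run with: python -m unittest <this module>
```

The discipline is in the ordering: the test exists and fails before the production code does, so every line of code is justified by a test.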
Development is inherently collaborative. So why aren't you doing code review? This session discusses the importance of collaboration around your source code, the impact code review can have on development teams, and offers guidance on how to get started.
Atlassian Speaker: Matt Quail
Customer Speaker: Patrick Coleman of Dash
Key Takeaways:
* Peer code review explained
* Benefits and approaches to effective code review
Spec By Example or How to teach people to talk to each other – Andrey Rebrov
This document introduces an approach called "Spec By Example" to improve communication between developers, QA analysts, and clients. It involves impact mapping to focus on user stories, QA and analyst pairing to create examples that describe requirements, and diverge-and-merge sessions for the team to collaboratively build out examples. The examples are then optimized by compressing tables and introducing parameters before being linked to automated tests through a behavior-driven development approach. This unified process allows requirements, test cases, and code to have a single source of truth, makes it easy to trace work back to business needs, and improves estimation and demos while reducing rework and issues.
Giving automated tests the love they deserve at Listings – Jordi Pradel
The document summarizes the speaker's approach to automated testing at their company Listings. Their key points are:
1. They focus on short feedback loops in testing and builds to help developers identify and fix bugs quickly. Tests and builds should be fast.
2. Their "test pyramid" is non-traditional, with fast broad stack tests using in-memory implementations of side-effecting components like databases.
3. They invest in tooling to provide really good test failure feedback.
4. They only run tests affected by a code change to keep the testing process fast. They modularize their codebase with testing in mind.
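Point 2 above, swapping side-effecting components for in-memory implementations, might look roughly like this (the `Listing` domain object and repository names are invented for illustration, not taken from the talk):

```python
from typing import Dict, Optional

class Listing:
    def __init__(self, listing_id: str, title: str):
        self.listing_id = listing_id
        self.title = title

class ListingRepository:
    """Port the application depends on; backed by a real database in production."""
    def save(self, listing: Listing) -> None:
        raise NotImplementedError

    def find(self, listing_id: str) -> Optional[Listing]:
        raise NotImplementedError

class InMemoryListingRepository(ListingRepository):
    """Drop-in replacement for broad stack tests: same contract, no I/O."""
    def __init__(self) -> None:
        self._store: Dict[str, Listing] = {}

    def save(self, listing: Listing) -> None:
        self._store[listing.listing_id] = listing

    def find(self, listing_id: str) -> Optional[Listing]:
        return self._store.get(listing_id)

# A broad stack test exercises real application logic against the fake:
repo = InMemoryListingRepository()
repo.save(Listing("42", "Sea-view flat"))
assert repo.find("42").title == "Sea-view flat"
assert repo.find("missing") is None
```

Because the fake honours the same contract as the database-backed implementation, broad stack tests stay fast while still exercising the application end to end.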
Scrum teams use a burn-down chart to represent and track iteration progress, and the most common burn-down chart is the time-based one. But in doing that our team ran into problems: a time-based burn-down is not an accurate representation of true velocity and feature completion. We experienced a situation where team velocity looked pretty good, meaning the team could "burn" enough hours, while we didn't DELIVER as many features as the burn-down suggested. This topic is a case study based on what we did to resolve our problems.
Architectural Testability Workshop for Test Academy Barcelona – Ash Winter
Workshop delivered at Test Academy Barcelona on 30th January 2020. Including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
Principles Before Practices: Transform Your Testing by Understanding Key Conc... – TechWell
It’s one thing to be exposed to new techniques from conferences and training courses, but it’s quite another thing to apply them in real life. A major reason is that people tend to focus on learning the technique without first grasping the underlying principles. Basic testing principles, such as the pesticide paradox of software defects and defect clustering, have been known for many years. Other principles, such as “Test automation is not automatic” and “Not every software failure is a defect,” are learned by experience. Once you grasp the principle, particular techniques become more applicable and extensible. However, principles take time to learn and much practice to apply well. Randy Rice explains why true learning and application are not instant and what it takes to really absorb what we learn. Randy shows how two specific techniques—pairwise testing and risk-based testing—can be misapplied unless the key concepts are first understood. Leave knowing how to build your own set of software testing principles that can be applied in many contexts
This document provides tips for surviving as a tester, based on stories that portray testing as easy when in practice it can turn difficult. It summarizes that:
1. Documentation is often lacking, tools are imperfect, and real-life projects introduce challenges beyond what stories suggest.
2. Risk-driven testing is recommended over artifact-driven testing to understand core functionality and risks through techniques like premortems and risk lists.
3. Using oracles like documentation, common sense, and experience can help spot problems, but oracles have more depth than commonly understood.
4. Automation should support testing by being consistent and efficient, with short, deterministic scripts, rather than replacing testing or focusing on end-to-end UI tests.
Moving to Continuous Delivery without breaking everything – XebiaLabs
This document discusses the need for continuous delivery and integration of testing into software development pipelines. It notes that while pipelines focus on speed of delivery, testing is needed to ensure quality and avoid breaking changes. A central hub is needed to provide a single view of all test results from different sources to help determine if a release is ready. Intelligent test selection and optimization could help run relevant test subsets to improve feedback speed while maintaining coverage. An integrated test analysis tool can help address these challenges of continuous delivery.
Michael Bolton - Two Futures of Software Testing – TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Two Futures of Software Testing by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Decoding the ‘Pair Testing’ in Agile! Presented by Krishna and Rama – oGuild
Software projects, especially product development teams, are today fast moving away from traditional development methodology and adopting Agile for the obvious advantages it brings to the table. However, as always, advantages are accompanied by a number of challenges. The session emphasised ‘The Power of Two’:
· Practical testing challenges in Agile
· Deep dive into Agile testing technique – Pair Testing
· Pragmatic approaches & applicability of Pair Testing
· Factors for successful implementation
· Pros & Cons of Pair Testing
The document discusses holistic testing in DevOps. It emphasizes testing early in collaboration with customers to help prevent defects. It also discusses testing throughout the deployment pipeline from development to production, including testing releases and using monitoring to observe and learn. The goal is to optimize the entire process from concept to delivering value to customers through continuous testing, delivery, learning and improvement.
This document provides an overview of an Agile (Scrum) training covering core concepts like eliminating waste, amplifying learning, and deciding late. It discusses Scrum mechanics like user stories, estimating story points, and sprint ceremonies. It also covers broader Scrum concepts like releases, Scrum of Scrums, and accepting change. The training draws on the presenter's experience from multiple companies that transitioned to Agile and books on successful Agile implementation.
1. The document discusses lessons learned about agile testing and automation. It emphasizes that testing is more than just checking and that both automated and exploratory testing are important.
2. It recommends automating output checking where possible but also using exploratory testing. It also stresses the importance of unit, integration and end-to-end tests as well as code reviews.
3. The document advocates for test-driven development and notes how automated tests can reduce regression testing time. It emphasizes that successful testing requires collaboration between developers, testers and business stakeholders.
Getting Started With Selenium at Shutterstock – Sauce Labs
The document discusses getting started with Selenium automation at Shutterstock. It outlines Shutterstock's goals of building an enterprise-strength automation platform quickly to catch bugs earlier. Shutterstock chose Selenium due to its open source nature and partnered with an automation company to meet its goals rapidly during a period of growth. After several months, Shutterstock has developed over 600 automated test cases that run in under 2.5 hours and aims to continue expanding its automation program.
1) Kanban originated from the Toyota Production System and focuses on limiting work in progress to improve flow.
2) Kanban has two meanings - "signboard" which refers to visualizing work, and "signal card" which was used to signal production needs.
3) The Kanban method focuses on evolutionary change, respecting existing roles and encouraging leadership at all levels. It emphasizes "stop starting, start finishing": completing work in progress before picking up new work.
- The document discusses the growth of the QA team at North from 1 person to 18 full-time employees and 2 co-ops over 3 years. It describes challenges faced such as hiring candidates and establishing processes as the team grew rapidly. Lessons learned include starting with basic tools, focusing on lightweight processes, and tailoring interviews and challenges to the role. The future includes expanding test automation and driving quality practices earlier in development.
The document discusses UX field research basics in three sections. Section 1 covers planning and preparation, including developing test plans, recruitment screeners, interview guides, and logistics. Section 2 discusses facilitating research through introductions, managing flow, improvisation, body language, and energy levels. Section 3 is about analyzing and reporting research findings by consolidating data, finding the overall story, and determining what story to tell from the research. The overall message is that thorough planning, proper facilitation in the field, and identifying patterns in the data are key to effective UX field research.
Four schools of testing: context-driven school – Holasz Kati
This document discusses four schools of software testing: the Analytical School, Factory School, Quality Assurance School, and Context-Driven School. It provides the key beliefs, questions, and examples of each school. The Analytical School sees testing as rigorous and technical, the Factory School focuses on cost and repeatability, the Quality Assurance School emphasizes following processes. The Context-Driven School believes the value of testing depends on context and emphasizes solving problems stakeholders care about through exploratory testing and judgment. An Agile Testing School is also mentioned that uses testing to prove development is complete through automated tests.
Testing is Not a 9 to 5 Job - talk by industry executive Mike Lyles – Applitools
** FULL WEBINAR RECORDING: https://youtu.be/IC6ul_-PLj8 **
Find your hire power: Learn what managers look for when hiring (and firing...) testers.
Being an expert tester is no different. While the art and craft of testing and being a thinking tester is something that is built within you, simply going to work every day and being a tester is not always enough.
Each of us can become “gold medal testers” by practicing, studying, refining our skills, and building our craft.
In this webinar, we will evaluate extracurricular activities and practices that will enable you to grow from a good tester to a great tester.
Listen to this webinar, and enjoy these key takeaways:
** Inputs from testing experts on how they improve their skills
** Suggestions for online training and materials, which should be studied
** How to leverage social media to interact with the testing community
** Contributions you can make to the testing community to build your name as a leading test engineer
IndigoCube - a peek at the future of software testing by Polteq, Ruud Teunissen – IndigoCube
This document discusses the future of software testing and how it has evolved over time. It begins with testing being unstructured and struggling for involvement, then progressed to defining processes and gaining recognition. Testing became more risk-based and independent. Over time, there has been a shift to agile methods, context-driven and exploratory testing, test automation, cloud computing, mobile and social aspects. The future will see even more emphasis on areas like DevOps, outsourcing, and optimizing people skills over rigid processes.
PMI-ACP: Domain 1 - Agile principles and mindset-v2.2_lite_4_84_pages – PhuocNT (Fresher.VN)
This document provides an overview of agile principles, frameworks and methodologies. It covers topics such as the Agile Manifesto, Scrum, Extreme Programming (XP), Lean, and Kanban. Key aspects of Scrum like roles, events, artifacts and practices are defined. XP's core values and practices are outlined. Lean concepts such as the seven wastes are introduced. The document also discusses strategies for adopting an Agile mindset and launching Scrum within an organization.
UX Field Research Basics, Abstractions 2019 – David Farkas
This document discusses UX field research basics. It covers planning and preparation, conducting research in the field, and analyzing findings. In the planning section, it describes creating documents like test plans, interview guides, and recruitment materials. For fieldwork, it discusses facilitating sessions, using improvisation techniques, and managing logistics. Finally, the analysis section explores consolidating data, identifying themes, and determining the best way to tell the research story. The overall message is that thorough planning and preparation are essential for high-quality field research.
The Drive-Thru Is Not Always Faster: Re-Thinking Your Testing Practice by Mik... – Mike Lyles
Mike Lyles shares his experience traveling from North Carolina to Melbourne, Australia, which was delayed due to mechanical issues with the first plane. He then discusses challenges testing organizations face, including lack of understanding of the value of testing. Finally, he suggests the most important thing for testing teams to be successful is to understand why they do what they do at their core.
Test Driven Development (TDD) is a software development process that involves writing tests before code. The TDD cycle involves three steps: 1) writing a failing test for the next piece of functionality, 2) writing just enough code to pass that test, and 3) refactoring the new and old code. TDD provides benefits like validated systems, code coverage, enabling refactoring, and documenting behavior. It promotes writing isolated, modular unit tests and designing code in a test-driven manner. While TDD has benefits, potential pitfalls include focusing on coverage over quality, neglecting refactoring steps, and writing overly broad tests.
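One pitfall mentioned above, writing overly broad tests, can be contrasted with isolated, modular unit tests in a short sketch (the `order_total` function is a hypothetical example):

```python
import unittest

# Hypothetical production logic used for illustration.
def order_total(prices, discount=0.0):
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

class OverlyBroadTest(unittest.TestCase):
    # Pitfall: one test asserting many behaviours at once. When it fails,
    # it is unclear which rule broke, and later assertions never run.
    def test_everything(self):
        self.assertEqual(order_total([]), 0)
        self.assertEqual(order_total([10.0, 5.0]), 15.0)
        self.assertEqual(order_total([100.0], discount=0.25), 75.0)

class FocusedUnitTests(unittest.TestCase):
    # Preferred: one behaviour per test, so a failure names the broken rule.
    def test_empty_order_totals_zero(self):
        self.assertEqual(order_total([]), 0)

    def test_sums_item_prices(self):
        self.assertEqual(order_total([10.0, 5.0]), 15.0)

    def test_applies_percentage_discount(self):
        self.assertEqual(order_total([100.0], discount=0.25), 75.0)
```

Both classes cover the same behaviour; the difference shows only when something breaks, which is exactly when tests earn their keep.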
A coaching aid for those who want to help others achieve greater testability within their development team and wider organisation. Additionally can be used to track your own journey.
Testability can make our testing lives so much better. But we need to sell it to those who can pay for the changes needed. Find out what they need (delivery, flow, stability, resilience) and how it can be measured, then use the handy examples below!
Testability is Everyone's Responsibility – Ash Winter
Testability is a first class concern for all disciplines within software development. There, I said it. No hedging, no nebulous phrasing, maybes or it depends.
Too often we labour under systems that are hard to test, manifesting in frantic searches for more testers, lengthy acceptance test runs, and fearful testing for regressions with a hopeful release at the end. Worst of all, it usually ends up with a project manager sat on the tester's desk asking 'when will testing be done?' It's never done, it can only stop, just so you know.
Throughout my career, the testability of a system has often been deemed to be the tester's concern. If something was hard to test, then it was the tester's problem. However, the causes of low testability affect the activities of all disciplines, whether it be speed of feedback for developers or flow of value-generating features for product managers.
During the talk, we will cover:
* How testability is a key advantage in building systems of ever increasing complexity.
* Why it's important for developers and operational stakeholders to build inherently testable systems.
* What testers can do to be catalysts for testability improvements.
The activity of testing is rarely the bottleneck; how testable your system is, is. Poor testability cannot be remedied by one discipline alone. It's for all of us to care about.
Testers Guide to the Illusions of Unit Testing – Ash Winter
One area that testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. Being able to influence this type of testing in a positive manner is a skill that testers will need to get to grips with, as more companies start to embrace a model of lone testers in cross functional teams. The shift of focus from primarily the testing that testers do, to the testing that the team does, is a key shift in thinking and behaviour.
To facilitate this shift, I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers and help us realise what other layers of testing can cover most effectively. The last point is pertinent here, as knowing and guiding unit testing brings the role of integration, acceptance and exploratory testing into sharp focus.
This is a topic that has always intrigued me, having predominantly worked as a single tester on a team for the last five or so years. I reached out to the community with the question “What do testers believe about unit testing?” and received a lot of engagement. The good users of Twitter added another 50 or so illusions that testers might have about this layer of testing. I figured that, based on that level of engagement, maybe this would make an interesting talk! It wasn’t only testers who responded, either, suggesting that there might be some shared illusions about unit testing that are cross-disciplinary.
The list alone is interesting but now I would like to share my analysis of it with you, focusing on:
* Recurring themes within the list and how to address them as a tester or developer.
* Particular illusions to look out for with examples from my recent past.
* A guide for developers to engage with testers on unit testing, and testers with developers.
Lightning talk based on the 10 P's of Testability by Robert Meaney, talk designed by Ash Winter. Make your testing life better by embracing testability as a team.
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but no answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and did battle with wider organisations' career models.
A Testers Guide to the Myths, Legends and Tales of Unit Testing – Ash Winter
One area that testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. Being able to influence this type of testing in a positive manner is a skill that testers will need to get to grips with, as more companies start to embrace a model of lone testers in cross functional teams. The shift of focus from primarily the testing that testers do, to the testing that the team does, is a key shift in thinking and behaviour.
To facilitate this shift, I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers, and help us realise what other layers of testing can cover most effectively. The last point is pertinent here, as knowing and guiding unit testing brings the role of integration, acceptance and exploratory testing into sharp focus.
This document discusses testing infrastructure components that lie beneath applications. It defines infrastructure as the building blocks applications depend on, like hardware, virtualization, containers, and software. The author argues that testers should care about infrastructure testing because infrastructure problems are often found at the wrong level, and the product as a whole is symbiotic between applications and infrastructure. Various testing principles, examples, tactics, and tools are provided for both human/deterministic and machine/deterministic testing of infrastructure, as well as human/random and machine/random approaches.
The document discusses the importance of testability in software development. It argues that focusing on testability, rather than just features, can lead to important benefits like reduced time to start testing, lower unplanned downtimes, and increased ability to observe and understand the entire system. The document advocates for approaches like enabling faster branching to devices for testing, more automation, and greater collaboration between development and operations teams to improve testability.
Nobody /really/ likes change; it's human nature. Testers have a special relationship with changing tools and techniques: when they change, we tend to flounder a little and end up very nervous about our place in the new world. Continuous delivery is one such circumstance, and I see and speak to many testers really struggling with it. However, with a significant shift in outlook and a chunk of personal development, testers can excel in environments such as these. It’s time to get out in front of a changing world, rather than always battling to catch up.
I want to share my experience of adding value as a tester in a continuous delivery environment: what new technologies and techniques I've learned, using your Production environment as an oracle, advocating testability and, most crucially, not overestimating what our testing can achieve. Testing is not the only form of feedback; it’s time to let go of some of the aspects of testing we cling to.
Continuous delivery adds richness and variety to our role as testers. To me, it is a facilitator for the autonomy and respect that testers have craved for a long time, so let’s get involved...
No one paying attention to your test strategy? It's too long. More crucially, it has no scrolls. Here's a template for the agile testing quadrants, but with scrolls for extra unforgettability.
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but few answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and did battle with wider organisations' career models.
This document discusses the author's experience consulting with various organizations to improve their testing capabilities. The author found that reviewing testing independently did not address root causes and that organizations often wanted quick fixes rather than systemic changes. The author eventually realized they needed to take a more holistic systems-thinking approach and focus on root causes rather than superficial solutions. They decided to focus more on systems thinking and become nomadic in their work.
The document discusses various topics related to software testing including testability, logging, environments, and feedback. It provides tips such as getting to know operations pain points, logging what matters without bloat, understanding past code intent, and focusing more on test flow and feedback rather than number of environments or testability. The document ends with inviting questions.
Pokémon GO faced several quality issues after its initial release such as app freezing, server overload causing scaling problems, inaccurate GPS, and device fragmentation affecting the ability to catch Pokémon. The document discusses lessons learned around testing and quality from these issues, including the need for a balanced testing approach with different types of testing like functionality, performance, compatibility, and usability testing. It also emphasizes that quality is multifaceted and requires continuously adding and improving features while focusing on the core idea.
This document discusses different perspectives on what testing is and provides the author's axioms about testing. The author believes that testing is a team-based activity where they help enable testing rather than doing most of the testing themselves. They view testing as a human, intellectual activity involving thinking, learning, sharing ideas. Complete testing is seen as impossible due to logical limitations and infinite possibilities, so balance and variation are important. Testing is considered a performance where the value is in applying it, not just thinking about it. Tools can assist testing but not replace it. Context is also important, as what works in one situation may not work in another.
This document provides instructions for collaboratively mindmapping an application using the online tool mindmup.com. Participants are instructed to get into groups of three, with one person creating a real-time collaborative mindmap session for their application and inviting the other two participants. They are then instructed to collaboratively map out the functions, forms, fields, views, integrations, and other aspects of the application to document its functionality and coverage for testing purposes.
This document discusses regression testing for a project to stabilize and upgrade the underlying technology of a system while maintaining normal service. It raises questions about how to test that nothing has changed when everything is changing, and whether changes to responsiveness and capacity would still be noticeable to customers. The regression testing involved risk modeling sessions with stakeholders to understand the system, exploring the system to determine what to test, and testing from the user interface down to the unit level. The results showed some issues were found and fixed, while other changes like increased speed and capacity were noticed by customers, raising questions about how to prevent customers noticing any differences after changes. It concludes that taking broad statements literally can be problematic, and that preventing all changes from being noticed may be
Coaching Model for Unrecognised Internal ModelsAsh Winter
This document proposes a coaching model to help testers recognize the testing models they already use intuitively and help them improve. The model focuses on using questioning rather than providing answers to guide testers to higher levels of thinking based on Bloom's Taxonomy. By getting testers to apply models through practice and reflection, and by iterating the model over time based on emergent needs, the coaching model aims to improve testing skills through collaborative learning.
9. If it's hard to test…
…it won’t get tested
…you the tester will test it
…your automation will be off target
…your non-functional testing will add little value
…your application will drive your ops people mad
…your team will have a role-based wedge FOREVER
@northern_tester Q449
10. Breaking it down…
•Reverse inspiration
•Testable Architecture
•Testability Engineering
•What YOU can do
11. My First Testing Role…
• 2000 bugs in 2 years
• Ticket communication
• Long test cycles
• Mastered “Quality Center”
• Winning?
12. A lack of testability warped what I thought testing was.
And it's happening to YOU
13. Testability is a superset
• Contains capabilities
• Hard to define “Easy to Test”
• Co-opt
• Testable architectures
19. #1 Each layer of your architecture must have feedback loops
• Without feedback, risk accumulates
• Tests of different types, speeds & cadence
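A minimal sketch of what "tests of different types, speeds & cadence" can look like in practice. The function under test, the suite names and the cadences below are invented for illustration, not from the talk; the point is that each layer of the architecture gets a feedback loop it can afford to run often.

```python
# Sketch: one suite per feedback cadence. Fast tests run constantly;
# slower, broader tests run less often against real dependencies.

def parse_quantity(raw: str) -> int:
    """Tiny unit under test: parse a non-negative item quantity."""
    value = int(raw)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

def test_parse_quantity() -> bool:      # milliseconds: run on every change
    return parse_quantity("3") == 3

def test_service_contract() -> bool:    # seconds/minutes: run per merge
    return True                         # stubbed here; would hit a deployed service

SUITES = {
    "unit": [test_parse_quantity],      # fastest cadence
    "contract": [test_service_contract],# slower cadence
}

def run(cadence: str) -> bool:
    """Run every test registered for one feedback loop."""
    return all(test() for test in SUITES[cadence])
```

In a real project the same separation is usually expressed through test-runner markers or separate pipeline stages rather than a hand-rolled registry.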
20. #2 Testable systems create the conditions for collaboration
• Crossing divides
• Integrated arguments
Continuous Testing in DevOps, Dan Ashby, https://danashby.co.uk/2016/10/19/continuous-testing-in-devops
21. #3 Architectures designed for simplicity improve team flow
• Architecture constraints manifest in team constraints
• Complexity excludes contribution
22. #4 Testable systems open and expand organisational relationships
• Flow of risk, fears and claims
• Test what matters
• When it matters
23. #5 Proactive instrumentation exposes information that matters
• Judged on finding important problems
• Tooling helps
• Humans interpret
Monitoring in the time of Cloud Native, Cindy Sridharan, https://medium.com/@copyconstruct/monitoring-in-the-time-of-cloud-native-c87c7a5bfa3e
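One low-cost form of proactive instrumentation is structured logging: tooling can filter the events, and humans interpret them. A hedged sketch; the logger name, event names and field names are invented for illustration.

```python
# Sketch: emit machine-parseable log lines with enough context for a
# human to interpret later, rather than free-text noise.
import json
import logging

logger = logging.getLogger("payments")

def log_event(event: str, **context) -> str:
    """Emit one structured line; return it so tests can inspect it."""
    line = json.dumps({"event": event, **context}, sort_keys=True)
    logger.info(line)
    return line

line = log_event("charge_declined",
                 order_id="ord-123", reason="insufficient_funds")
```

Because each line is JSON, an operations tool (or a tester) can query "all charge_declined events for order ord-123" instead of grepping prose.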
24. #6 Testing is most effective when state can be controlled
• No control = ineffective testing
• Managing state
• Time to start testing
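One way to make "managing state" concrete is pinning a dependency's response so a scenario is repeatable, here with Python's `unittest.mock`. The pricing client, its `price_of` method and the prices are invented for illustration.

```python
# Sketch: controlling a dependency's state so a test scenario is repeatable.
from unittest.mock import Mock

def checkout_total(pricing_client, items):
    """System under test: totals prices fetched from a remote service."""
    return sum(pricing_client.price_of(item) for item in items)

# Without control, price_of would hit a live service and vary run to run.
# With a mock, the scenario is exactly the one we intend to exercise.
pricing = Mock()
pricing.price_of.side_effect = lambda item: {"tea": 2, "cake": 3}[item]

total = checkout_total(pricing, ["tea", "cake"])
```

The same idea scales up to stub services and seeded test data: the value is not the tool, it is knowing precisely which state your test exercised.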
25. #7 Good operability practices are synonymous with testability
• Operability vs Features
• To operate is to test
Runbook Collaboration Template, Matthew Skelton, http://runbookcollab.info
26. #8 Enhancing adjacent testability improves the wider system
• Winning again?
• Adjacent systems constrain us
• Systems thinking
27. What can YOU do for YOURSELF?
• Learn source control deeply
• Get into operability
• Be able to draw your architecture
• Building blocks of technology
• Write some code
• Talk about what people care about
28. What can YOU do for your team?
Testing in Production, the safe way, Cindy Sridharan, https://medium.com/@copyconstruct/testing-in-production-the-safe-way-18ca102d0ef1
29. It almost always starts with a leap
Show your team how you test; you may need to be vulnerable
35. I believe in testers as positive change agents for whole team testing cultures.
I believe in testability to enact that change.
I believe in all of you.
36. Thank you for your attention.
https://github.com/northern-tester/testability-engineering
Editor's Notes
I'm not sure testing is really working out that well. We always seem to be in the middle of existential angst. Agile, DevOps, what shall we fail to embrace next? Let's change our footing.
Let's add models here: Quadrants, THSM, Pyramid, etc. Then a big cross.
If it's hard to test, your strategy is for shit. Sorry.
Some people say this, but it is slightly less vague than "quality is everyone's responsibility". Describes the what a bit better and who might be doing it. Still hard to aim at while building hard-to-test systems.
There is no bottleneck in the amount of testing; the real bottleneck is poor testability. Or to be more specific, the ability of the team to test and the testability of the system in question.
Ah ha! Now we are getting to it. We've heard this one, right? It's a config change. A no-op release to pick up a new setting. Don't you worry about it. 'Just' is one of my favourite words in testing. This actually means: with all the forms of testing we currently do and I know about, we can't imagine how to test this. Ability to test and testability meet head on. Guess what happens next.
I always enjoy this one. There is ALWAYS enough time to do any amount of testing. However, a better phrasing is that in the time we have, we need to do the most effective testing for the risk at hand. Testability gives us this.
We've all worked on hard-to-test systems. If you think you haven't, then you are or were in such denial that you were a captive reliant on their captor. Stockholm syndrome. So, let's drop some truth bombs on our candy asses.
Simply won’t get tested - here’s a secret for you. IF IT HASN’T BEEN TESTED IT DOESN’T WORK, new stuff rarely does.
You the tester will be burdened with it. Volunteers will be hard to find.
Automation will target the areas that don’t break, or will just cover new stuff, rather than old fragile areas.
Even if you can do performance and load testing, it will be brittle, late and on inappropriate environments. Probably mislead you more than lead you. One of the key indicators of poor testability is lack of diversity within your testing.
IMPORTANT - your poor ops people. Sysadmins, DBAs and app support will be driven mad by your application. Hard to test means hard to operate in live, where it matters.
Finally - your team will be divided by this system that you all hate, right down the lines of role. Unchecked, this will persist FOREVER.
Reverse inspiration
Core concepts
Testability Engineering
Its about YOU
2000 bugs in 2 years
Communicated through tickets
Long test cycles against builds on long-lived environments
Mastered weirdly named tooling “Quality Centre”
Left with a weird feeling - we did tons of testing, but we never got any faster, no one got what they wanted…
I thought testing was raising bugs, communicating via tickets
Superset as in: other ilities are contained within it.
Which makes it ethereal at times, which is part of its problem: it is hard to describe, but makes the world better.
For me this is true of a lot of aspects of testing, where we co-opt other technologies to enhance our testing. One of the things that makes testability so intuitive as a direction for the craft of testing.
But also makes it important, it is telling us to focus on the whole system, rather than making local optimisations.
Let’s make this a bit more real, by talking about 4 core ilities of testability. In no particular order though.
Observability allows us to understand the system as it actually is - we can explore and ask questions of the system
Observability determines what problems we can detect and how we evaluate if they are problems
Observability tools and techniques are the lens to view and filter that information.
Tracing through a micro service architecture is a great example of this. Seeing the whole transaction throughout a set of dependent services. Great for seeing effects and side effects of a behaviour.
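The tracing idea above can be sketched with plain functions standing in for services: one trace ID is generated at the edge and propagated through every hop, so the whole transaction can be reassembled from the logs. In a real system the ID would ride on HTTP headers (for example via OpenTelemetry); the service names and payloads here are invented.

```python
# Sketch: propagating a single trace ID through a chain of "services".
import uuid

TRACE_LOG: list[tuple[str, str]] = []  # (trace_id, service_name) pairs

def traced(service_name, handler):
    """Wrap a handler so every call records the trace ID it ran under."""
    def call(payload, trace_id=None):
        trace_id = trace_id or str(uuid.uuid4())  # minted at the edge
        TRACE_LOG.append((trace_id, service_name))
        return handler(payload, trace_id)
    return call

billing = traced("billing", lambda payload, tid: f"charged:{payload}")
orders = traced("orders", lambda payload, tid: billing(payload, trace_id=tid))

result = orders("ord-1")
trace_ids = {tid for tid, _ in TRACE_LOG}
```

Filtering the log by one trace ID reconstructs the order's whole journey, including its side effects in downstream services.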
Controllability determines the depth and breadth of our testing efforts - how deep you can go while still knowing what breadth you have covered. Without this you can go down the rabbit hole and miss the bigger picture. Without control, testing is pushed later and later.
Controllability determines what scenarios we can exercise - whether it be setting test data to the right state or ensuring a dependency returns a specific response.
The ability to isolate components from one another - to truly know the effect that being connected (or not connected) will have on the component you are testing - more importantly, knowing that you can develop and test wherever you want and isolate problematic areas.
Being able to isolate problems easily speeds the development effort, moving from guessing where problems are, to isolating components and interactions, to chasing problems to their origin. https://martinfowler.com/bliki/CircuitBreaker.html
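For reference, a stripped-down circuit breaker in the spirit of the Fowler article linked above. The threshold, names and error types are illustrative only; real implementations add timeouts and a half-open state for recovery.

```python
# Minimal circuit-breaker sketch: after enough consecutive failures,
# stop calling the flaky dependency and fail fast instead.
class CircuitBreaker:
    def __init__(self, call, failure_threshold=3):
        self.call = call
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.failure_threshold

    def __call__(self, *args):
        if self.open:
            # Isolate the problematic dependency instead of hammering it.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = self.call(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the count
        return result
```

For testability, the breaker also gives you a seam: you can force the circuit open to exercise how the rest of the system behaves when a dependency is gone.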
The more complex the system, the harder it is to test. Sounds intuitive, right? The harder it is to reason about a system - how many technology types, transport mechanisms, inputs, outputs and dependencies it has - the more problems can occur. Lots and lots of problems means lots of time spent testing, clarifying, checking, asking, exploring, re-exploring. You get the picture.
Let's address the question of how we make testing more meaningful for all team members. As usual, we need some principles to work by; without those we are attempting to change the world from no platform. These should also be inclusive of roles, whether your focus is humans, technology or a mixture of both.
A silent system is of limited use to anyone. We should be aware that for someone, somewhere, on some device your system is a pile of unusable rubbish that meets none of their needs. You just sleep well as you don’t know about it.
Testability enables diverse testing activities: tests of differing speeds, testing when it matters most and giving actionable information. Limited forms of testing give you limited feedback.
Risk accumulates with slow feedback right?
Everyone needs to collaborate more. But that needs a basis, especially across roles and disciplines. Testability is a joint endeavour. If you have been forged in a culture where devs do dev and testers do testing, then it isn't too surprising when collaboration doesn't happen naturally.
How can testability assist with that basis? Imagine you have a problematic dependency and two teams argue over it. Your service is always down! No, your integration is poor! This conversation, enriched with feedback from both systems, is a much better one.
The same goes within the team. Better testability == better feedback == better relationships between roles. We are providing critique on work; even done well, it is a balance we need to strike.
This is big. I’m going to make a statement here:
Testable architectures are a leading indicator of quality. If an architecture is overly complex, has components that your team doesn’t control, you are heading for a fall.
The flow of work through your team is dependent on architecture too. Supposed bottlenecks in testing are often bottlenecks in architecture. Have a single team of DBAs/sysadmins responsible for your database/hosts? You will have a bottleneck that manifests in testing.
Most systems have grown up over time, therefore serious diagnosis of the smells of hard-to-test architecture is needed. They don't describe themselves as such.
Testability means that you are testing what matters most, when it matters most. The key here is that who is involved in that statement can really help with the what and the when.
In order to truly engage with the risks and rewards feared and valued by organisations, relationships between stakeholders are needed. Sounds obvious, right? Often the world isn't shaped like this though. It wouldn't be a talk about testing without something about the dark art of communication.
To improve a system, you must learn its interactions; organisational relationships assist the flow of information between people and teams.
We must inherently accept that we cannot find all the problems, right? Observability gives us the power to give ourselves the best chance of finding the problems that matter.
At a previous organisation, we implemented New Relic, including its mobile front end agent to allow us to see client-side errors. The avalanche of JavaScript errors was something to behold.
Note the use of the word "helps": observability tools can see for us, but not interpret. The most common JavaScript error was mysterious. A stack trace is only a stack trace. The error was actually caused by losing network signal, which caused a much deeper error, but the manifested error did not show that.
An explorer was needed. But combined with the tooling, the real cause was found.
Have you ever got the feeling you aren’t really in control of the system under test? You exercise it in one way and strange things happen elsewhere. When you talk about your testing without being in control of your state you are providing bad information.
We understand that when your system state is not controllable, the side effects of testing cannot be observed, which compromises the effectiveness of your testing. For testing to give the most relevant and timely information, controllability is required.
Use the same tooling and mechanisms, where sensible, for test environments and production.
If your system is hard to operate then it stands to reason that it will be hard to test. To be honest, it's often where the problems really lie, as opposed to testing features over and over.
I often hear this when asked to perform performance and load testing. Often systems are not operable enough to be tested in this way. They lack logging, monitoring and tracing, and run on unpatched servers and dodgy test environments. To operate is to test.
Solidarity with our friends in operations will go a long way. They have suffered.
Your own testability is constrained by your dependencies. It has ever been so. I worked on the most beautiful API: acceptance test automation of my dreams, tracing, feature toggles, disposable environments. Apart from a steaming shit of a dependency where we had to raise a Jira to add test data, which would be arbitrarily deleted whenever this internal team fancied a change. Constant build failures. Ended up mocking it, then it was FUBAR in live. Bleh.
Source control – leading indicator of quality
Architecture
FFS learn some code – it's one of your team's primary artefacts. How can your team get interested in testing if you aren't interested in THE PRIMARY FUCKING COLLABORATION ARTEFACT? How much is up to you. Build a basic version of your app.
Talk about exploration/automation/perf/security/accessibility as though your team should care about it
Talk external to your team about things stakeholders care about. Ops people want not to be woken up, Product people want to validate changes.
STOP FUCKING TESTING EVERYTHING – hold your nerve. Leave it in the column for a day or two. See who blinks.
Show them the world of testing that is possible.
The more testable the system, the more ways become open.
Everything starts with a leap from someone, it may as well be you. The simplest starting point is to show someone who may be able to help, how you test.
A person has the ability to observe; a system has the characteristic of observability.
Being an information provider is usually a position from which we defend ourselves against the largesse of organisations. How we switch footing from defending to reaching out is the challenge; we need a new
You know who else was a victim of this? Our friends in Ops.
I believe in testing as a force for focusing on value and risk. For me, this includes less fucking around with features and focusing where it really matters.
I believe that testability can enable our teams to share this focus.
I believe that it starts with us. Right here, today. All of us. You can do it.
You will find the principles of testability engineering here