Testability is a first-class concern for every discipline within software development. There, I said it. No hedging, no nebulous phrasing, no maybes or 'it depends'.
Too often we labour under systems that are hard to test, manifesting in frantic searches for more testers, lengthy acceptance test runs, and fearful regression testing with a hopeful release at the end. Worst of all, it usually ends with a project manager sat on the tester's desk asking 'when will testing be done?' It's never done, it can only stop, just so you know.
Throughout my career, the testability of a system has often been deemed the tester's concern. If something was hard to test, it was the tester's problem. However, the causes of low testability affect the activities of all disciplines, whether that is the speed of feedback to developers or the flow of value-generating features for product managers.
During the talk, we will cover:
* How testability is a key advantage in building systems of ever-increasing complexity.
* Why it's important for developers and operational stakeholders to build inherently testable systems.
* What testers can do to be catalysts for testability improvements.
The activity of testing is rarely the bottleneck: the real problem is how testable your system is. Poor testability cannot be remedied by one discipline alone. It's for all of us to care about.
Architectural Testability Workshop for Test Academy Barcelona - Ash Winter
Workshop delivered at Test Academy Barcelona on 30th January 2020. Including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
Testability can make our testing lives so much better. But we need to sell it to those who can pay for the changes needed. Find out what they need (delivery, flow, stability, resilience) and how it can be measured, then use the handy examples below!
The document discusses the importance of testability in software development. It argues that focusing on testability, rather than just features, can lead to important benefits like reduced time to start testing, lower unplanned downtimes, and increased ability to observe and understand the entire system. The document advocates for approaches like enabling faster branching to devices for testing, more automation, and greater collaboration between development and operations teams to improve testability.
This is a call to action. On our cross-functional teams and during our DevOps transformations we talk about how testing is for the whole team: quality is everyone's responsibility. But how much are we really doing to make this happen? Often we are working on systems that are hard to test for many reasons, but if we simply do more testing and write more automation, we are neglecting what should be our main mission: advocating for increasing levels of testability, to truly get everyone involved in testing. We all have stories about how something was difficult to test, often never tested at all, or certainly left to the tester to figure out. It doesn't have to be this way.
During my talk, I want to introduce a set of principles for testability engineering: a new way to approach our work as testers. These principles will tackle how we make our systems more observable and controllable, how we share knowledge across teams, and how we improve the testability of our dependencies. I believe it is time to create a new focus on testability, as it affects everything we do, what our teams do, and beyond into how value is delivered for customers.
I want you to take away from the talk:
* Why a focus on testability can multiply your effectiveness as a tester
* What the principles of testability engineering are and how to advocate for them
* How you can make iterative changes to what you do in order to embrace testability
New technology and complexity are rendering many software development techniques and paradigms obsolete at an increasing rate. We already exist in a space where an infinite number of tests of an array of different types could be performed. A new mission is needed, one that leverages the varied talents of all kinds of testers and culminates in a new focus on the exponential benefits that testability brings.
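As a concrete illustration of controllability, one of the testability principles mentioned above, here is a minimal sketch of my own (not from the talk): a hypothetical report generator that takes its clock as an injected dependency, so a test can pin "now" to a fixed value and the output becomes fully observable and repeatable.

```python
from datetime import datetime, timezone

# Controllability: the clock is injected rather than read globally, so a
# test can control "now". Observability: the output is then predictable.
def build_report_header(title, clock=datetime.now):
    timestamp = clock(timezone.utc).isoformat()
    return f"{title} (generated {timestamp})"

# A fixed fake clock for tests; the date is arbitrary.
def fixed_clock(tz):
    return datetime(2020, 1, 30, 12, 0, 0, tzinfo=tz)

# With the fake clock injected, the output is exactly reproducible.
header = build_report_header("Daily summary", clock=fixed_clock)
```

The seam is one keyword argument, yet it turns an untestable time-dependent function into one any discipline can check.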
#ATAGTR2021 Presentation: "Chaos engineering: Break it to make it" by Anupam Agarwal - Agile Testing Alliance
Interactive session on "Chaos engineering: Break it to make it" by Anupam Agarwal (Nagarro) and Peeyush Girdhar (Cloud/DevOps, Nagarro) at #ATAGTR2021.
#ATAGTR2021 was the 6th Edition of Global Testing Retreat.
The video recording of the session is now available on the following link: https://www.youtube.com/watch?v=4bM4f8xNp2A
To know more about #ATAGTR2021, please visit: https://gtr.agiletestingalliance.org/
This document summarizes a case study of developing the Entaggle.com website using agile practices like test-driven development and continuous integration/deployment. Key aspects included frequent code check-ins and automated testing at the unit and acceptance levels. The development process focused on removing friction through actions like regular refactoring and making deployments to staging and production easy. This approach allowed over 90 user stories to be completed with few bugs. Lessons learned reinforced test-driven development and the importance of splitting stories into pieces that could be completed in 2 days or less.
From monitoring to automated testing - Jesse Reynolds, Puppet
Don't have time to write automated tests for your infrastructure code? Don't see the point? Or don't know where to start? This talk is for you.
Now that we're writing code to manage our infrastructure with tools like Puppet, we are effectively developing software. One of the wonderful aspects of this is that we have the world of software development quality practices to draw on in order to achieve a high rate of change without compromising on reliability. Writing tests for infrastructure code, and having them execute automatically as part of a continuous integration pipeline, is a key element of this and is the focus of this talk.
But how do you get started on this? What are some tools to help? How should we think about this problem? This talk will provide an overview of the different types of tests that can be written, from small unit tests to integration and acceptance testing. It will focus on integration testing where existing monitoring checks can provide a useful starting point.
After this talk attendees will:
* better understand the value of automated tests for infrastructure code
* be motivated to write tests and implement CI pipelines to execute these tests automatically
* know how to get started with some suggested tooling
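The monitoring-to-testing idea above can be sketched without assuming anything about Puppet's own tooling: a monitoring-style "is the service port open?" check wrapped as a reusable test helper. The host and port would come from whatever your infrastructure code provisions.

```python
import socket

# Monitoring-style check reused as an integration test: can we complete a
# TCP connection to the service port within a timeout?
def port_is_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, unreachable hosts, and timeouts.
        return False
```

In a CI pipeline this helper would run against the node the infrastructure code just configured, e.g. `assert port_is_open(web_host, 443)`.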
This is a testing trends post, where you'll find all testing-related posts and the latest interview posts. Click the link below to visit my site, where you'll find all manual and automation testing posts.
DevOps Security Coffee - Lazy hackers who think out of the box, but stay in t... - Freek Kauffmann
How to create a constructive force field between DevOps engineers and hackers?
NOTE: Slide 4 ('Vision on IT Security') has been altered in hindsight.
For questions, please contact me directly: +316 457 61 857
1. The document discusses quality processes at Google, including that testing involves many developers and one tester.
2. It describes different types of tests - unit, integration, and system - and how they are used to validate code quality and product quality at different stages.
3. Key aspects of Google's quality process include classifying tests by size, enforcing time limits, ensuring tests are independent and have no side effects, and using test results to guide continuous integration.
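The "classify tests by size and enforce time limits" idea can be sketched as a decorator that fails any test exceeding the budget for its size class. The size names and budgets below are illustrative placeholders, not Google's actual values.

```python
import functools
import time

# Illustrative size classes with a wall-clock budget (seconds) for each.
SIZE_LIMITS = {"small": 1.0, "medium": 10.0, "large": 60.0}

def sized(size):
    """Mark a test with a size class and fail it if it overruns the budget."""
    limit = SIZE_LIMITS[size]
    def decorate(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = test_fn(*args, **kwargs)
            elapsed = time.monotonic() - start
            if elapsed > limit:
                raise AssertionError(
                    f"{test_fn.__name__} took {elapsed:.2f}s, over the "
                    f"{limit}s budget for a {size} test")
            return result
        return wrapper
    return decorate

@sized("small")
def test_addition():
    assert 1 + 1 == 2
```

Enforcing the budget automatically keeps "small" tests honest, so the fast suite stays fast enough to run on every commit.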
Cross functional peer review preso 10-01-2013 - SmartBear
This document discusses how cross-functional peer review of requirements, design, code, and test plans within agile teams can improve software quality. It presents findings from a case study showing that reviewing 200-500 lines of code per hour while reviewing no more than 400 lines at a time yields the lowest defect density. Industry metrics also show that high-performing teams find fewer defects post-release by extensively testing in development. Benefits of cross-functional peer review include improved communication, faster problem identification, more readable code, and increased sharing of best practices across roles within development teams.
With Agile adoption, many things have changed in quality assurance and the tester's role. Nowadays the whole team is responsible for product quality. But not so many people understand how such high-level approaches work in practice: how the developer interacts with the tester, what stages each task passes through on the way from requirements specification to customer acceptance, and who is doing what at each stage.
I have met only a few teams where developer and tester work closely together on a daily basis. Some projects try to save money on developers' time; others try to have an independent testing team free from influence from the developers' side. Developers also don't understand how a tester could help them in practice. But this pair is able to significantly improve product quality and avoid many common issues.
In this talk we will cover the motivation behind pair work of developer and tester, concrete practices and approaches at different stages, and the advantages both sides can gain from such a work style.
The document outlines an upcoming programming workshop that will cover various JetBrains IDEs like PyCharm, IntelliJ IDEA, and PhpStorm. It then discusses Test Driven Development (TDD), including what TDD is, the development cycle used in TDD, and benefits like encouraging simple designs and confidence. Different types of software tests are also listed like unit tests, integration tests, acceptance tests, and others. Specific testing techniques like unit testing, integration testing using bottom-up and top-down approaches, and acceptance testing are then explained at a high level. Finally, some important notes on testing like trusting tests and prioritizing maintainability are provided.
Test-Driven Development (TDD) is a software development process that relies on writing automated tests before developing code to pass those tests ("red-green-refactor"). TDD promotes writing isolated, repeatable unit tests and decoupling code from external dependencies through techniques like dependency injection and mocking. While TDD has benefits like ensuring quality and preventing regressions, it has also received criticism for potentially leading to over-testing and an over-reliance on mocking. The presentation concludes with an open discussion of experiences and opinions on TDD.
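The dependency-injection-plus-mocking technique mentioned above can be sketched as follows; the `CheckoutService` and its payment gateway are hypothetical examples, not from the presentation.

```python
from unittest import mock

# The gateway is injected rather than constructed inside the class, so a
# unit test can substitute a mock for the real external dependency.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

# Test-first style: the mock stands in for the real payment gateway.
def test_charge_delegates_to_gateway():
    gateway = mock.Mock()
    gateway.charge.return_value = "receipt-1"
    service = CheckoutService(gateway)
    assert service.charge(100) == "receipt-1"
    gateway.charge.assert_called_once_with(100)
```

This isolation is the benefit the critics also target: over-mocked tests can pass while the real gateway integration is broken, which is why mocking is best reserved for genuinely external dependencies.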
On Rapid Releases and Software Testing - Foutse Khomh
This document discusses the impacts of moving to a rapid release cycle from a traditional release cycle based on data from Firefox releases and an interview with a Mozilla QA engineer. Some key findings are that rapid releases correlate with fewer unique test cases executed, testers involved, and operating systems and locales tested compared to traditional releases. However, test executions increased when controlling for other factors like project evolution. To cope, rapid release cycles require narrowing the scope of testing to focus resources on high risk areas rather than full legacy support.
The Systems Thinker, Developer, Efficient, Tester (SDET) role is not limited to unit testing and test automation; it is a mindset for approaching testing in an agile environment. Testing is contextual, and the methods and tools we adopt to perform testing are there to add value to the product used by consumers or the enterprise.
What is the difference between manual testing and automation testing - Er Mahednra Chauhan
Manual testing involves human testers executing test cases, while automation testing uses automation tools to run test cases. Manual testing is time-consuming and relies on human resources, whereas automated testing is significantly faster. While manual testing requires investment in human resources, automation testing requires investment in testing tools and automation engineers who have programming knowledge.
Unit Testing, TDD and the Walking Skeleton - Seb Rose
The document discusses unit testing, test-driven development (TDD), and the walking skeleton approach. It provides an overview of these software development practices, including writing automated tests before code, using the tests to drive code development, and starting projects with an initial architecture or "walking skeleton" that is automatically testable, buildable, and deployable. The document aims to dispel common myths about testing and convince readers of the value of these practices.
We are entering a world where everything must be done quicker. You must deliver code faster. You must deploy faster. How can you deliver and deploy faster without compromising your professionalism? How can you be sure you are delivering what your client has asked you?
In short, testing is the only way to be sure you're delivering what someone asked you to. Often we use BDD tools such as FitNesse, which have gained popularity over recent years.
There are a number of integration/BDD test tools out there that help you deliver high-quality software through tests. It's easy to pick up any tool from just its tutorials and start writing tests. But as I found out the hard way, this can quickly spiral into a state where the tests are giving you and your team hell and cost more than the value they deliver.
Using FitNesse and JUnit as examples, I will share things I have learnt working on large enterprise and vendor systems, and help you avoid your own path to hell.
Most frequently we use the words "testing" and "tester" when we talk about product quality. But does testing, or the tester role, actually affect quality? The eternal struggle between QC and QA… Yes, I'm almost sure you understand this, but why does nothing change in most teams? Because we need a mind shift in our heads and more global changes in QA processes. Who are QA engineers, and what are their responsibilities, activities, and duties in the modern development world? What options do they have to affect and improve product quality if developers are responsible for product development? In this talk I will try to find detailed practical answers to all these questions. Let's change the development world together!
The document discusses challenges faced by companies with both in-house and outsourced software testing. It introduces predictive analytics as a solution to address common challenges like managing multiple releases and tools, measuring productivity, and generating customized reports. Predictive analytics uses models to analyze test data and predict issues, risks, delays and determine how to optimize testing. Integrating predictive analytics into a testing framework can help reduce costs, improve quality and make better decisions.
Gamification in outsourcing company: experience report - Mikalai Alimenkou
Most of us have heard the word gamification only in the context of engaging end users in product usage. Some of us know about the use of similar approaches in product development teams to improve and tune the development process. But almost nobody believes that gamification is possible in the context of outsourcing companies and teams. This talk is an experience report on the use of gamification on a very large project, with a detailed, reusable framework demonstration. If you want to bring some fun and really engage your team, then this talk is for you.
Writing acceptable patches: an empirical study of open source project patches - Yida Tao
This document analyzes reasons why open source software project patches are rejected. The researchers extracted 300 rejected patches from the Eclipse and Mozilla projects and identified 12 common rejection reasons through manual inspection. They then surveyed 246 developers about which reasons were most decisive and most difficult to judge. Key findings were that compilation errors and test failures were most decisive, while inconsistent documentation was highly unacceptable. Patch reviewers tended to have stricter criteria than writers. Overall, 30 rejection reasons across 5 categories were identified to help developers write more acceptable patches.
Shift-Left Testing: QA in a DevOps World by David Laulusa - QA or the Highway
Shift-left testing involves injecting quality earlier in the software development process through techniques like unit testing, test-driven development, and regression testing. The presentation discusses principles for effective testing including equivalence partitioning, boundary value analysis, and combinatorial testing. It emphasizes the importance of collaboration between testers and developers through practices like dependency injection and test automation.
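Equivalence partitioning and boundary value analysis can be sketched with a hypothetical discount rule (orders of 100 or more get 10% off): pick one representative per partition, plus the values either side of the boundary, where off-by-one mistakes tend to live.

```python
# Hypothetical rule under test: orders of 100 or more get 10% off.
def discounted_total(total):
    if total < 0:
        raise ValueError("total cannot be negative")
    return round(total * 0.9, 2) if total >= 100 else total

# Boundary value analysis: exercise just below, on, and just above the
# boundary between the two equivalence partitions.
def test_discount_boundaries():
    assert discounted_total(99) == 99     # just below: no discount
    assert discounted_total(100) == 90.0  # on the boundary: discounted
    assert discounted_total(101) == 90.9  # just above: discounted
```

Four well-chosen values cover what exhaustive testing of every total never could.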
The document discusses Test-Driven Development (TDD). Some key points:
- TDD involves writing automated tests before writing code to ensure tests fail initially and then passing the tests by writing just enough code. This prevents writing extra code and helps design code structure.
- TDD provides benefits like confidence in code quality, catching errors early, and giving feedback on changes. Unit tests should initially fail, should test individual components, and are not meant for finding bugs, which is done through manual testing.
- The basic TDD process is to create a failing test, write just enough code to pass the test, refactor code, and repeat the process for each new feature or change. This helps integrate TDD into
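The cycle in the bullets above can be sketched against a made-up labelling requirement (the rule and function names are illustrative):

```python
# Red: write a failing test first, for a hypothetical rule that multiples
# of three are labelled "fizz" and everything else is echoed back.
def test_label():
    assert label(3) == "fizz"
    assert label(4) == "4"

# Green: write just enough code to make the test pass, and no more.
def label(n):
    return "fizz" if n % 3 == 0 else str(n)

# Refactor: with the passing test as a safety net, clean the code up,
# then repeat the cycle for the next requirement.
test_label()
```

Each trip around the loop adds one requirement and one guarding test, which is how the design emerges incrementally.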
DevOpsDays Houston 2019 - Kevin Crawley - Practical Guide to Not Building Anot... - DevOpsDays Houston
I'll discuss my experience of approaching DevOps not as another siloed effort, but as a discipline: embedding engineers within cross-functional teams who are dedicated to continuously improving the quality of automation across the entire SDLC.
The document argues that unit test scopes should be larger rather than having many small unit test classes. Larger unit tests are better because they are less tightly coupled to code structure, can be more easily mapped to business requirements, and test interactions between classes rather than just individual classes. The document provides suggestions for how to structure unit tests at a bigger scope, such as using dependency injection frameworks, test data builders, and splitting tests across multiple classes when there are too many assertions. The overall message is that test scopes should be large enough to cover layers, external libraries, and controllers when possible rather than focusing exclusively on small individual classes.
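The "test data builder" suggestion can be sketched as follows; the `Order` shape and its defaults are illustrative, not from the document.

```python
from dataclasses import dataclass

# Illustrative domain object; in a bigger-scope test it might cross layers.
@dataclass
class Order:
    customer: str
    items: list
    country: str

# Builder with sensible defaults, so each test states only the details
# it actually cares about instead of constructing full objects by hand.
class OrderBuilder:
    def __init__(self):
        self._customer = "any-customer"
        self._items = ["widget"]
        self._country = "GB"

    def for_customer(self, name):
        self._customer = name
        return self  # fluent interface: calls can be chained

    def with_items(self, *items):
        self._items = list(items)
        return self

    def in_country(self, code):
        self._country = code
        return self

    def build(self):
        return Order(self._customer, list(self._items), self._country)
```

A test of, say, country-specific tax handling then reads `OrderBuilder().in_country("DE").build()` and nothing else, which keeps larger-scope tests mapped to the business requirement rather than to object construction details.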
Using MLOps to Bring ML to Production / The Promise of MLOps - Weaveworks
In this final Weave Online User Group of 2019, David Aronchick asks: have you ever struggled with having different environments to build, train and serve ML models, and how to orchestrate between them? While DevOps and GitOps have made huge traction in recent years, many customers struggle to apply these practices to ML workloads. This talk will focus on the ways MLOps has helped to effectively infuse AI into production-grade applications through establishing practices around model reproducibility, validation, versioning/tracking, and safe/compliant deployment. We will also talk about the direction for MLOps as an industry, and how we can use it to move faster, with more stability, than ever before.
The recording of this session is on our YouTube Channel here: https://youtu.be/twsxcwgB0ZQ
Speaker: David Aronchick, Head of Open Source ML Strategy, Microsoft
Bio: David leads Open Source Machine Learning Strategy at Azure. This means he spends most of his time helping humans to convince machines to be smarter. He is only moderately successful at this. Previously, David led product management for Kubernetes at Google, launched GKE, and co-founded the Kubeflow project. David has also worked at Microsoft, Amazon and Chef and co-founded three startups.
Sign up for a free Machine Learning Ops Workshop: http://bit.ly/MLOps_Workshop_List
Weaveworks will cover concepts such as GitOps (operations by pull request), Progressive Delivery (canary, A/B, blue-green), and how to apply those approaches to your machine learning operations to mitigate risk.
DevOps Security Coffee - Lazy hackers who think out of the box, but stay in t...Freek Kauffmann
How to create a constructive force field between DevOps engineers and hackers?
NOTE: Slide 4 ('Vision on IT Security') has been altered in hindsight.
For questions, please contact me directly: +316 457 61 857
1. The document discusses quality processes at Google, including that testing involves many developers and one tester.
2. It describes different types of tests - unit, integration, and system - and how they are used to validate code quality and product quality at different stages.
3. Key aspects of Google's quality process include classifying tests by size, enforcing time limits, ensuring tests are independent and have no side effects, and using test results to guide continuous integration.
Cross functional peer review preso 10-01-2013SmartBear
This document discusses how cross-functional peer review of requirements, design, code, and test plans within agile teams can improve software quality. It presents findings from a case study showing that reviewing 200-500 lines of code per hour while reviewing no more than 400 lines at a time yields the lowest defect density. Industry metrics also show that high-performing teams find fewer defects post-release by extensively testing in development. Benefits of cross-functional peer review include improved communication, faster problem identification, more readable code, and increased sharing of best practices across roles within development teams.
With Agile adoption many things have changed in quality assurance and tester role. Ourdays the whole team is responsible for product quality. But not so many people understand how such high level approaches work in practice, how developer interacts with tester, what stages each task passes on the way from requirements specification to customer acceptance, who is doing what at each stage.
I have met only few teams, where developer and tester work closely together on a daily basis. Some projects try to same money on developer's time, others try to have independent testing team without influence from developers side. Developers also don't understad how tester could help them in practice. But this pair is able to significantly improve product quality and avoid many common issues.
In this talk we will cover motivation behind pair work of develoeper and tester, concrete practices and approaches at different stages, and advantages that both sides could achieve from such work style.
The document outlines an upcoming programming workshop that will cover various JetBrains IDEs like PyCharm, IntelliJ IDEA, and PhpStorm. It then discusses Test Driven Development (TDD), including what TDD is, the development cycle used in TDD, and benefits like encouraging simple designs and confidence. Different types of software tests are also listed like unit tests, integration tests, acceptance tests, and others. Specific testing techniques like unit testing, integration testing using bottom-up and top-down approaches, and acceptance testing are then explained at a high level. Finally, some important notes on testing like trusting tests and prioritizing maintainability are provided.
Test-Driven Development (TDD) is a software development process that relies on writing automated tests before developing code to pass those tests ("red-green-refactor"). TDD promotes writing isolated, repeatable unit tests and decoupling code from external dependencies through techniques like dependency injection and mocking. While TDD has benefits like ensuring quality and preventing regressions, it has also received criticism for potentially leading to over-testing and an over-reliance on mocking. The presentation concludes with an open discussion of experiences and opinions on TDD.
On Rapid Releases and Software TestingFoutse Khomh
This document discusses the impacts of moving to a rapid release cycle from a traditional release cycle based on data from Firefox releases and an interview with a Mozilla QA engineer. Some key findings are that rapid releases correlate with fewer unique test cases executed, testers involved, and operating systems and locales tested compared to traditional releases. However, test executions increased when controlling for other factors like project evolution. To cope, rapid release cycles require narrowing the scope of testing to focus resources on high risk areas rather than full legacy support.
Systems Thinker, Developer, Efficient, Tester (SDET) role is not limited to unit testing & test automation, it is a mindset to approach testing in an agile environment. Testing is contextual and methods & tools we adopt to perform testing is to add value to the product used by the consumers or the enterprise.
What is the difference between manual testing and automation testingEr Mahednra Chauhan
Manual testing involves human testers executing test cases, while automation testing uses automation tools to run test cases. Manual testing is time-consuming and relies on human resources, whereas automated testing is significantly faster. While manual testing requires investment in human resources, automation testing requires investment in testing tools and automation engineers who have programming knowledge.
Unit Testing, TDD and the Walking SkeletonSeb Rose
The document discusses unit testing, test-driven development (TDD), and the walking skeleton approach. It provides an overview of these software development practices, including writing automated tests before code, using the tests to drive code development, and starting projects with an initial architecture or "walking skeleton" that is automatically testable, buildable, and deployable. The document aims to dispel common myths about testing and convince readers of the value of these practices.
We are entering a world where everything must be done quicker. You must deliver code faster. You must deploy faster. How can you deliver and deploy faster without compromising your professionalism? How can you be sure you are delivering what your client has asked you?
In short, testing is the only way to be sure you’re delivering what someone asked for. Often we use BDD tools such as FitNesse, which have gained popularity in recent years.
There are a number of integration/BDD test tools out there that help you deliver high-quality software through tests. It’s easy to pick up any tool from just its tutorials and start writing tests. But as I found out the hard way, this can quickly spiral into a state where the tests are giving you and your team hell and are costing more than the value they deliver.
Using FitNesse and Junit as examples, I will share things that I have learnt working on large enterprise and vendor systems and help you avoid your own path to hell.
Most frequently we use the words “testing” and “tester” when we talk about product quality. But does testing, or the tester role, really affect quality? The eternal struggle between QC and QA… Yes, I’m almost sure you understand this, but why has nothing changed in most teams? Because we need a mind shift in our heads and more global changes in QA processes. Who are QA engineers, and what are their responsibilities, activities and duties in the modern development world? What options do they have to affect and improve product quality if developers are responsible for product development? In this talk I will try to find detailed, practical answers to all these questions. Let’s change the development world together!
The document discusses challenges faced by companies with both in-house and outsourced software testing. It introduces predictive analytics as a solution to address common challenges like managing multiple releases and tools, measuring productivity, and generating customized reports. Predictive analytics uses models to analyze test data and predict issues, risks, delays and determine how to optimize testing. Integrating predictive analytics into a testing framework can help reduce costs, improve quality and make better decisions.
Gamification in outsourcing company: experience report (Mikalai Alimenkou)
Most of us are used to hearing the word gamification only in the context of engaging end users with a product. Some of us know about similar approaches being used in product development teams to improve and tune the development process. But almost nobody believes that gamification is possible in the context of outsourcing companies and teams. This talk is an experience report of gamification used on a very large project, with a detailed, reusable framework demonstration. If you want to bring some fun and really engage your team, then this talk is for you.
Writing acceptable patches: an empirical study of open source project patches (Yida Tao)
This document analyzes reasons why open source software project patches are rejected. The researchers extracted 300 rejected patches from Eclipse and Mozilla projects and identified 12 common rejection reasons through manual inspection. They then surveyed 246 developers about which reasons were most decisive and most difficult to judge. Key findings were that compilation errors and test failures were the most decisive, while inconsistent documentation was highly unacceptable. Patch reviewers tended to have stricter criteria than writers. Overall, 30 rejection reasons across 5 categories were identified to help developers write more acceptable patches.
Shift-Left Testing: QA in a DevOps World by David Laulusa (QA or the Highway)
Shift-left testing involves injecting quality earlier in the software development process through techniques like unit testing, test-driven development, and regression testing. The presentation discusses principles for effective testing including equivalence partitioning, boundary value analysis, and combinatorial testing. It emphasizes the importance of collaboration between testers and developers through practices like dependency injection and test automation.
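The equivalence partitioning and boundary value analysis techniques mentioned above can be illustrated with a small sketch; the `bulk_discount` rule here is a hypothetical example, not something from the presentation:

```python
# Boundary value analysis: test at, just below, and just above each boundary.
# Equivalence partitions: invalid (<0), no discount (0-99), discount (>=100).
def bulk_discount(quantity):
    """10% off for orders of 100 items or more, otherwise no discount."""
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return 0.10 if quantity >= 100 else 0.0

assert bulk_discount(0) == 0.0      # lower boundary of the valid range
assert bulk_discount(99) == 0.0     # just below the discount boundary
assert bulk_discount(100) == 0.10   # on the boundary
assert bulk_discount(101) == 0.10   # just above it
```

One representative value per partition plus the values hugging each boundary gives strong coverage from very few cases.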
The document discusses Test-Driven Development (TDD). Some key points:
- TDD involves writing automated tests before writing code, so that tests fail initially and are then made to pass by writing just enough code. This prevents writing extra code and helps drive the design of the code.
- TDD provides benefits like confidence in code quality, catching errors early, and giving fast feedback on changes. Unit tests should initially fail, should test individual components, and are not for finding bugs, which is done through manual testing.
- The basic TDD process is to create a failing test, write just enough code to pass it, refactor the code, and repeat the process for each new feature or change. This helps integrate TDD into the development workflow.
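The red-green-refactor loop described above can be sketched in a few lines; the `slugify` function is an invented example, not taken from the document:

```python
# Step 1 (red): write the test first; it fails until slugify exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean the code up with the test as a safety net,
# then repeat the cycle for the next behaviour.
test_slugify_lowercases_and_hyphenates()
```

Each new behaviour starts as a failing test, which is what keeps the production code minimal.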
DevOpsDays Houston 2019 - Kevin Crawley - Practical Guide to Not Building Anot... (DevOpsDays Houston)
I’ll discuss my experience of approaching DevOps not as another siloed effort but as a discipline: embedding engineers within cross-functional teams, dedicated to continuously improving the quality of automation across the entire SDLC.
The document argues that unit test scopes should be larger rather than having many small unit test classes. Larger unit tests are better because they are less tightly coupled to code structure, can be more easily mapped to business requirements, and test interactions between classes rather than just individual classes. The document provides suggestions for how to structure unit tests at a bigger scope, such as using dependency injection frameworks, test data builders, and splitting tests across multiple classes when there are too many assertions. The overall message is that test scopes should be large enough to cover layers, external libraries, and controllers when possible rather than focusing exclusively on small individual classes.
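A test data builder, one of the suggestions above, keeps larger-scope tests readable because each test states only the fields it cares about. The `Order`/`OrderBuilder` names below are illustrative, not from the document:

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    items: int
    express: bool

class OrderBuilder:
    """Builds Order objects with sensible defaults for tests."""
    def __init__(self):
        self._customer = "any-customer"
        self._items = 1
        self._express = False

    def with_items(self, items):
        self._items = items
        return self

    def express(self):
        self._express = True
        return self

    def build(self):
        return Order(self._customer, self._items, self._express)

# The test reads at the level of the business rule, not object wiring.
order = OrderBuilder().with_items(3).express().build()
assert order.items == 3 and order.express
```

When the `Order` constructor changes, only the builder changes, not every test that needs an order.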
Using MLOps to Bring ML to Production/The Promise of MLOps (Weaveworks)
In this final Weave Online User Group of 2019, David Aronchick asks: have you ever struggled with having different environments to build, train and serve ML models, and how to orchestrate between them? While DevOps and GitOps have made huge traction in recent years, many customers struggle to apply these practices to ML workloads. This talk will focus on the ways MLOps has helped to effectively infuse AI into production-grade applications through establishing practices around model reproducibility, validation, versioning/tracking, and safe/compliant deployment. We will also talk about the direction for MLOps as an industry, and how we can use it to move faster, with more stability, than ever before.
The recording of this session is on our YouTube Channel here: https://youtu.be/twsxcwgB0ZQ
Speaker: David Aronchick, Head of Open Source ML Strategy, Microsoft
Bio: David leads Open Source Machine Learning Strategy at Azure. This means he spends most of his time helping humans to convince machines to be smarter. He is only moderately successful at this. Previously, David led product management for Kubernetes at Google, launched GKE, and co-founded the Kubeflow project. David has also worked at Microsoft, Amazon and Chef and co-founded three startups.
Sign up for a free Machine Learning Ops Workshop: http://bit.ly/MLOps_Workshop_List
Weaveworks will cover concepts such as GitOps (operations by pull request), Progressive Delivery (canary, A/B, blue-green), and how to apply those approaches to your machine learning operations to mitigate risk.
How we integrate Machine Learning Algorithms into our IT Platform at Outfittery (OUTFITTERY)
1) Over time, a product manager, data scientist, and software engineer each contributed to building a machine learning platform at a company to improve customer experiences and business processes.
2) They created a "Smart Gateway" system to run algorithms and experiments, but it had limitations around data consistency and scalability.
3) A new IT team took over the system and enhanced it with real-time data access, production readiness using Docker and Kubernetes, and a configuration database.
4) The system evolved to become a general platform for running "pure functions" to power applications beyond just experiments, like stylist recommendations and return form processing.
Netflix has built a highly available architecture using microservices running across AWS availability zones. They induce failures through "chaos monkeys" like Chaos Monkey and Latency Monkey to test resiliency. This validated that their designs worked as intended and helped them identify issues. Netflix has now open sourced many of their cloud tools and libraries through projects like Hystrix and Eureka.
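The chaos-monkey idea can be reduced to a toy sketch: periodically pick a random healthy instance and terminate it, so resiliency is exercised continuously. This is only a sketch of the concept; a real tool like Chaos Monkey terminates actual cloud instances through provider APIs:

```python
import random

def chaos_step(instances, rng=random):
    """Terminate one randomly chosen healthy instance, if any remain.

    `instances` maps instance name -> state; termination here just
    flips the state, standing in for a real cloud API call.
    """
    healthy = [name for name, state in instances.items() if state == "healthy"]
    if not healthy:
        return None
    victim = rng.choice(healthy)
    instances[victim] = "terminated"
    return victim

fleet = {"web-1": "healthy", "web-2": "healthy", "web-3": "healthy"}
killed = chaos_step(fleet)
assert fleet[killed] == "terminated"
```

Running this continuously against a fleet validates that the surviving instances really do absorb the load, which is the point of the exercise.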
Tips for Writing Better Charters for Exploratory Testing Sessions by Michael... (TEST Huddle)
We will look at some common pitfalls encountered when chartering your testing for session-based exploratory testing. After a brief overview of the session-based test management process we will jump into specific practices and techniques to help you and the rest of your team achieve better coverage and find better bugs. A presentation for the EuroSTAR Software Testing Community from September 2012.
The document discusses system and solution testing. It provides an example of how unit tests that pass can fail during system testing. It defines system testing as testing at a product level to find bugs not discoverable through feature testing. Solution testing is defined as customer-oriented end-to-end application testing. The document outlines some key differences between feature, system, and solution testing and discusses common bugs found through system testing.
Beyond TDD: Enabling Your Team to Continuously Deliver Software (Chris Weldon)
Many project teams have adopted unit testing as a necessary step in their development process. Many more use a test-first approach to keep their code lean. Yet, far too often these teams still suffer from many of the same impediments: recurrent integration failures with other enterprise projects, slow feedback with the customer, and sluggish release cycles. With a languishing feedback loop, the enterprise continues to put increasing pressure on development teams to deliver. How does an aspiring agile team improve to meet the demands of the enterprise?
Continuous integration is the next logical step for the team. In this talk, you’ll learn how continuous integration solves intra and inter-project integration issues without manual overhead, the value added by continuous integration, and how to leverage tools and processes to further improve the quality of your code. Finally, we discuss the gold standard of agile teams: continuous deployment. You’ll learn how continuous deployment helps close the feedback loop with your customers, increases visibility for your team, and standardizes the deployment process.
Slides from my talk at QCon New York on how Netflix increases resiliency through failure, covering the Chaos Monkey, Chaos Gorilla, Latency Monkey, and others from the Simian Army.
JUC Europe 2015: How to Optimize Automated Testing with Everyone's Favorite B... (CloudBees)
By Viktor Clerc, XebiaLabs
If you are taking the quality of your software seriously, you'll have numerous automated tests across many different Jenkins jobs. But getting a grip on all of your automated tests -- and then figuring out whether your software is good enough to go live -- becomes harder and harder as you speed up software delivery. Viktor will share tips on how naming conventions, partitioning of testware and mirroring the application's structure in the test code help you best handle automated testing with Jenkins. Viktor will also provide insight into how to keep this setup manageable and will share practical experiences of managing a large portfolio of automated tests. Finally, he will showcase several practices that help you manage all your results, plus add aggregation, trend analysis and qualification capabilities to your Jenkins setup. These practices will help you draw the right conclusions from your tests and deliver code faster, with the confidence that your systems won't fail in production.
The document discusses how the entropy of Ruby codebases increases over time if changes are not limited, making future changes more difficult. It advocates for writing specs to establish confidence in code and observing trends in metrics like code coverage, complexity, and churn to catch signs of rising entropy early. Sticking to conventions but knowing when to deviate, and focusing on principles over mechanics can help limit a codebase's entropy.
DefCore: The Interoperability Standard for OpenStack (Mark Voelker)
This presentation provides an introduction to the OpenStack DefCore Committee, which is working to create interoperability standards for OpenStack Powered clouds. You'll gain insight into the interoperability challenges of OpenStack clouds, and learn how DefCore creates its Guidelines. Learn why the Technical Committee, Board of Directors, end users, and vendors have a seat at the table. You'll laugh, you'll cry, you'll immediately want to stop talking about cloud computing and go watch science fiction all night.
This talk was originally presented at the Triangle OpenStack Meetup Group's September 21, 2015 meeting in Durham, NC. A recording can be found here (this talk starts at the 46:10 mark): https://vmware.webex.com/vmware/lsr.php?RCID=a51f9e6882f54ccab8b715c8c0162484
A new revision with updates was given at a meeting of the China Open Source Cloud League on May 20, 2016 in Beijing. The slides here on Slideshare represent that presentation.
The document summarizes the OpenStack Interoperability Working Group's efforts to promote interoperability across OpenStack distributions and products. It discusses how the group develops guidelines specifying required capabilities and tests. Products must pass these tests to be considered interoperable and qualify for the OpenStack logo program. The guidelines aim to ensure a consistent user experience while allowing flexibility in implementations. The document also outlines the group's governance process and opportunities for participants to provide feedback to help improve interoperability standards over time.
Canary Analyze All The Things: How We Learned to Keep Calm and Release Often (C4Media)
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1ph8Rq1.
Roy Rapoport discusses canary analysis deployment and observability patterns he believes that are generally useful, and talks about the difference between manual and automated canary analysis. Filmed at qconnewyork.com.
Roy Rapoport manages the Insight Engineering group at Netflix, responsible for building Netflix's Operational Insight platforms, including cloud telemetry, alerting, and real-time analytics. He originally joined Netflix as part of its datacenter-based IT/Ops group, and prior to transferring over to Product Engineering, was managing Service Delivery for IT/Ops.
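Automated canary analysis, as Rapoport describes it, boils down to comparing a canary's metrics against the baseline fleet and blocking the release on deviation. A minimal sketch, with an illustrative error-rate metric and tolerance (real systems compare many metrics with statistical tests):

```python
def canary_passes(baseline_error_rates, canary_error_rate, tolerance=0.01):
    """Pass the canary only if its error rate stays within `tolerance`
    of the baseline fleet's mean error rate."""
    baseline = sum(baseline_error_rates) / len(baseline_error_rates)
    return canary_error_rate <= baseline + tolerance

# Canary close to the fleet average: allow the rollout to proceed.
assert canary_passes([0.010, 0.012, 0.011], 0.012)
# Canary erroring far above baseline: block the rollout.
assert not canary_passes([0.010, 0.012, 0.011], 0.050)
```

The automation step is what makes frequent releases calm: the judgement call "does the canary look healthy?" becomes a repeatable check.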
HFM, Workspace, and FDM – Voiding your warranty (Charles Beyer)
While these three products work well out of the box and offer plenty of functionality for end users, administrators and power users are always looking for ways to increase productivity and functionality of their tools. While some updates have introduced functionality for the administrators/power users such as LCM, there are plenty of areas that could be improved. Gathering system usage statistics, performing bulk import/export operations between development/production environments, improving data import/export, generating more useful security audit data, and improving system performance are all items that can be improved upon.
This presentation will provide viewers with a selection of real world “hacks” that they can apply to their environments. Viewers will first be presented with a low level technical discussion on how these products work and how they can leverage that knowledge. Fully working “hacks” are also attached at the end of the powerpoint.
This presentation outlines the philosophy, concepts and tools your team needs to completely test drive your products efficiently, from the front end down. It will define what unit tests and TDD are, and cover acceptance testing and ATDD with Cucumber, behavior driven development (BDD) and various test structures, mock objects, and fluent matchers.
The document discusses the role of automated testers in both waterfall and agile projects. It notes that while test automation was seen as essential in waterfall, the high costs of licenses, support, and maintenance often outweighed the benefits. In agile, test automation is more effective when developers implement and "pass" acceptance tests themselves rather than delegating testing to separate testers. Having dedicated automated testers risks silos, miscommunication, testers falling behind and an uncritical approach. The document argues developers should automate tests through BDD/ATDD to find defects early and refactor confidently. While testers can help pair and extend tests, developers don't need dedicated automated testers to write good automated tests.
Test driven infrastructure development (2 - puppetconf 2013 edition) by Tomas Doran
The document discusses test driven infrastructure development. It describes issues with the current state where infrastructure changes are not repeatable and difficult to test. The speaker proposes modeling infrastructure as code where environments are defined programmatically and configuration is generated externally rather than defined directly in puppet code. This allows for entire environments to be provisioned on demand and tested in an automated and repeatable way. Key benefits include high availability, ability to test all infrastructure changes, fully repeatable environments, high confidence in changes, and continuous integration/deployment of infrastructure.
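Generating environment definitions programmatically, rather than hand-editing them, is what makes infrastructure testable in an automated, repeatable way. A minimal sketch, with a hypothetical `generate_env` function and schema (not the Puppet-based tooling from the talk):

```python
def generate_env(name, web_nodes):
    """Produce an environment definition from parameters, not hand edits.

    Because the definition is generated, the same inputs always yield
    the same environment, so properties of it can be asserted in CI.
    """
    return {
        "name": name,
        "nodes": [{"role": "web", "id": f"web-{i}"} for i in range(web_nodes)]
                 + [{"role": "lb", "id": "lb-0"}],
    }

env = generate_env("staging", web_nodes=2)
# Repeatable environments make assertions like these meaningful.
assert len([n for n in env["nodes"] if n["role"] == "web"]) == 2
assert any(n["role"] == "lb" for n in env["nodes"])
```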
Similar to Testability is Everyone's Responsibility (20)
A coaching aid for those who want to help others achieve greater testability within their development team and wider organisation. Additionally, it can be used to track your own journey.
Testers Guide to the Illusions of Unit Testing (Ash Winter)
One area that testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. Being able to influence this type of testing in a positive manner is a skill that testers will need to get to grips with, as more companies start to embrace a model of lone testers in cross functional teams. The shift of focus from primarily the testing that testers do, to the testing that the team does, is a key shift in thinking and behaviour.
To facilitate this shift, I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers and help us realise what other layers of testing can cover most effectively. The last point is pertinent here, as knowing and guiding unit testing brings the role of integration, acceptance and exploratory testing into sharp focus.
This is a topic that has always intrigued me, having predominantly worked as a single tester on a team for the last five or so years. I reached out to the community with the question “What do testers believe about unit testing?” and received a lot of engagement. The good users of Twitter added another 50 or so illusions that testers might have about this layer of testing. I figured that based on that level of engagement, maybe this would make an interesting talk! It wasn’t only testers who responded too, suggesting that there might be some shared illusions about unit testing that are cross disciplinary.
The list alone is interesting but now I would like to share my analysis of it with you, focusing on:
* Recurring themes within the list and how to address them as a tester or developer.
* Particular illusions to look out for with examples from my recent past.
* A guide for developers to engage with testers on unit testing, and testers with developers.
Lightning talk based on the 10 P's of Testability by Robert Meaney, talk designed by Ash Winter. Make your testing life better by embrace testability as a team.
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but no answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and did battle with wider organisations' career models.
A Testers Guide to the Myths, Legends and Tales of Unit Testing (Ash Winter)
One area that testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. Being able to influence this type of testing in a positive manner is a skill that testers will need to get to grips with, as more companies start to embrace a model of lone testers in cross functional teams. The shift of focus from primarily the testing that testers do, to the testing that the team does, is a key shift in thinking and behaviour.
To facilitate this shift, I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers, and help us realise what other layers of testing can cover most effectively. The last point is pertinent here, as knowing and guiding unit testing brings the role of integration, acceptance and exploratory testing into sharp focus.
This document discusses testing infrastructure components that lie beneath applications. It defines infrastructure as the building blocks applications depend on, like hardware, virtualization, containers, and software. The author argues that testers should care about infrastructure testing because infrastructure problems are often found at the wrong level, and the product as a whole is symbiotic between applications and infrastructure. Various testing principles, examples, tactics, and tools are provided for both human/deterministic and machine/deterministic testing of infrastructure, as well as human/random and machine/random approaches.
Nobody /really/ likes change; it's human nature. Testers have a special relationship with changing tools and techniques: when they change, we tend to flounder a little and end up very nervous about our place in the new world. Continuous delivery is one such circumstance, and I see and speak to many testers really struggling. However, with a significant shift in outlook and a chunk of personal development, testers can excel in environments such as these. It's time to get out in front of a changing world, rather than always battling to catch up.
I want to share my experience of adding value as a tester in a continuous delivery environment: what new technologies and techniques I've learned, using your Production environment as an oracle, advocating testability and, most crucially, not overestimating what our testing can achieve. Testing is not the only form of feedback; it's time to let go of some of the aspects of testing we cling to.
Continuous delivery adds richness and variety to our role as testers. To me, it is a facilitator for the autonomy and respect that testers have craved for a long time, so let’s get involved...
No one paying attention to your test strategy? It's too long. More crucially, it has no scrolls. Here's a template for the agile testing quadrants, but with scrolls for extra unforgettability.
One of my recent endeavours has been to create a "career model" for the testers within my organisation. I sat in my office at home and designed "the testing wheel". I wanted it to be simple, inclusive and offer questions but few answers.
Careers are long and winding. My own career has not been a linear progression, so building a linear model seemed wrong to me. But, this brought me into conflict with the ideas of others.
My experience report will show the highs and lows of this model after introducing it into the wild: how people still tried to measure and rank with it, how mad people got when I refused to answer the questions it posed, and finally how it spread and did battle with wider organisations' career models.
This document discusses the author's experience consulting with various organizations to improve their testing capabilities. The author found that reviewing testing independently did not address root causes and that organizations often wanted quick fixes rather than systemic changes. The author eventually realized they needed to take a more holistic systems-thinking approach and focus on root causes rather than superficial solutions. They decided to focus more on systems thinking and become nomadic in their work.
The document discusses various topics related to software testing including testability, logging, environments, and feedback. It provides tips such as getting to know operations pain points, logging what matters without bloat, understanding past code intent, and focusing more on test flow and feedback rather than number of environments or testability. The document ends with inviting questions.
Pokémon GO faced several quality issues after its initial release such as app freezing, server overload causing scaling problems, inaccurate GPS, and device fragmentation affecting the ability to catch Pokémon. The document discusses lessons learned around testing and quality from these issues, including the need for a balanced testing approach with different types of testing like functionality, performance, compatibility, and usability testing. It also emphasizes that quality is multifaceted and requires continuously adding and improving features while focusing on the core idea.
This document discusses different perspectives on what testing is and provides the author's axioms about testing. The author believes that testing is a team-based activity where they help enable testing rather than doing most of the testing themselves. They view testing as a human, intellectual activity involving thinking, learning, sharing ideas. Complete testing is seen as impossible due to logical limitations and infinite possibilities, so balance and variation are important. Testing is considered a performance where the value is in applying it, not just thinking about it. Tools can assist testing but not replace it. Context is also important, as what works in one situation may not work in another.
This document provides instructions for collaboratively mindmapping an application using the online tool mindmup.com. Participants are instructed to get into groups of three, with one person creating a real-time collaborative mindmap session for their application and inviting the other two participants. They are then instructed to collaboratively map out the functions, forms, fields, views, integrations, and other aspects of the application to document its functionality and coverage for testing purposes.
This document discusses regression testing for a project to stabilize and upgrade the underlying technology of a system while maintaining normal service. It raises questions about how to test that nothing has changed when everything is changing, and whether changes to responsiveness and capacity would still be noticeable to customers. The regression testing involved risk modeling sessions with stakeholders to understand the system, exploring the system to determine what to test, and testing from the user interface down to the unit level. The results showed some issues were found and fixed, while other changes like increased speed and capacity were noticed by customers, raising questions about how to prevent customers noticing any differences after changes. It concludes that taking broad statements literally can be problematic, and that preventing all changes from being noticed may be impossible.
Coaching Model for Unrecognised Internal Models (Ash Winter)
This document proposes a coaching model to help testers recognize the testing models they already use intuitively and help them improve. The model focuses on using questioning rather than providing answers to guide testers to higher levels of thinking based on Bloom's Taxonomy. By getting testers to apply models through practice and reflection, and by iterating the model over time based on emergent needs, the coaching model aims to improve testing skills through collaborative learning.
The document discusses how too much testing can trap a team and product in a "death spiral". It describes a scenario where a team assembled to build a product but their testing strategy became ineffective and led to slower development and frustrated teams. The document then provides tips on how to avoid this, such as making sure unit tests actually test units, writing tests first, keeping dependencies loose, and evolving one's testing strategy so that testing serves the team rather than the other way around. It emphasizes that the goal is learning rather than failure.
The document discusses testing in a mobile context. It begins with a poll about mobile device and testing experience. It then tells a story about realizing the need to test mobile applications. Mobile testing was behind the curve, with early experts, dodgy tools, and outsourcing common issues. However, mobile presents an opportunity for testing as the same principles of testing still apply. The document lists several "axioms" or truths of testing that still hold for mobile, such as testing being a human activity and oracles being fallible. It argues that complete testing is impossible given device fragmentation, but test cases do not equal testing. Testing mobile is a performance in itself. The document concludes that bugs cost businesses and mobile is probably the…
Critical Thinking for Consultants-External (Ash Winter)
Critical thinking is important for testers as it helps them challenge assumptions, gather evidence to prove or disprove assumptions, and ask questions to improve their understanding. It is important for testers to think like a "Thinking Tester" by challenging beliefs, trying new things without fear of failure, seeing testing as an experiment to gather insights, and improving the product and project. When gathering data through questions, testers should understand stakeholders, consider different perspectives, and make it a collaborative and fun process. When analyzing data, testers should look at issues from different angles, understand limitations and biases, and determine root causes and risks. As consultants, testers need to apply critical thinking carefully and avoid antagonizing clients, while still facilitating…
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
2. Your system is hard to test
@northern_tester #expandconf
Your testers are at the sharp end
(And you should feel bad about it)
(I don’t know you but it's true)
4. If it's hard to test…
…it won't get tested
…the tester will test it
…it won't work
…you will drive your ops people mad
…your team testing culture will be a distant dream
5. It's about time that testability became everyone's responsibility
6. Coverage…
• Testability smells
• Testable architecture
• Power of operability
• Maintaining the mission
• Polished sales pitch
7. What is that subtle yet overpowering smell…
• Release management theatre
• Mono strategies
• Fear of change
• Teams looking for more testers
• Too many user interface tests
• Valuable scenarios not tested
• Lack of resilience testing
• “Sunny” days only
• Cluttered logging with no insights
• Excessive test repetition
• Issues hard to isolate and debug
• Tests that don’t die
• Lengthy build times
• Too many persistent environments
• Environments no one cares about
• Inanimate documentation
• Customer hand holding
• Poor relations with Ops
• Long lists of deferred bugs
• Hard to test dependencies
• Wrangling over scale (95th percentile?)
• Tester turnover
8. That’ll be your system architecture…
(And your team structures…)
No big deal
9. Get the right people in the room…
10. The rooms where architectural decisions are made…
11. But what do you do when you get in the room…
16. After some hard work…
• A useful run book
• Isolatable services for load testing
• Fewer services to melt our brains
• The ability to ask questions of the system
• Closer to production
• More sleep, less stress
• A better testing experience!
29. This is all very nice but sales…
https://blackswanfarming.com/cost-of-delay/
https://martinfowler.com/bliki/TechnicalDebtQuadrant.html
• Predictable
• Known quality
• Testing debt
• Tests you trust
30. Sounds like a tester's problem…
• Fast feedback
• Less context switching
• Customers
• Claims
• Build the right thing
• Resilience
• Configuration
• Observability
• Less toil
https://en.wikipedia.org/wiki/Margaret_Hamilton_(software_engineer)
https://twitter.com/mipsytipsy
https://twitter.com/lissijean
31. It affects us all, especially our friends in operations.
Testability makes software better. It is a collective responsibility.
Testers are often seen as responsible for testability. Not alone. They need advocates.
32. Thank you for your attention.
http://leanpub.com/softwaretestability
Editor's Notes
Some people say this, but it is slightly less vague than quality is everyone’s responsibility. Describes what a bit better and who might be doing it. Still hard to aim at while building hard to test systems.
We’ve all worked on hard to test systems. If you think you haven’t, then you are or were in such denial that you were a captive reliant on their captor. Stockholm syndrome. So, let's drop some truth bombs on our candy asses.
Simply won’t get tested - here’s a secret for you. IF IT HASN’T BEEN TESTED IT DOESN’T WORK, new stuff rarely does.
You the tester will be burdened with it. Volunteers will be hard to find.
Automation will target the areas that don’t break, or will just cover new stuff, rather than old fragile areas.
Even if you can do performance and load testing, it will be brittle, late and on inappropriate environments. Probably mislead you more than lead you. One of the key indicators of poor testability is lack of diversity within your testing.
IMPORTANT - your poor ops people. Sysadmins, DBAs and App Support will be driven mad by your application. Hard to test means hard to operate in live, where it matters.
Finally - your team will be divided by this system that you all hate, right down the lines of role. Unchecked, this will persist FOREVER.
I'm not sure testing is really working out that well. We always seem to be in the middle of existential angst. Agile, DevOps, what shall we fail to embrace next? Let's change our footing.
Reverse inspiration
Core concepts
Testability Engineering
It's about YOU
2000 bugs in 2 years
Communicated through tickets
Long test cycles against builds on long-lived environments
Mastered weirdly named tooling “Quality Centre”
Left with a weird feeling - we did tons of testing, but we never got any faster, no one got what they wanted…
Superset as in other 'ilities' are contained within it
Which makes it ethereal at times, which is part of its problem: it is hard to describe but it makes the world better.
For me this is true of a lot of aspects of testing, where we co-opt other technologies to enhance our testing. One of the things that makes testability so intuitive as a direction for the craft of testing.
But also makes it important, it is telling us to focus on the whole system, rather than making local optimisations.
Let’s make this a bit more real, by talking about 4 core ilities of testability. In no particular order though.
Observability allows us to understand the system as it actually is - we can explore and ask questions of the system
Observability determines what problems we can detect and how we evaluate if they are problems
Observability tools and techniques are the lens through which we view and filter that information.
Tracing through a micro service architecture is a great example of this. Seeing the whole transaction throughout a set of dependent services. Great for seeing effects and side effects of a behaviour.
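The tracing idea above can be sketched in a few lines. This is a hypothetical illustration, not anything from the talk: a correlation ID is attached to each incoming request and every log line downstream carries it, so one transaction can be followed through dependent services.

```python
import contextvars
import uuid

# One ID per request; contextvars keeps it available to everything
# the request touches, without threading it through every signature.
trace_id = contextvars.ContextVar("trace_id", default="none")

def log(message: str) -> None:
    # Every log line is stamped with the current trace ID.
    print(f"[trace={trace_id.get()}] {message}")

def payment_service(order: str) -> str:
    log(f"charging for {order}")  # effect in a downstream service
    return "charged"

def order_service(order: str) -> str:
    log(f"received {order}")
    return payment_service(order)  # downstream call inherits the ID

def handle_request(order: str) -> str:
    trace_id.set(uuid.uuid4().hex[:8])  # stamp the incoming request
    return order_service(order)

handle_request("order-42")
```

Grepping the logs for one trace ID then shows the whole transaction, including effects and side effects across services.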
Controllability determines the depth and breadth of our testing efforts - how deep you can go while still knowing what breadth you have covered. Without this you can go down the rabbit hole and miss the bigger picture. Without control, testing is pushed later and later.
Controllability determines what scenarios we can exercise - whether it be setting test data to the right state or ensuring a dependency returns a specific response.
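"Ensuring a dependency returns a specific response" is the everyday face of controllability. A minimal sketch, with hypothetical names, using a stub in place of an external rates service so a scenario can be driven deliberately:

```python
from unittest.mock import Mock

# Hypothetical checkout that depends on an external rates service.
def checkout(total: float, rates_service) -> float:
    rate = rates_service.get_tax_rate("GB")
    return round(total * (1 + rate), 2)

# Drive the "20% VAT" scenario on demand, no real service needed.
stub = Mock()
stub.get_tax_rate.return_value = 0.20
print(checkout(100.0, stub))  # 120.0

# The failure scenario is just as easy to exercise.
stub.get_tax_rate.side_effect = TimeoutError("rates service down")
try:
    checkout(100.0, stub)
except TimeoutError:
    print("handled the outage scenario")
```

Without this kind of control you wait for the real dependency to cooperate, and the awkward scenarios (outages, edge-case responses) never get tested at all.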
The more complex the system, the harder it is to test. Sounds intuitive, right? The harder it is to reason about a system (how many technology types, transport mechanisms, inputs, outputs and dependencies it has), the more problems can occur. Lots and lots of problems means lots of time spent testing, clarifying, checking, asking, exploring, re-exploring. You get the picture.
## People
The people in our team possess the mindset, skill set & knowledge to do great testing and they are aligned in their pursuit of a shared quality goal.
### Mindset
Each member of the team feels motivated, fulfilled and is focused on delivering a high-quality product. Team members understand that quality is a whole team responsibility, appreciate that testing provides critically valuable feedback, strive to facilitate better testing, shorten the feedback loop and endeavour to prevent defects over finding them.
### Skillset
Each member of the team has the skills and experience necessary to perform risk analysis, exploratory testing, write unit, integration and end to end tests. The team also has access to a testing specialist with deep testing expertise should their expertise be required.
### Knowledge
Each member of the team either has adequate knowledge or has a means of accessing adequate knowledge of the problem domain, technical domain, testing tools and techniques required to do great testing.
### Alignment
No one individual on the team is responsible for quality, the team has a shared vision of quality and work together to build quality in, facilitate better testing and to improve the team's way of working.
## Philosophy
The philosophy of our team encourages whole team responsibility for quality while building trusting, collaborative relationships across team roles, the business and with the customer.
### Whole team responsibility for quality
All team members actively identify and mitigate risks, consider testability during architectural discussions, collaborate on testing, prioritise the investigation and resolution of automation failures over new feature work and distil as much learning as possible from customer impacting issues.
### Collaborative relationships
Team members work closely together, making changes to the code to facilitate better testing as well as helping each other complete testing and automation tasks. Each team member talks regularly with people from the wider business and the customer in order to gain a better understanding of the stakeholders' needs.
## Product
The product is designed to facilitate great exploratory testing and automation at every level of the product.
### Designed to facilitate exploratory testing
Team members can quickly and easily set up whichever test scenarios they wish to explore and evaluate whether or not the system is behaving as desired.
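One common way to make scenario set-up quick and easy is a test data builder with sensible defaults, so a scenario overrides only what it cares about. A hypothetical sketch (the `Order` shape and defaults are invented for illustration):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Order:
    customer: str = "test-customer"
    items: int = 1
    status: str = "pending"

def an_order(**overrides) -> Order:
    # Sensible defaults; a scenario overrides only what it cares about.
    return replace(Order(), **overrides)

# "What does a refund look like on a large, already-shipped order?"
scenario = an_order(items=50, status="shipped")
print(scenario)
```

Standing up a state in one line, rather than clicking through the UI or hand-crafting database rows, is what makes exploratory testing cheap enough to happen often.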
### Designed to facilitate automation
Team members can write fast, simple and reliable automation that is targeted at the appropriate level. The majority of the automation is written at unit and integration level with only a bare minimum written at end to end level.
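A design choice that supports this balance is pushing behaviour into pure functions, so most checks can live at unit level. A hypothetical sketch (the pricing rule is invented for illustration):

```python
# Logic kept in a pure function: no browser, no database, no environment.
def shipping_cost(weight_kg: float, express: bool) -> float:
    base = 5.0 + 1.5 * weight_kg
    return round(base * (2.0 if express else 1.0), 2)

# Millisecond unit checks that would otherwise need slow end-to-end journeys.
assert shipping_cost(2.0, express=False) == 8.0
assert shipping_cost(2.0, express=True) == 16.0
```

The end-to-end suite then only needs to prove the pieces are wired together, not re-prove every rule through the UI.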
## Process
The process helps the team recognise risk, decompose work into small testable chunks, and discourages the accumulation of testing debt while promoting work at a sustainable pace.
### Recognise risk
Team members are encouraged to identify risks as early as possible so that they may be mitigated in the most appropriate manner.
### Small testable chunks
The team works together to create a shared understanding of what needs to be built and slices the work into small testable chunks with clearly defined acceptance criteria.
### Testing debt
Team members work together to ensure all the necessary testing activities are completed and findings are addressed before moving onto the next iteration.
### Sustainable pace
The team works together to ensure each chunk of work is adequately tested before moving onto new work. Overtime and out of hours work is actively discouraged.
## Project
The team is provided with the time, resources, space and autonomy to do great testing.
### Time
The team is provided with the freedom required to think, prepare and perform all the testing activities deemed necessary to mitigate the risks identified without being put under time pressure or working outside of normal working hours.
### Resources
The team has access to the information, test data, tooling, infrastructure, training and skills necessary to achieve their testing goals.
### Space
The team is provided with the space to focus on completing their testing tasks without too many distractions and minimal context switching.
### Autonomy
The team is given the autonomy to find their own solutions to testing challenges.
## Problem
The team has a deep understanding of the problem the product solves for their customer and actively identifies, analyses and mitigates risk.
### Customer problem
Each team member is constantly improving their understanding of who the customer is, what the customer values, their challenges, needs and goals. This knowledge enables team members to better recognise potential threats to the value of the solution.
### Risk
Team members have a deep understanding of their context which allows them to analyse business and technical risk, consider the potential impact of failure and mitigate it with the most appropriate techniques.
## Pipeline
The team's pipeline provides fast, trustworthy and accessible feedback on every change as it moves through each environment towards production.
### Feedback
The team members are confident that the various forms of automated testing provide comprehensive test coverage, detect functional regressions and provide feedback that's reliable, timely and actionable.
### Environment
The team can deploy a change into a production-like environment on demand and can safely perform a range of testing activities including resiliency testing, performance testing, exploratory testing, and so on.
## Productivity
The team considers and applies the appropriate blend of testing to facilitate continuous feedback and unearth important problems as quickly as possible.
### An appropriate blend of testing
The team works together to identify risk and take a holistic approach to mitigating risk using the appropriate combination of pre-production and production testing. The team uses a blend of targeted unit, integration, end to end, exploratory and nonfunctional testing to find problems as quickly as possible. These approaches are supplemented with the appropriate level of logging, monitoring, alerting and observability in production.
### Continuous feedback
The team breaks their work down into tiny testable chunks, pairs or mobs on coding, automation and testing tasks and seeks stakeholder feedback as early as possible.
## Production Issues
The team has very few customer impacting issues but when they do occur the team can very quickly recover.
### Customer impacting issues
The team uses an effective test strategy that ensures the majority of issues are either prevented or detected before escaping into production. This means that the team spends very little time firefighting customer impacting issues.
### Recovery
The team has built the system with monitoring and alerting that allows team members to detect production issues before they impact the customer. When issues are detected, adequate logging, observability and reversibility are in place to quickly debug and remediate.
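At its simplest, the alerting described here is a threshold over a recent window of signals. A minimal, hypothetical sketch of the idea:

```python
# Alert when errors in the recent window exceed an agreed budget,
# so the team hears about a production issue before customers do.
def should_alert(recent_errors: list, threshold: int = 5) -> bool:
    return sum(recent_errors) > threshold

# Quiet window: no alert. Error spike: page someone.
assert should_alert([0, 1, 0]) is False
assert should_alert([3, 4, 2]) is True
```

Real systems layer rate windows, severities and deduplication on top, but the testability point stands: the alert condition itself is a small, checkable function.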
## Proactivity
The team proactively seeks to continuously improve their test approach, learn from their mistakes and experiment with new tools and techniques.
### Continuously improve
The whole team regularly reflects on how effective their test approach is, discussing activities that are valuable, wasteful or need improvement and taking action where necessary.
### Learn from their mistakes
The whole team reviews each costly mistake in an effort to distill as much learning as possible, to identify and address gaps in the team's testing efforts.
### Experiment
Each team member is encouraged to learn about testing tools, techniques and is supported in experimenting with new ideas that they believe may benefit the team.
You will find the principles of testability engineering here