The document surveys techniques for testing software, including their strengths and limitations. It begins by noting that while unit tests are useful for preventing regressions and demonstrating that something works, a passing test conveys little information, and enumerating all possible test cases is impossible. Formal methods such as regular expressions and finite state machines can help reduce the input space. Property-based testing lets you specify properties that must always hold rather than individual test cases. The document advocates combining techniques such as typing, fuzzing, and formal methods alongside testing to gain more confidence in code correctness with fewer tests. The key is focusing on the goal of quality software rather than on any single testing technique.
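The summary mentions property-based testing without showing what a property looks like. As an illustration only (this code is not from the talk; `my_sort` and `check_sort_properties` are hypothetical names), here is a minimal hand-rolled property check in Python. Libraries such as Hypothesis or QuickCheck automate the input generation and shrink failing cases to minimal counterexamples.

```python
import random

def my_sort(xs):
    # Stand-in for the implementation under test.
    return sorted(xs)

def check_sort_properties(trials=500, seed=42):
    """Assert invariants that must hold for *every* input,
    rather than checking outputs for hand-picked cases."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = my_sort(xs)
        # Property 1: the result is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the result is a permutation of the input.
        assert sorted(out) == sorted(xs)
        # Property 3: sorting is idempotent.
        assert my_sort(out) == out
    return True
```

Each random input exercises all three properties, so a few hundred trials cover far more of the input space than a handful of hand-written examples.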
The Limits of Unit Testing, by Craig Stuntz
5. https://www.flickr.com/photos/filipbossuyt/21409291292/
Not impossible, though! Jumping over a bar 2 meters in the air isn’t easy, but it can be done if you’re prepared to work at it. Most people (product owners?) will be unwilling to pay the price.
So if you want no defects, I’ll tell you how to do that. Cut most of your features. Then do it again.
6. 80/20
80/20 rule for software: If you cut 80% of the features, maybe 20% of users will notice.
7. Most software has far too many features.
This is the bottom of the third page of fifth tab of the options dialog for the Java plug-in. If you select the highlighted option, this instructs Java to not install malware on
your machine during security updates. Naturally, it’s de-selected by default.
So there is plenty of room to eliminate features.
8. What kind of bugs do you want in your final, QA approved product?
10. <BEEP!>
<BEEP!>
And then we decided testing was good.
And then people said we should test all the ____ing time.
And from then on our software was perfectly reliable and secure. We can all go home.
That’s the end of my presentation, thanks for coming….
11. Now it turns out that fuzzing software makes security bugs jump out at you in a way that tests never will.
12. Now it turns out that static analysis makes resource leak bugs jump out at you in a way that tests never will.
Now it turns out that…
Wait. This is getting complicated.
What to do?
13. Agenda
• Whole project quality
• Goal Driven
• Realistic
I’m interested in building correct software. Sometimes people start by writing this off as impossible.
It’s easier to dismiss something as impossible than to ask if you can bite off a big chunk of it.
Whole project quality - not just individual pieces of testing piled on top of each other
Goal driven - Other techniques complement testing to find errors that unit tests can’t find
Realistic - These methods are useful on real-world software, today
14. https://www.flickr.com/photos/taylor-mcbride/3732682242/
It turns out the QA landscape is huge and there are some beautiful techniques available that you can combine to implement a realistic plan for achieving a desired level of
quality.
The biggest danger that will stop you from getting there is looking to just one technique to solve all your problems. Focus on the goal, not the mechanism.
15. Immediate Digression: Manual Testing
Really useful, but doesn’t fit the theme of the rest of the talk.
Still, really useful, so let’s talk anyway!
How is manual testing fundamentally different than unit tests, automated tests, etc.?
16. Sometimes we think of manual testing as poking weird values into inputs. And hey, it works sometimes: Both Android and iPhone lock screens broken by “boredom
testing.” But computers can do this faster.
The best application for manual testing: What is something that computers can never do by themselves today?
18. https://lobste.rs/s/fdmbn5
For the rest of this presentation I’m going to talk about tests performed by a computer.
For many people, unit tests are both a design methodology and the first line of defense against bugs.
19. Let’s Write a parseInt!
let parseInt (value: string) : int = ???
Because I’m a NIH developer, and because it’s a really simple example to play with, I’ll write my own parseInt. It’s simple, right? Maybe too simple to say anything
worthwhile?
20. Test First!
[<TestMethod>]
member this.``Should parse 0``() =
let actual = parseInt "0"
actual |> should equal 0
But I believe in test first and TDD, so… What sort of tests do I need for parseInt?
This looks like a good start. Of course, this test does not pass yet, because I haven't implemented the method. That failure is an important piece of information! If I can’t
parse 0, my parseInt isn’t very good.
So let's say that I go and implement some parseInt code. At least enough to make the test pass. Now, this test tells me very little about the correctness of the method.
That's interesting! Implementing the method removed information from the system! That seems really weird, but…
21. Test First!
[<TestMethod>]
member this.``Should parse 0``() =
let actual = parseInt "0"
actual |> should equal 0
[<TestMethod>]
member this.``Should parse 1``() =
let actual = parseInt "1"
actual |> should equal 1
Maybe I should add another test.
Am I missing anything?
22. Test First!
[<TestMethod>]
member this.``Should parse -1``() =
let actual = parseInt "-1"
actual |> should equal -1
[<TestMethod>]
member this.``Should parse with whitespace``() =
let actual = parseInt " 123 "
actual |> should equal 123
23. Test First!
[<TestMethod>]
member this.``Should parse +1 with whitespace``() =
let actual = parseInt " +1 "
actual |> should equal 1
[<TestMethod>]
member this.``Should do ??? with freeform prose``() =
let actual = parseInt "A QA engineer walks into a bar…"
actual |> should equal ???
Anything else? null, MaxInt+1, non-space whitespace, MaxInt, MinInt, 1729?
I’m starting to realize I have more questions than answers!
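Those corner cases can be collected into one table-driven check. The slides use F#; this is a hedged Python sketch, with Python's built-in int() standing in for the parseInt under test (note Python's integers are unbounded, so there is no MaxInt+1 row here):

```python
# Table-driven corner cases for a parseInt-like function. Each row pairs an
# input with either an expected value or an expected exception type.
CASES = [
    ("0", 0),
    ("1", 1),
    ("-1", -1),
    (" 123 ", 123),                    # surrounding whitespace
    (" +1 ", 1),                       # explicit plus sign
    ("A QA engineer walks into a bar…", ValueError),
    ("", ValueError),
]

def run_cases(parse):
    for text, expected in CASES:
        if isinstance(expected, type) and issubclass(expected, Exception):
            try:
                parse(text)
            except expected:
                continue
            raise AssertionError(f"{text!r} should have raised {expected.__name__}")
        else:
            assert parse(text) == expected, f"{text!r} -> expected {expected}"

run_cases(int)  # int() happens to accept signs and strip whitespace
```

The table makes the open questions concrete: every row is a policy decision about the contract, not just a test.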
24. More Questions
• Is this for trusted or non-trusted input?
• Can I trust that my function will be invoked
correctly?
• What is the culture of the input?
1) Trusted = exception; untrusted = fail gracefully.
2) For a private method, maybe. For a library function, no! Need tests per invocation?
3) Digit separators, currency symbols ($), etc.?
It sounds like we might need a lot of tests. How many? Does it seem weird that we’re talking more about corner cases than “success?” Does this teeny little helper
method really need to be perfect? I just wanna parse 123!
25. Getting one digit wrong really can get your company into the headlines.
Also, what about security-sensitive code: hashes, RNGs?
Does it seem like test case suggestions focused on error cases? Even if 90% of the time we get expected input, I’m far more interested in the reasons which explain 90%
of the failures.
26. Bad Error Handling Kills
“Almost all catastrophic failures (92%) are the result of incorrect handling of non-fatal errors explicitly signaled in software.”
https://www.usenix.org/conference/osdi14/technical-sessions/presentation/yuan
The study only tested software designed for high reliability. (Cassandra, HDFS, Hadoop…)
“But it does suggest that top-down testing, say, using input and error injection techniques, will be challenged by the large input and state space.”
27. Simple Testing Can Prevent Most Critical Failures, Yuan et al.
92% of the time the catastrophe was caused not by the error itself but rather the combination of the error and then handling it incorrectly!
28. How Can I Be Completely Confident in a Simple Function?
(Or at least do the right thing when it fails)
(And also ensure it’s always called correctly)
(Every. Single. Time)
Let’s face it, this is the bare minimum first step for trusting an application, right?
You might ask, “Why is this idiot rambling on about parseInt? I have 10 million lines of code to test.” I think it’s sometimes informative to start with the simplest thing
which could possibly work.
29. Unit Tests Are Great For:
• Helping you think through bottom-up designs
• Preventing regressions
• Getting you to the point where at least something works.
Not So Helpful For:
• Showing overall design consistency (top-down)
• Finding security holes
• Proving correctness or totality of implementation
We can use techniques like strong typing, fuzzing, and formal methods to complement testing, giving more control over code correctness. You will still
need tests, but you’ll get much more “coverage” with fewer tests.
Looking at the lists here, a theme emerges. To write a test, you needed a mental model of how your function should work. Having written the tests, however, you have
thrown away the model. All that's left are the examples.
30. When My Test Fails: I know I’ve found a bug (useful!)
When My Test Passes: I know my function works for at least one input out of billions (maybe useless?)
Does this make sense to everyone? Do you agree that a passing test doesn’t tell you much about the overall quality of the system?
Is there a way to ensure we always get correct output for any input?
Yes, but before we even get there, there’s a bigger problem we haven’t talked about yet.
31. How Can I Be Completely Confident When Composing Two Functions?
(Composing two correct functions should produce the correct result.)
(Every. Single. Time)
Let’s face it, this is the bare minimum second step for trusting an application, right?
More generally, I would like to be able to build complete, correct programs from a foundation of correct functions. Now verifying my 10 million lines of code is easy; start
with correct functions, then combine them correctly!
32. parseIntAndReturnAbsoluteValue = abs ∘ parseInt
If I have two good functions, like abs and parseInt, I would like to be able to combine them in order to produce a correct program.
But there’s a problem: parseInt, as written, isn’t total (it doesn’t produce a valid result for every possible input). I can call it with strings which
aren’t integers, and it’s really hard to use tests to ensure I call it correctly 100% of the time. How do I know it will always return something useful?
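One way to make the composition safe is to make parseInt total first. A hedged Python sketch (the slides use F#; try_parse_int and map_opt are hypothetical names, not part of any library):

```python
from typing import Callable, Optional

def try_parse_int(s: str) -> Optional[int]:
    """A total parseInt: every possible input maps to either an int or None."""
    try:
        return int(s)
    except ValueError:
        return None

def map_opt(f: Callable[[int], int], value: Optional[int]) -> Optional[int]:
    """Apply f only on success; failure (None) propagates unchanged."""
    return None if value is None else f(value)

# abs ∘ parseInt, with failure threaded through explicitly:
def parse_abs(s: str) -> Optional[int]:
    return map_opt(abs, try_parse_int(s))

print(parse_abs(" -42 "))  # 42
print(parse_abs("oops"))   # None
```

Because both pieces are total, the composition is total too: there is no input for which parse_abs misbehaves, only inputs for which it reports failure.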
33. let parseInt (str) =
    // implementation
One thing I need to do is ensure that people call my function passing a string as the argument, and that the thing it returns is actually an integer, in every case.
34. let parseInt (value: string) : int =
    // implementation
That’s not too hard. I can prove this with the type system.
As long as I don’t do anything which subverts the type system (unsafe casts, unchecked exceptions, null — or use a language which won’t allow it!), I can at least be sure
I’m in the right ballpark.
But how do I ensure I’m only passed a string representing an integer? Or should I? Can I force the caller to “do the right thing” and handle the error if
they happen to pass a bad value?
35. public static bool TryParse(
    string s,
    out int result
)
{
    // ...
}
Again, you can do it with the type system! I’m showing a C# example here, since the idiomatic F# solution is different.
36. public static bool TryParse(
    string s,
    out int result
)
{
    // ...
}

// appropriate when input is “trusted”
int betterBeAnInt = ParseInt(input);

// appropriate for untrusted input
int couldBeAnInt;
if (TryParse(input, out couldBeAnInt))
{
    // ...
It is now difficult to invoke the function without testing success. You have to go out of your way. This probably eliminates the need to use tests to ensure that every case
in which this function is invoked checks the success of the function.
Consider input validation: bad input is in the contract, so exceptions are inappropriate. Instead of returning an integer, return an integer and a Boolean.
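In Python, the closest analogue to C#'s out-parameter idiom is returning a (success, value) pair. A hedged sketch (try_parse is a made-up name, and Python can't force the caller to inspect the flag the way the C# signature nudges them to, so this shows the shape rather than the guarantee):

```python
def try_parse(s: str):
    """Mirror C#'s int.TryParse: bad input is part of the contract, so
    report failure with a flag instead of raising an exception."""
    try:
        return True, int(s)
    except ValueError:
        return False, 0  # like TryParse, the result defaults to 0 on failure

# Untrusted input: the flag comes back alongside the value.
ok, n = try_parse("abc")
print(ok, n)       # False 0

ok, n = try_parse(" +123 ")
print(ok, n)       # True 123
```

The design choice is the same as on the slide: move the error out of the exception channel and into the return type, so success-checking happens at every call site by construction.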
37. But There’s Still the Matter of That String Argument
We can prove that we do the right thing when our parseInt correctly classifies a given input value as a legitimate integer and parses it, or rejects it as invalid, but how can
we show that we do that correctly? Aren’t we back at square one? Types are super neat because you get this confidence essentially for free, and it never fails, but even
the F# type system can’t make sure I return the right integer.
38. State Space
(Diagram: inputs {0, 1} mapping to outputs {A, B})
In principle, your app, or your function, is a black box. Same input, same output. Easy to test, right?
This application should have only two possible states!
To be totally confident in your system you need to test, by some means, the entire state space (LoC discussion).
39. State Space
(Diagram: inputs “Hello” and “World” mapping to outputs {A, B}, plus a die and a clock)
It gets harder quickly. If my inputs are two strings instead of two bits, I now have considerably more possible test cases!
(Click) In the real world, you have additional “inputs” like time and randomness, and whatever is in your database.
40. Formal Methods
Using formal methods means the design is driven by a mathematical formalism. By definition, this is not test driven development, although you will probably still write
tests. Formal methods are sometimes considered controversial in the software development community, because they acknowledge the existence and utility of math.
41. ____ + 1234 ____
[ \t]*[+-]?[0-9]+[ \t]*
It’s easier to use formal methods if there’s an off-the-shelf formalism you can use. For the problem of parsing, these exist!
One way to reduce the input domain of the parseInt function from an untestably large number of potential states is to use a regular expression. This is not
the sort of regular expression you might encounter in Perl or Ruby; it is a much more restricted syntax, typically used on the front end of a compiler. The
important point here is that we can reduce the number of potential states of the function to a number you can count on your fingers.
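That pattern can be checked directly. A minimal Python sketch, assuming the slide's regular expression with the tab escape restored:

```python
import re

# The slide's pattern: optional whitespace, optional sign, one or more
# digits, optional trailing whitespace.
INT_RE = re.compile(r"[ \t]*[+-]?[0-9]+[ \t]*")

def is_int_literal(s: str) -> bool:
    # fullmatch: the entire string must fit the pattern, not just a prefix.
    return INT_RE.fullmatch(s) is not None

print(is_int_literal(" +123 "))  # True
print(is_int_literal("12a"))     # False
print(is_int_literal(""))        # False: at least one digit is required
```

Every input a caller could ever pass now collapses into one of two cases before parsing even begins.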
42. (State diagram for the regular expression: states 0–4, transitions labeled [ \t], [+-], and [0-9])
REs convert to FSMs.
States 3 and 4 are accepting states.
4-5 states, 2 of them accepting, far fewer than “any possible string!”
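One plausible reading of that diagram as a transition table, sketched in Python (state numbering follows the slide; I'm assuming state 2 is the implicit dead state, which the slide's "4-5 states" hedge seems to allow):

```python
def char_class(c: str) -> str:
    """Collapse each character into one of the diagram's edge labels."""
    if c in " \t":
        return "ws"
    if c in "+-":
        return "sign"
    if c.isdigit():
        return "digit"
    return "other"

# (state, label) -> next state; missing pairs go to dead state 2.
TRANSITIONS = {
    (0, "ws"): 0, (0, "sign"): 1, (0, "digit"): 3,
    (1, "digit"): 3,
    (3, "digit"): 3, (3, "ws"): 4,
    (4, "ws"): 4,
}
ACCEPTING = {3, 4}  # as on the slide

def accepts(s: str) -> bool:
    state = 0
    for c in s:
        state = TRANSITIONS.get((state, char_class(c)), 2)  # 2 = dead state
        if state == 2:
            return False
    return state in ACCEPTING

print(accepts(" +123 "))  # True
print(accepts("+ 123"))   # False: whitespace after the sign isn't allowed
```

The entire behaviour of the validator is this table: a handful of (state, label) pairs you can audit exhaustively by eye.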
43. Totality checking. Breaking my vow to avoid showing implementations.
Lots of code here, but the important word is at the top.
I’ve hesitated to show implementations until now, but I can’t avoid it here, because the proof is built into the implementation.
44. When My Test / Type Checker…
Test fails: I know I’ve found a bug (useful!)
Type checker fails: I might have a bug (sometimes useful, sometimes frustrating)
Test passes: I know my function works for at least one input out of billions (maybe useless?)
Type checker passes: There is a class of bugs which cannot exist (awesome!)
We can expand this chart now.
Tests and types are not opponents; they complement each other.
Where one succeeds, the other fails, and vice versa.
45. Property Based Testing
Still, there are cases where it’s hard to use formal methods.
Not every problem has an off-the-shelf formalism ready to use.
But we don’t have to just give up and accept unit tests as the best we can do!
46. let parsedIntEqualsOriginalNumber =
    fun (number: int) ->
        number = parseInt (number.ToString())
> open FsCheck;;
> Check.Quick parsedIntEqualsOriginalNumber;;
Falsifiable, after 1 test (1 shrink) (StdGen (1764712907,296066647)):
Original:
-2
Shrunk:
-1
val it : unit = ()
>
Can you state things about your system which will always be true?
What must be true for my system to work?
Looks like I have to do some work on my implementation here!
Important: I didn’t have to specify the failing case, as I would with a unit test. FsCheck found it for me. In unit testing, you start with a mental model of the specification,
and write your own tests. With property based testing, you write down the specification, and the tests are generated for you.
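The core of FsCheck's loop fits in a few lines. A hedged Python sketch with no library at all (check_property is a made-up name; real tools add shrinking and much smarter generators):

```python
import random

def parse_int(s: str) -> int:
    # Stand-in implementation under test; Python's int handles signs already.
    return int(s)

def check_property(prop, gen, runs=200):
    """Minimal FsCheck-style loop: generate inputs, stop at the first
    counterexample, return None if the property held every time."""
    for _ in range(runs):
        x = gen()
        if not prop(x):
            return x          # falsifying input
    return None

# The slide's property: parsing a number's string form gives the number back.
counterexample = check_property(
    lambda n: parse_int(str(n)) == n,
    lambda: random.randint(-10**6, 10**6),
)
print(counterexample)  # None: the round-trip property holds for int()
```

The division of labour is the same as with FsCheck: you state the property once, and the tool manufactures the examples.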
47. PBT: Great for helping to find bugs in specific routines.
Fuzzing: Great for finding unhandled errors in entire systems.
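The fuzzing side of that contrast can be sketched just as minimally: throw random strings at the parser and record which exception types escape. (fuzz is a made-up helper; real fuzzers such as AFL are coverage-guided and far more effective.)

```python
import random
import string

def parse_int(s: str) -> int:
    return int(s)  # stand-in for the parser under test

def fuzz(target, runs=1000):
    """Feed random printable strings to the target; a robust parser should
    fail only in its documented way, so collect any *other* exception types."""
    surprises = set()
    for _ in range(runs):
        s = "".join(random.choices(string.printable, k=random.randint(0, 12)))
        try:
            target(s)
        except ValueError:
            pass                      # documented failure mode: fine
        except Exception as e:        # anything else is a bug to investigate
            surprises.add(type(e).__name__)
    return surprises

print(fuzz(parse_int))  # set(): int() only ever raises ValueError on a str
```

Unlike a property test, no specification is needed beyond "doesn't blow up unexpectedly", which is why fuzzing scales to whole systems.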
49. Runtime Validation
Sometimes the most important value to test is the only one that matters to you at runtime.
Assertions are a little under-used, because we tend to think of them as checking trivial things.
But using the techniques of property-based testing, we can do end to end validation of our system.
50. let input = " +123 "
let number = parseInt input // 123
let test = number.ToString() // "123"
if test <> input // true!
then
    let testNumber = parseInt test // 123
    if number <> testNumber // false (yay!)
    then failwith "Uh oh!"
// We’re safe now! Use number…
Similar to property based testing
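The slide's round-trip check translates almost directly into Python. A hedged sketch (parse_int_checked is a made-up name; Python's int stands in for parseInt):

```python
def parse_int_checked(s: str) -> int:
    """Parse, then validate the one value that matters at runtime:
    re-render the result and parse it again. If the two parses disagree,
    fail loudly rather than proceed with a number we cannot trust."""
    number = int(s)               # e.g. " +123 " -> 123
    test = str(number)            # 123 -> "123"
    if test != s:                 # input was normalised (whitespace, sign)
        if int(test) != number:   # the re-parse must agree with the original
            raise AssertionError("Uh oh!")
    return number                 # we're safe now; use number…

print(parse_int_checked(" +123 "))  # 123
```

It is the property-based round-trip idea applied to the single input you actually received, instead of to thousands of generated ones.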
52. The Quality Landscape
• Manual testing
• Integration tests
• Unit tests
• Runtime validation
• Property based testing
• Fuzzing
• Formal methods
• Static analysis
• Type systems
• Totality checking
The long and the short of it: Think big! Don’t “test all the ___ing time” because somebody told you to. Keep your eyes on the prize of software correctness.
Ask yourself which things are most important to the overall quality of your system. Pick the tool(s) which give you the biggest return.
Synopsis of each.