This document outlines an introduction to unit testing presentation. The presentation covers the reasoning behind unit testing, what unit testing is, benefits like better design and reduced costs, and how to write better tests. It also discusses test anatomy, demos writing tests, digging deeper into testing strategies, what to test, test coverage, and types of unit tests.
Everyone has seen smelly code that makes you cringe, but some smells are more subtle than others. Developing your code nose to recognize smells is a crucial skill for code craftsmanship. In this session we'll learn how to identify and fix smells like Feature Envy, Shotgun Surgery, and Indecent Exposure. Improve your olfactory perception for sweeter smelling code!
A code review is basically a technical discussion which should lead to improvements in the code and/or sharing
knowledge in a team. As with any conversation, it should have substance and form.
What’s involved in a good code review? What kind of problems do we want to spot and address? Trisha Gee will talk
about things a reviewer may consider when looking at changes: what potential issues to look for; why certain
patterns may be harmful; and, of course, what NOT to look at.
But when it comes to commenting on someone’s work, it may be hard to find the right words to convey a useful message
without offending the authors - after all, this is something that they worked hard on. Maria Khalusova will share
some observations, thoughts and practical tricks on how to give and receive feedback without turning a code review
into a battlefield.
How to do code review? What is the process? What to look for? How to keep a good vibe and improve the team?
These and many other questions are addressed in this presentation.
Why Automated Testing Matters To DevOps (dpaulmerrill)
“Automated testing is a pain in my ear! Why can’t QA get it right? Why do the tests keep breaking? And for Pete’s sake, stop blaming the infrastructure!”
…Ok, maybe you chose a different word than “ear”.
How often do you have thoughts like this? Daily?
Let’s talk about these frustrations, why they exist and how we can use them to improve our systems!
In this talk, Paul Merrill, founder and Principal Automation Engineer at Beaufort Fairmont explores why automated testing matters to DevOps. Join us to learn how automated testing can be a useful tool in the creation and release of your systems!
Test-Driven Development: Why, How and Smells (Prowareness)
These are the slides used in our Mastering Agile Development session in September 2012. They give some insights into the why, how, and smells of doing Test-Driven Development.
Test Driven Development - a Practitioner’s Perspective (Malinda Kapuruge)
Guest lecture at Swinburne University of Technology, Melbourne. We introduced TDD concepts to students. We also did a live interactive demo with students to understand benefits of TDD.
Finally, we discussed benefits as well as pitfalls from a practitioner's point of view.
Adventures of a developer who suddenly found himself in the role of head of QA.
Presentation as held at ACCU 2010 (I think - had to recreate it from the last draft after I lost my laptop on the Eurostar in the ensuing volcanic chaos on the way back from Oxford).
"Challenges Faced by Testers Working on Agile Teams" by Aldo Rall (IndigoCube)
As a tester, moving into an Agile team can be frustrating and difficult, often leaving testers disillusioned and projects suffering from a lack of quality.
In this talk, Aldo Rall looks at the typical challenges testers face when moving into the Agile world, and touches on some key points that need consideration for testers to adapt successfully in this new and often strange world called Agile.
Creating change from within - Agile Practitioners 2012 (Dror Helper)
Faced with management that does not care about "being agile", what can a single developer do? Quite a lot!
Every developer has the power to improve the organization they work in through small iterative steps, and I can show you how.
If you want to make the change and don't know where to start, look no further: in this session I'll share my experience and show a few tips and tricks I've learnt, as well as discuss the dos and don'ts that can make all the difference.
- How to be an agile developer in a waterfall company.
- Influencing people without formal authority.
- Using the right practices that make the difference
- How to avoid alienating people
- Discovering your allies
- Know when to fight and when to "retreat" and cut your losses
- Making a change without disrupting the daily routine
- What being an agile evangelist is all about
Gain a deeper understanding of what Exploratory Testing (ET) is, the essential elements of the practice with practical tips and techniques, and finally, ideas for integrating ET into the cadence of an agile process
A brief introduction to test for the non-tester. Can be used for both business and development, although it is primarily focused on developers and persons interested in becoming testers.
Generating unit tests based on user logs (Rick Wicker)
Developing without automated testing is hard and risky. Making legacy code testable is hard. Try to improve your logging, add a data layer and see if you can drive testing from user behavior.
Architectural Testability Workshop for Test Academy Barcelona (Ash Winter)
Workshop delivered at Test Academy Barcelona on 30th January 2020, including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
5. What is Unit Testing?
“In computer programming, unit testing is a method by which
individual units of source code are tested to determine if they
are fit for use. A unit is the smallest testable part of an
application” – Wikipedia
(emphasis added)
6. Why Unit test?
• Better Design
• Easier change
• Living documentation
• Reduces cost
Why do we test things?
- Integration testing
- UA testing
- Stress testing
The idea is to write small bits of code that exercise your production code in specific ways.
- Drives higher-fidelity requirements.
- Growing suite of smoke tests.
- Refactoring confidence.
- Bug regression testing: bug found, create a test that replicates the bug, fix the bug, the test should pass, and the test lives on as a regression test.
- Requirement-change "tripwire": notifies you if a change (planned or unplanned) causes a test failure.
- "A bug found earlier costs less": time spent writing tests comes "from" time spent on debugging.
Arrange, Act, Assert (AAA):
- Arrange: get everything set up.
- Act: perform the action you're interested in.
- Assert: verify the results match the expectations.
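The AAA structure is language-agnostic; a minimal sketch in Python (the `ShoppingCart` class is invented here purely to have something to test):

```python
class ShoppingCart:
    """Tiny hypothetical class under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_total_sums_item_prices():
    # Arrange: get everything set up.
    cart = ShoppingCart()
    cart.add("apple", 2)
    cart.add("pear", 3)
    # Act: perform the action you're interested in.
    result = cart.total()
    # Assert: verify the result matches the expectation.
    assert result == 5
```

Keeping the three phases visually separated like this makes a failing test much easier to read later.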
The pithy answer is everything. Which is terrible advice. Everything coverage may be justified for you and your business, but unless you are designing life support systems for NASA, I doubt it. Everything coverage is high cost (time) and slows development. If the risks warrant that level of mitigation, so be it, but don't bite that off lightly.
Should you test the framework? Again, maybe, but probably not. I feel very comfortable saying that anything you use from a System.* dll has already been better tested than:
1. Anything you'll need
2. Anything you could add
3. Any code you've already written
I'm not saying they're bug free. I'm saying that the likelihood that you'll find a bug is low, and the likelihood that you'd find that bug in a unit test is even lower.
Where do you think we start for greenfield?
- New code
- Existing code
- Test as you go
- Test Driven Development (TDD)
- Behavior Driven Development (BDD)
Test as you go!
- Bug tests
- Touched code gets tests from now on (new dev or updates)
Test the important bits. Bang for your buck. But what is important?
At every level of a project, unit test coverage should be directly proportional to the likelihood of change and the criticality of the system. This isn't a math equation or strict rule, but you can think of it like one. Zero chance of change shouldn't mean you never test a critical system; it is just a feeling you can use.
Importance: do you have a core subsystem that literally everything else depends on? Unit test the living daylights out of it.
Likelihood of change: do you have a critical input that comes from a third party outside your influence/control? Test it to death. Is there an oft-updated bit that is constantly evolving due to user feedback?
You can even invent a quick scoring system if you'd like. But the point is: start with the meaty bits of the solution, project, subsystem, class, etc. and work from there.
Quantity is not quality: quality over coverage. Covered code != well tested code.
The code coverage metric is only useful for highlighting untested code. It is NOT a measure of progress, completion, or confidence. Using coverage as anything more gets the logic backwards; the implication only runs one way. Compare BMI: an obese person will have a high BMI, but a high BMI doesn't mean obese.
Indirect input: anything provided to a SUT that isn't a parameter in the invocation (e.g. database records).
Indirect output: anything provided BY a SUT that isn't returned to the caller.
State verification: "at the end". Inspect state after the SUT is exercised and compare to expectation, e.g. asserting that the DeletedDate property is set after you soft-delete an object.
Behavior verification: asserting that something happens. Especially useful for indirect outputs, void returns, etc. Capture the indirect outputs as they occur and compare them to expected behavior. Example: asserting that the SaveChanges() method was called once and only once as part of the aforementioned soft-delete.
Delta: a form of state verification, comparing pre- and post-state and asserting that a certain change occurred.
Guard: fails the test if a condition isn't satisfied. For example, a guard assertion could fail a test if an If statement branch you didn't want exercised was called.
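The soft-delete example above can be sketched in Python with the standard library's `unittest.mock`; the `Widget`/`WidgetService` names and the `save_changes` method are assumptions made for illustration, not part of the original slides:

```python
from datetime import datetime
from unittest.mock import Mock


class Widget:
    def __init__(self):
        self.deleted_date = None


class WidgetService:
    """Hypothetical SUT: soft-deletes a widget and persists the change."""
    def __init__(self, repository):
        self.repository = repository

    def soft_delete(self, widget):
        widget.deleted_date = datetime.now()   # state change
        self.repository.save_changes()         # indirect output


def test_soft_delete_sets_deleted_date():
    # State verification: inspect state after the SUT is exercised.
    service = WidgetService(Mock())
    widget = Widget()
    service.soft_delete(widget)
    assert widget.deleted_date is not None


def test_soft_delete_saves_once():
    # Behavior verification: assert the indirect output occurred
    # once and only once.
    repository = Mock()
    service = WidgetService(repository)
    service.soft_delete(Widget())
    repository.save_changes.assert_called_once()
```

The first test only looks at the end state; the second never inspects the widget at all, only the interaction with the repository.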
Sweet Spot A: this is pushing your code-level design and principles (SRP, DRY, YAGNI, etc.).
Sweet Spot B: this is your regression testing, behavior assertion, etc.
Positive testing: does this produce the expected output when given good inputs?
Negative testing: does this produce the expected output when given bad inputs?
Exception testing: does this fail gracefully if an exception is encountered? Most important for data integrity issues.
Boundary testing: pushing to the limits. Stress/performance testing, full Unicode character sets, max string length, etc.
Bug testing: write tests (may be integration tests) that recreate a bug, then fix the code until the bug goes away. The test lives on as a regression test.
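A compact sketch of the first few categories against one function (the `parse_age` function and its 0–150 range are invented for illustration):

```python
def parse_age(text):
    """Hypothetical function under test."""
    age = int(text)  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age


def test_positive_good_input():
    # Positive test: expected output for a good input.
    assert parse_age("42") == 42


def test_negative_bad_input():
    # Negative test: a bad input should fail, and fail gracefully.
    try:
        parse_age("not a number")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")


def test_boundary_inputs():
    # Boundary test: push right up to the limits.
    assert parse_age("0") == 0
    assert parse_age("150") == 150
```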
- No conditionals ("if" or "switch")
- No loops ("do", "while", "for", "foreach")
- No exception catching
Testing code branches means multiple tests. Testing exception behavior is different.
We’re testing that the HasMultipleAccounts method is working properly. How do you run this twice? If it fails, which condition did it fail on? What controls whether the instance has multiple accounts?
We control the inputs, so we can be sure that the conditions are the same for every execution. Different branches are tested with different test vectors.
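One way to picture this: give each branch its own test with an explicit input, instead of one looping test. A sketch (the `Customer` shape is assumed; the slides' actual class isn't shown in the notes):

```python
class Customer:
    """Hypothetical class under test."""
    def __init__(self, accounts):
        self.accounts = accounts

    def has_multiple_accounts(self):
        return len(self.accounts) > 1


def test_has_multiple_accounts_true_for_two_accounts():
    # We control the input, so the condition is identical every run.
    customer = Customer(accounts=["checking", "savings"])
    assert customer.has_multiple_accounts() is True


def test_has_multiple_accounts_false_for_one_account():
    # The other branch gets its own test vector, not a loop or an if.
    customer = Customer(accounts=["checking"])
    assert customer.has_multiple_accounts() is False
```

When one of these fails, the test name alone tells you which branch broke.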
Same results every time, assuming no code changed.
Avoid things like Random and DateTime.Now(). Avoid external systems.
Consistent: if you run it 1000 times, it should always give you the same result, across different configuration files, different machines, etc.
This is tempting. It seems like using Random would force you to write tests that truly flex the system: you can't write a test that only works for the narrow dataset you give it if you're using random. But Random truly is random.
Will fail ~20% of the time.
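A common remedy is to inject the time (or randomness) source rather than calling the clock inside the code under test. A hedged sketch, with an invented `Invoice` class:

```python
from datetime import datetime


class Invoice:
    """Hypothetical class: takes the clock as a dependency instead of
    calling datetime.now() internally, so tests can freeze it."""
    def __init__(self, due, now=datetime.now):
        self.due = due
        self._now = now

    def is_overdue(self):
        return self._now() > self.due


def test_is_overdue_is_deterministic():
    # Freeze "now" so the test gives the same result on every run,
    # on every machine, on every date.
    frozen_now = lambda: datetime(2024, 1, 2)
    invoice = Invoice(due=datetime(2024, 1, 1), now=frozen_now)
    assert invoice.is_overdue() is True
```

Production code uses the default real clock; only the tests pass a frozen one.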
In unit tests, readability is king. In production code, sometimes you have to make decisions that hurt readability (e.g. performance tuning). But unit tests always, always, always err on the side of readability. If you have badly performing unit tests, you have other issues; you don't tune a unit test.
Remember, future-you will want different information than present-you. You want to see different info when you're getting it working for the first time, when it is passing, when it breaks, when you refactor, etc.
Self-descriptive: readability is king. Unit tests are living documentation: docs for developers, behavior specifications which are always up to date (unlike comments). This includes:
- Test organization
- Naming (classes, methods, variables, etc.)
- Simple code
- Informative assertion messages
Atomic: keep tests small. Only two possible results: pass or fail. No partial successes.
If you need to run the debugger and step into a unit test to figure out why/where a test is failing, your test probably isn't atomic.
No more than a handful of assertions, ideally one. Avoid gobs of assertions in your tests, as this leads to needing the aforementioned debugger. Note that this does not preclude using data tests, where the same method is executed repeatedly by the framework with different inputs.
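Such a data test might look like the following in Python's `unittest`, where `subTest` reports each input as its own pass/fail (the `is_even` function is a trivial stand-in):

```python
import unittest


def is_even(n):
    # Trivial hypothetical function under test.
    return n % 2 == 0


class IsEvenTests(unittest.TestCase):
    def test_is_even_cases(self):
        # A data test: the framework reruns the same check per input.
        # Each case is reported separately, so the test stays atomic
        # even though it covers several inputs.
        cases = [(2, True), (3, False), (0, True), (-1, False)]
        for n, expected in cases:
            with self.subTest(n=n):
                self.assertEqual(is_even(n), expected)
```

Frameworks in other ecosystems offer the same idea (e.g. parameterized or data-driven test attributes).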
The static factory method is creating randomized instances, so for this test we have a conditional loop to keep creating them until we get the kind of random instance we want.
Is this going to be fast? Maybe, maybe not. It depends on the number of loops we need.
Is this self-descriptive? Not in the slightest. What is this testing?
How about deterministic? Will this run the same every time? Sort of. The while loop should ensure that the rest of the method passes, but the exact path will vary on every execution.
Is it atomic? Sort of. This is a fairly small test, but you can easily imagine a more convoluted setup where you have to set up a lot of dependencies and whatnot. I would say the number of assertions violates the atomic principle: you've got 5 assertions on two objects.
If this fails, will you immediately know why? Ideally you shouldn't even need to look at the test to know what happened, if the test is properly descriptive.
Multiple assertions are okay so long as the test preserves…
Each test is responsible for one scenario only. It may need multiple assertions to fully verify; balance this with the Simple characteristic.
Multiple assertions are fine so long as they verify the same behavior and don't violate atomicity. If you need the debugger to understand how/why a test failed, it's a bad test.
A single behavior that spans multiple methods (private methods, properties, etc.) can be tested with one test. A single method that has multiple behaviors should be tested with multiple tests.
Wiki: "A unit is the smallest testable part of an application", NOT a single method or class.
Environment: this sort of extends Consistent. The success/failure of a test shouldn't depend on the state of an external system, like a database.
Other tests: the success/failure of a test shouldn't depend on other TESTS. Be mindful of instance variables; different frameworks do different things.
Other classes: isolate from dependencies. Mock dependencies (we'll come back to this).
A moment on ordered tests. You can create ordered tests, in which a prescribed execution order is preserved. This can very easily lead to some bad tests. I've seen it used once in a way that may be acceptable: to bridge atomicity with SRP. A series of tests for a single behavior were ordered so that their assertions worked progressively deeper into the behavior; each test was very atomic and self-descriptive, yet all were testing a single behavior. The ordered test preserved the narrative. You can also argue that the behavior being tested should be refactored so this wasn't needed, in which case ordered tests really only belong in integration testing.
Never deployed to production. No "test hooks". You will see .ctor overloads added for testing: bad.
Execution, evaluation, summarization, results distribution. If it is a manual process, it won't happen.
Gated builds, running tests on builds, etc.: let the framework do the lifting for you.
Results: lots of options here. RSS, email, SMS, TFS work items; whatever your process is, automate it.
Test suites should be kept "per business module". Tests follow the tested code: if you share a project between solutions, the tests should go with that project and be executable without other projects.
Indirect input: anything provided to a SUT that isn't a parameter in the invocation (e.g. database records).
Indirect output: anything provided BY a SUT that isn't returned to the caller.
Isolation eliminates testing/environment/experiment variables, not programming variables.
Everyone has different terms. xUnit Patterns terminology: http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
Dummies are never actually used; they are just passed around filling out parameter lists. Passing null to a param that isn't used in a particular unit is a dummy. A better practice for a dummy is to throw exceptions on every method, so you can ensure it isn't invoked by your SUT. (No behavior, never called, no indirect input or output.)
Stubs verify indirect inputs. They are usually hardcoded or configured in the test to return the same responses regardless of the SUT's input, and they ignore indirect output.
Spies verify indirect output. A spy captures indirect output for later verification and may optionally provide indirect inputs.
Fakes have working implementations, but do something that makes them not-production code, like using an in-memory database. They do not offer a control point to the test and may be stateful. (No indirect input; uses indirect output.)
Mocks are useful for verifying behavior, which is important for indirect outputs. Something like a method that saves changes to the database and returns void has indirect outputs. A classic example is calling to a logger when an exception is thrown. With mocks, you set up expectations and then verify they are met. (Can provide indirect input; verifies correctness against expectations.)
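Two of these roles can be sketched with Python's `unittest.mock` (the `UserService`, `repository`, and `notifier` names are invented for illustration; in .NET the equivalent would typically be built with Moq):

```python
from unittest.mock import Mock


class UserService:
    """Hypothetical SUT with two collaborators."""
    def __init__(self, repository, notifier):
        self.repository = repository
        self.notifier = notifier

    def greet(self, user_id):
        user = self.repository.find(user_id)   # indirect input
        self.notifier.send(f"Hello, {user}!")  # indirect output
        return user


# Stub: hardcoded to feed indirect input; we never check how it was called.
stub_repo = Mock()
stub_repo.find.return_value = "Ada"

# Spy/mock: captures the indirect output for later verification.
spy_notifier = Mock()

service = UserService(stub_repo, spy_notifier)
assert service.greet(42) == "Ada"
spy_notifier.send.assert_called_once_with("Hello, Ada!")
```

The same `Mock` object plays either role; what makes it a stub versus a spy/mock is whether the test configures its return values or verifies its calls.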
There are lots of frameworks with lots of differences. You'll need to find the one that works for you and your team.
I've been using Moq for a couple of years. A couple of the reasons I like it: type safe, lambda syntax, no record/replay paradigm. Those features have become a lot less unique in the subsequent years, but I've stuck with Moq because I know it and it hasn't given me any problems.
Moq (pronounced "mock" or "mock-you") is a test doubling framework, and it can be used to create/configure all four test doubles we just talked about. This, along with my tendency to refer to all doubles as mocks, creates some confusion. If you're not clear on it or I misspeak, let me know and I'll clarify.
Classic testing trends towards stubs/fakes/dummies because it uses state verification: it doesn't care about implementation or behaviors, only final state. Mockist testing tends to mock everything.
Pros/cons: classic fakes/stubs take time to set up and are reused by a lot of tests, so bugs in the fake/stub implementation can be tough to track down, far reaching, and can mask real production bugs. Mocks can get complicated to set up properly, and are typically set up per-test. They are also inherently more implementation driven, which couples them more tightly to the code they test.