These are some of the comments that we have heard from our customers. Are any of these being said by your testers? Broken down another way, can you answer these affirmatively? Do you know consistently when your software will ship? Are the defect reports filed by your testers truly effective? No? <click to next>
Does this sound like a wish list? How much money is lost in the time that it takes to track down a bug? How much money is lost in the time it takes to isolate the differences between your development and test environments?
It is also important to understand where most testing happens on the spectrum from general testing to the more technical specialist testing.

The Generalist Testers are usually professional testers with no coding background. Often these testers are experts in the business process or tool that is being developed. On the opposite side of the spectrum is the Specialist: a tester with strong coding skills. A fun side note: Microsoft’s testers are usually converted developers and tend to be on the specialist side of the graph.

Black-box testing is a method of testing software that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure, and programming knowledge in general, is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and design, to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.

White-box testing (a.k.a. clear box testing, glass box testing, transparent box testing, or structural testing) is a method of testing software that tests the internal structures or workings of an application as opposed to its functionality (black-box testing). An internal perspective of the system, as well as programming skills, are required and used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. It is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.
It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. White-box test design techniques include: control flow testing, data flow testing, branch testing, and path testing.

API testing (application programming interface testing) is a specific type of white-box testing of the application, focusing on public and private APIs.

<Question to Audience> Looking at this spectrum, where does most testing happen today? <collect answers and click> Where do most testing tools target today? <collect answers and click>
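The contrast between the two approaches is easiest to see on a tiny example. In this hypothetical sketch (the `classify_age` function and its spec are invented for illustration), the black-box test is derived purely from the written specification, while the white-box test is written with the branch structure in view so that every path and boundary is exercised:

```python
def classify_age(age):
    """Spec: negative -> 'invalid', 0-17 -> 'minor', 18 and up -> 'adult'."""
    if age < 0:
        return "invalid"
    elif age < 18:
        return "minor"
    return "adult"

# Black-box: derived purely from the spec, using valid and invalid inputs,
# with no knowledge of the internal branch structure.
def test_black_box():
    assert classify_age(30) == "adult"
    assert classify_age(-5) == "invalid"

# White-box: chosen by reading the code so that each branch is executed,
# including the boundaries between branches.
def test_white_box():
    assert classify_age(-1) == "invalid"   # first branch
    assert classify_age(17) == "minor"     # second branch, upper boundary
    assert classify_age(18) == "adult"     # fall-through, lower boundary

test_black_box()
test_white_box()
print("all tests passed")
```

Note how the white-box test targets the boundary values 17 and 18, which only a reader of the code (or a very thorough spec) would know to pick.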
Describe the pieces as they apply to the spectrum as the slide builds out.
Currently about 70% of functional testing is done manually: a software tester follows a script to execute a series of steps and verify the outcome of a test. Microsoft Test Manager 2010 is an example of a tool that someone doing manual testing would use to be more productive. Imagine if you could still get the benefits of functional testing without the overhead and resource cost of doing it manually.

Consider a regression test: a test designed to detect when previously fixed or previously working functionality breaks again. With a regression test (and even other functional tests) you are repeatedly testing something that was known to work at one point. These too are typically done manually, and that is a massive resource hit. Software testers spend countless hours testing functionality that works, solely to ensure it still works. Imagine how much more productive they could be if they could focus their efforts on creating and running new test cases that cover parts of the system not currently tested, instead of testing the same thing over and over again.
Here are two examples <Build> of the dashboard you get when running on top of one of the full versions of SharePoint Server. First, we have the Burndown dashboard showing project progress. <Build> Second, we have the Quality dashboard. So what does the quality dashboard tell us? <next>
As you can see, the Quality dashboard has four main graphs. <Build> First, you can see whether your test team is making progress on running test plans. <Build> Second, you can see how builds are doing over time. What’s the trend like? Are you having lots of successes or failures? <Build> Third, what’s your bug trend like? Are you closing out bugs, or are you stagnating? Or is the velocity of your bug filing far exceeding your team’s ability to fix, test, and close out bugs? <Build> Finally, are you seeing a bad trend in bug reactivations, that is, bugs that were closed but reopened by test as not fixed? All of this information is there for you in a quick, heads-up dashboard format.
Transcript of "Testing SharePoint solutions overview"
Testing SharePoint Solutions: Overview<br />Ervin Loh<br />Visual Studio ALM MVP<br />K365Labs Sdn Bhd<br />Light Up SharePoint<br />
Presenter<br />Ervin Loh<br />Profile<br />Ervin Loh is currently the Application Lifecycle Management Program Manager at K365Labs Sdn Bhd. He works on a variety of Application Lifecycle Management, Software Configuration Management, and Lab Management products. He is also active in the IT community space by contributing content to Microsoft Malaysia's START.NET and Ultimate program workshops and talks at conferences, events, and user groups. <br />
ASP.NET Developers, SharePoint Developers<br />Target Audience<br />
An Unfortunately Common Scenario<br />Developer writes code<br />Developer makes sure the code compiles<br />Developer checks in code<br />Repeat x15 developers for 4 weeks<br />Developer lead does a build of all code from developers<br />Application is installed in test environment<br />Tester tries to test the application<br />Application doesn’t work<br />Developer blames tester<br />
The Demonstration Scenario<br />The Configuration<br />A fully configured Visual Studio 2010, Team Foundation Server 2010, and SharePoint Server 2010 environment.<br />The Tasks<br />A lot of tests.<br />
Topics<br />Overview of the testing tools in Visual Studio 2010<br />Testing SharePoint projects with Visual Studio 2010<br />
Have you heard any of these?<br />“my testers are spending too long testing the same thing”<br />“tooling is expensive (time, licenses, people)”<br />“the developers say the defects are useless”<br />“when is my software ready to ship?”<br />“developers and testers work in silos and don’t communicate/speak the same language”<br />“we have a changed requirement, what do I need to test?”<br />
What if you could…<br />reduce the time it takes to determine the root cause of a bug<br />reduce the time it takes to replicate a bug uncovered by user actions<br />reduce the time it takes to isolate differences between the test and production environment<br />enable users to easily run acceptance tests and track their results<br />reduce the time it takes to verify the status of a reported bug fix<br />
Where does testing happen?<br />70% of testing happens here<br />Majority of test tools target here<br />Black Box Testing<br />White Box Testing<br />API Testing<br />
Visual Studio 2010 Test Capabilities<br />Load Test <br />Web Test <br />Coded UI Test <br />Microsoft Test Runner<br />Unit Testing <br />Test Case Management<br />Lab Management<br />Data Collectors (Historical Debugging, Test Impact, Metrics)<br />Team Foundation Server<br />Reporting<br />
Why is fixing bugs difficult?<br />Insufficient information to reproduce<br />Not every step is documented<br />Can’t see exactly how the bug was triggered<br />Different environments<br />OS, service packs, installed software,…<br />Creating a clean environment takes time<br />
Microsoft Test Manager 2010<br />Plan, Manage and Execute (manual) tests from one place<br />Create test cases<br />Build test suites<br />Define configurations<br />Run test cases<br />File bugs into TFS<br />Including captured data<br />Automate testing<br />Integrated with TFS<br />
VS Agents: Diagnostic Data Adapters<br />System Info<br />IntelliTrace™<br />Video Capture<br />Steps Performed<br />Visual Studio Agents 2010<br />Separate download<br />Test Controller & Test Agents<br />Capture data while testing<br />Save data with bug report<br />Easy for developer to reproduce bug<br />
Lab Manager 2010<br />Setup, Teardown, and Restore virtual environments<br />Maintain known state<br />Build automation<br />Build deployment<br />Test execution<br />Network isolation<br />Run multiple copies<br />Connect from VS2010<br />Linked to bug form<br />
70%<br />of all functional testing is still done manually.<br />
DemoFunctional Testing with Microsoft Test Manager<br />
Automated Testing for SharePoint<br /><ul><li>Web Performance Testing</li><li>Load Testing</li><li>Coded UI Testing</li><li>Unit Testing</li></ul>
Web Performance Testing<br />Simulate a single use case<br />Internet Explorer Web Testing toolbar<br />Records the user’s web requests<br />Important: This is not a UI test<br />Validation Rules<br />Extraction Rules<br />Parameterized Tests<br />Data-Driven Tests<br />
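In Visual Studio these rules are attached to a recorded .webtest: a validation rule asserts something about a response, while an extraction rule pulls a value out of one response for use in a later request. A rough, language-neutral sketch of the two ideas against a canned response body (the HTML and the two helper functions are invented for illustration; this is not the actual VS API):

```python
import re

# Canned response body standing in for a recorded SharePoint page.
response_body = '<html><input name="__VIEWSTATE" value="abc123" /><h1>Home</h1></html>'

def find_text_validation(body, expected):
    """Validation rule: a pass/fail check on the response (like a Find Text rule)."""
    return expected in body

def extract_hidden_field(body, field):
    """Extraction rule: pull a value (e.g. __VIEWSTATE) to feed the next request."""
    m = re.search(r'name="%s" value="([^"]*)"' % re.escape(field), body)
    return m.group(1) if m else None

assert find_text_validation(response_body, "Home")           # validation passes
viewstate = extract_hidden_field(response_body, "__VIEWSTATE")
print(viewstate)  # extracted value would be replayed in the next recorded request
```

Parameterized and data-driven tests then replace hard-coded values like the URL or form fields with bound data-source columns, so one recording covers many inputs.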
Load Testing<br />Stress-testing: simulate many users at once<br />Need multiple servers to create substantial load<br />Multiple agents generate load on the application<br />Single controller coordinates the agents and captures data<br />PerfMon-style output<br />
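The agent/controller split above can be sketched in miniature. In this hypothetical Python sketch (the real tooling is the Visual Studio Test Controller with Test Agents on separate machines), worker threads play the agents generating load and the main thread plays the controller aggregating timings:

```python
import threading
import time
import statistics

def fake_request():
    """Stand-in for one HTTP request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated server work
    return time.perf_counter() - start

results = []
lock = threading.Lock()

def agent(requests_per_agent):
    """One load agent: issues requests and reports timings to the controller."""
    for _ in range(requests_per_agent):
        elapsed = fake_request()
        with lock:
            results.append(elapsed)

# Controller: starts several agents in parallel, then aggregates the data.
agents = [threading.Thread(target=agent, args=(10,)) for _ in range(5)]
for t in agents:
    t.start()
for t in agents:
    t.join()

print("requests issued:", len(results))
print("avg latency (s): %.4f" % statistics.mean(results))
```

The point of multiple machines in the real setup is the same as the multiple threads here: a single client cannot generate enough concurrent load to stress a server farm, so load generation is fanned out and only the measurements are centralized.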
Coded UI Testing<br />Simulate a single use case<br />Uses a recording toolbar, like Web Performance test<br />Records the user’s interaction with the UI<br />Generates code to reproduce that interaction<br />Code can be edited and customized at will<br />Has some issues with SharePoint<br />e.g. no support for datasheets<br />Visual Studio Feature Pack 2 adds support for Silverlight<br />
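The record-then-generate step can be illustrated in miniature. In this invented sketch (the actual Coded UI Test recorder emits C# against a UIMap, not strings like these), recorded actions are captured as data and then turned into editable replay code:

```python
# Recorded UI actions, as a Coded UI-style recorder might capture them.
recorded_actions = [
    ("click", "NewItemButton"),
    ("type", "TitleTextBox", "Status report"),
    ("click", "SaveButton"),
]

def generate_replay_code(actions):
    """Emit editable source code that reproduces the recorded interaction."""
    lines = []
    for action in actions:
        if action[0] == "click":
            lines.append('ui.click("%s")' % action[1])
        elif action[0] == "type":
            lines.append('ui.type_text("%s", "%s")' % (action[1], action[2]))
    return "\n".join(lines)

print(generate_replay_code(recorded_actions))
```

Because the output is ordinary source code rather than an opaque recording, it can be refactored, parameterized, and kept under version control like any other test code, which is the key advantage over pure capture/replay tools.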
Unit Testing<br />Using built-in Unit Test with SharePoint has problems:<br />Issues with 32-bit vs 64-bit, and .NET 3.5 vs .NET 4.0<br />Need to stub/mock SharePoint API<br />e.g. SPSite, SPWeb, SPList<br />Additional tools enable unit testing for SharePoint projects<br />Microsoft Research: Pex & Moles<br />Integrates with VS2010, free to MSDN subscribers<br />http://research.microsoft.com/en-us/projects/pex/pexsharepoint.pdf<br />Third party: NUnit, TypeMock<br />
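The stub/mock problem is easiest to see in miniature. In this hypothetical Python sketch (the actual tools here are Moles or TypeMock generating stand-ins for SPSite/SPWeb in C#), the code under test receives a fake object shaped like the SharePoint object model, so it can run without a SharePoint server:

```python
from unittest.mock import MagicMock

def count_announcement_items(web):
    """Code under test: would normally receive a real SPWeb-like object."""
    sp_list = web.Lists["Announcements"]
    return sp_list.ItemCount

# Stub/mock standing in for the SharePoint object model.
fake_web = MagicMock()
fake_web.Lists.__getitem__.return_value.ItemCount = 3

assert count_announcement_items(fake_web) == 3
print("ran without a SharePoint server")
```

This is the essence of what Moles' `MSPWeb`-style stub types provide for the real API: the test controls exactly what `Lists["Announcements"]` returns, sidestepping the 64-bit hosting and server-dependency issues listed above.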
DemoAutomated Testing with Microsoft Visual Studio<br />
Are we ready to ship?<br />Are we making progress on running test plans?<br />How are our builds doing over time?<br />Are we fixing bugs?<br />What’s the quality of our bug fixes?<br />Ready to Ship?<br />