Introduction to Visual Studio Team System – Test Edition

Speaker notes:
  • This is a manual test case template tailored to fit our needs
  • Does not run client-side scripting
  • Record a simple Web Test
  • Record Results: time and performance data to be used in a Load Test
  • This error message indicates that the web test tried to extract a hidden field called VIEWSTATE from the previous response but was unable to. The current request then tried to use the value, and since it was not there, you received this error. So why is the previous request not flagged as an error? It was probably redirected to an error page but still returned an HTTP status of 200 OK, so although it did not reach the same page as it did during recording, it was marked as a success because of the 200 OK. The built-in Extract Hidden Fields rule does not know which fields it is extracting; it simply extracts all hidden fields on the page and puts them in the context. To fix the error: (1) In the Web Test Editor, find the request that has the RequestFailed error in its response, then look at its form post parameters to see how the hidden fields are named, for example $HIDDEN1.__VIEWSTATE; the value you need is the number after $HIDDEN, in this case 1. (2) Go to the previous request in your test. If it has an Extract Hidden Fields extraction rule and the rule's Context Parameter Name is 1, you have found the request that needs to be fixed. If there is no extraction rule, or its Context Parameter Name does not match the number after $HIDDEN, move up to the previous request and repeat until you find the match. The Context Parameter Name is shown in the property browser when you click on the extraction rule. (A coded sketch of this fix follows these notes.)
  • I recorded my web test and played it back. The first playback was successful, but when I come in the next morning and play back the test, it fails with the 500 error you see below. Why would it play back successfully right after recording and fail the next morning? Session variables typically stay alive for a certain amount of time, anywhere from 5 to 30 minutes or longer. So even though your test has hard-coded values, it works during the playback right after recording because the values are still active. By the time you play it back the following morning, your session has timed out, the values are no longer valid, and your requests start to fail. To find the dynamic parameter: (a) Go back to the playback UI and click on the first request in your web test; session-type variables often appear in the response of the first request. (b) Click on the Response tab, select the entire response, and paste it into an editor (Notepad, VS, etc.). (c) Search for the name of the parameter you think may be dynamic; in my example this is the ControlID parameter. If you find it, move on to the next step; if not, click on the next request and repeat from (b). (d) Check whether the value of the parameter differs from the value in your web test; if it does, this is a dynamic parameter that needs to be correlated. In my response page, the ControlID parameter differs from the one in my test, so we need to extract the new value and bind it to the hard-coded parameter. First, define an extraction rule to pull the new value out; an Extract Text rule usually works. I am using one with the following values: Context Parameter Name = ControlID, Ends With = &, HTMLDecode = True, Ignore Case = False, Index = 0, Required = True, Starts With = ControlID=, Use Regular Expression = False. The dynamic value is now extracted and added to the test context under the name ControlID. Next, change the hard-coded value in the failed request to use this context parameter: go back to the failed request, click on the dynamic parameter (ControlID in my example), bring up the property grid, and change the Value property to pull the ControlID value from the test context using the {{ParameterName}} notation. Since I named the parameter ControlID, I set the value to {{ControlID}}. (See the extract-text sketch under slide 34 below.)
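For reference, here is a minimal coded sketch of the Extract Hidden Fields fix described in the first note, following the pattern VSTS generates for coded web tests; the URLs and class name are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;
using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

public class ViewStateWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Hypothetical page that emits the hidden fields.
        WebTestRequest page = new WebTestRequest("http://server/app/page.aspx");

        // Extract every hidden field (including __VIEWSTATE) into the test
        // context under group "1", matching the $HIDDEN1 prefix in the error.
        ExtractHiddenFields extractHidden = new ExtractHiddenFields();
        extractHidden.Required = true;
        extractHidden.HtmlDecode = true;
        extractHidden.ContextParameterName = "1";
        page.ExtractValues += new EventHandler<ExtractionEventArgs>(extractHidden.Extract);
        yield return page;

        // The postback can now read the extracted value from the context.
        WebTestRequest postBack = new WebTestRequest("http://server/app/page.aspx");
        postBack.Method = "POST";
        FormPostHttpBody body = new FormPostHttpBody();
        body.FormPostParameters.Add("__VIEWSTATE",
            this.Context["$HIDDEN1.__VIEWSTATE"].ToString());
        postBack.Body = body;
        yield return postBack;
    }
}
```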
  • Transcript

    • 1. Testing Best Practices with VSTS. Shahidul Islam, MPH, QA Manager, Emerging Health Information Technology. 12/4/08
    • 2. Agenda
      • Visual Studio Test Edition for Software Testers (VSTEST)
        • Manual Tests
        • Web Tests
        • Load Tests
        • TFS Work Item Tracking
        • TFS Reporting
    • 3. Sources for Test Cases by Test Type VSTS Web Tests
    • 4. Test Plan
      • Documents project-specific strategy for all test types.
    • 5. VSTEST
      • Visual Studio Test Edition for Software Testers
      • Author Tests
      • Run Tests
      • Publish and Share the results
      • Reporting with TFS
    • 6. VSTEST Basics
      • Test Projects are Visual Studio Projects
        • Compile/deploy like any assembly in .NET
        • Visual Studio options under “Test Tools”
        • The solution should be stored within source control (like any other piece of code)
      • Tool set integrated in VS allows for easy adoption by developers.
      • Testers are more integrated with the development process
    • 7. Requirements Management
    • 8. Manual Tests
    • 9. VSTS Manual Tests—Excel
      • File>Properties
      • Custom
        • TestType: “Manual Test”
        • TestID: “Get a GUID from VSTS>Tools>Create a GUID”
        • Save As .mht file
    • 10. Web Testing
      • Uses a recorder for test creation:
        • Records web requests using Internet Explorer
        • Analyzes the HTML traffic; each response can be validated for expected behavior
        • Ordered series of HTTP requests against a target web application
      • Integrates with Visual Studio
      • Available only via VSTS Team Edition for Software Testers
      • Runs with Test Manager
      • Integrated with Team Build
    • 11. Web Testing
      • Web Tests are of two types:
        • Recorded Tests
          • Launch the browser
          • Record session steps
          • Playback recorded session
          • Convertible to coded tests (C# or VB) to handle complex scenarios and flow control
        • Coded Tests
          • Scripted
          • Typically adapted from recorded tests
          • Playback
      • Supports: ASP.NET, HTTP based, Web Page, Web Services
      • Compatible with HTTPS
    • 12. Web Test Demo
      • Demo, then proceed to slide 43
    • 13. Recording a Web Test
      • Record a browser session using the Web Test Recorder
      • Analyze the http requests being recorded
      • Can pause in between requests
      • Recording can be a good baseline for coded Web tests
    • 14. Recording a Web Test
      • Steps:
        • Test>New Test
        • Choose Web Test
        • Choose Default Visual C# test project
        • Create
      • A new Internet Explorer instance opens up
    • 15. Recording a Web Test
      • Two windows open:
        • A new Internet Explorer instance
        • Web Test Recorder on the left pane of the IE instance
      • Each request or page navigated is displayed
      • Options available to stop or pause recording
      • Can add comments during recording
    • 16. Recording a Web Test
      • Once stopped, recorded test can be run from Test Window
    • 17. Editing a Web Test
      • Web Test Editor available once recording is stopped
      • Features:
        • Toolbar for basic commands
        • Expandable Request Tree view of requests in the execution order
          • Can expand to see Query String parameters or Form Post parameters
        • Properties Window
          • Properties and Settings of the Web Test
    • 18. Editing a Web Test
      • Adding a New Request:
        • Right click on request to add new request
      • Clean up unnecessary recorded steps
        • Remove extraneous or wrong user actions
      • Modify the order of requests
        • Be conscious of dependent requests
    • 19. Editing A Web Test: Properties
      • Properties:
        • Description: Text for describing the test
        • Pre-authenticate:
          • True: Every request includes authentication header (default)
          • False: Authentication header sent when requested
        • User Name and Password: Values set using the Set Credentials button on Web Test Editor for sites using
          • Basic authentication or
          • Integrated Windows authentication
          • Credentials can be edited directly on properties window or
          • Can bind to data source for multiple runs with different credentials – (recommended)
    • 20. Editing A Web Test: Requests
      • Requests: HTTP request steps which are executed in given order and whose response is processed
      • Request Tree in Web Test Editor consolidates child requests to be executed in order for main request
      • Properties of a Request:
        • Cache Control
        • Encoding
        • Follow Redirects
        • Method
        • Parse Dependent Request
        • Record Results
        • Response Time Goal
        • Think Time
        • Timeout
        • URL
        • Version
    • 21. Editing A Web Test: Requests
      • Cache Control: True/False directive to simulate browser caching
      • Encoding: Text format for requests - usually UTF-8 for HTTP traffic
      • Follow Redirects: True/False to honor status codes 300-307 (redirect) to final page or stop and treat the redirect as final page
      • Method: GET or POST
      • Parse Dependent Request: True/False to process dependent requests such as loading image files. True will get standard behavior (load images)
    • 22. Editing A Web Test: Requests
      • Record Results: Used only with Load Testing to determine if timing and performance data need to be included
      • Response Time Goal: Used with Load Test reporting as a target (in seconds) for how long the request should take to process
      • Think Time: Delay (in seconds) to pause on page before processing next request
      • Timeout: Maximum time (in seconds) to wait for response before timing out and logging a failure
      • URL: URL of the request
      • Version: Version of HTTP to use (default 1.1)
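As a rough illustration, these same properties surface directly on WebTestRequest in a coded web test. A fragment (URL and values are made up) that would sit inside a coded web test's GetRequestEnumerator method:

```csharp
// Assumes: using Microsoft.VisualStudio.TestTools.WebTesting;
WebTestRequest request = new WebTestRequest("http://server/app/default.aspx");
request.Method = "GET";                    // GET or POST
request.Cache = true;                      // simulate browser caching
request.Encoding = System.Text.Encoding.UTF8;
request.FollowRedirects = true;            // honor 300-307 redirects
request.ParseDependentRequests = true;     // fetch images and other dependents
request.RecordResult = true;               // keep timing data for Load Tests
request.ResponseTimeGoal = 3.0;            // seconds, for Load Test reporting
request.ThinkTime = 5;                     // seconds to pause before next request
request.Timeout = 60;                      // seconds before the request fails
yield return request;
```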
    • 23. Editing a Web Test: Request Sub Items
      • Request Sub-items: Simple requests can be enhanced with sub-items like validation rules etc.
        • Dependent Requests:
          • A normal HTTP request which has a parent request
          • Example: Load image on a page
          • Handled by Parse Dependent Requests property
          • Can have own dependent requests
          • Can be added explicitly – example, extraction or validate rule
    • 24. Editing a Web Test: Request Sub Items
      • Query String Parameters:
        • Name=Value pairs set in the Property window
        • Show Separate Request Results:
          • Used to group results in Load Testing
          • Result data is grouped if False (default); if True, displays separate grouping for data (response time etc)
        • URL Encode: Determines if Name/Value will be URL encoded
      • Form Post Parameters: If the request method is POST, enables two form POST parameters:
        • Common Form Post Parameter
        • File Upload Parameter
      • Headers: Name=Value pair to be included in HTTP header. Ex: Referer=www.microsoft.com
      • Validation Rules: Add validation rules to verify if response from the request is expected one
      • Extraction Rules: Capture data from response
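A sketch of how these sub-items look in a coded web test (names, values, and URLs are illustrative):

```csharp
// Assumes: using Microsoft.VisualStudio.TestTools.WebTesting;
// GET request with a query string parameter and a custom header.
WebTestRequest search = new WebTestRequest("http://server/app/search.aspx");
// Arguments: name, value, urlEncode, showSeparateRequestResult
search.QueryStringParameters.Add("q", "vsts", false, false);
search.Headers.Add(new WebTestRequestHeader("Referer", "http://www.microsoft.com"));
yield return search;

// POST request with a common form post parameter.
WebTestRequest login = new WebTestRequest("http://server/app/login.aspx");
login.Method = "POST";
FormPostHttpBody loginBody = new FormPostHttpBody();
loginBody.FormPostParameters.Add("username", "testuser");
login.Body = loginBody;
yield return login;
```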
    • 25. Editing a Web Test : Transactions
      • Allow grouping of requests as a unit to
        • Document
        • Move/Copy
        • Capture timing information
      • To insert:
        • Right click at the request where the transaction is to be inserted
        • Complete the Add Transaction dialog
      • Requests can be moved between transactions using drag and drop.
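In a coded web test, the equivalent grouping uses BeginTransaction/EndTransaction; a minimal sketch with a hypothetical transaction name and URLs:

```csharp
// Inside a coded web test's GetRequestEnumerator method:
this.BeginTransaction("Login");
yield return new WebTestRequest("http://server/app/login.aspx");
yield return new WebTestRequest("http://server/app/home.aspx");
this.EndTransaction("Login"); // timing for both requests is reported as one unit
```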
    • 26. Editing a Web Test: Comments, Context Parameters
      • Comments: Tests can be documented with comments
      • Context Parameters:
        • Replace static values with dynamic values for property, value or parameter
        • Variables expand to real values at runtime to make the test flexible and dynamic
        • Usually done through Parameterizing Web Servers
    • 27. Editing a Web Test: Parameterizing Web Servers
      • Select web test, and right-click
      • Select Parameterize Web Servers…
        • Allows changing URLs from static to context-parameterized variables to re-target the test
        • Parameterize Web Servers dialog lists web servers used with a default Context Parameter Name
        • Use the Change button to switch between
          • Web Server
          • Local ASP.NET Development Web Server
    • 28. Editing a Web Test: Parameterizing Web Servers
      • After changing, the Context Parameters node appears in the web test
      • Values can be edited via right-click>Properties on the newly added parameter
        • parametername=value
      • Context Parameter can then be used in requests using
        • {{..}} notation
      • Example: {{WebServer6}}/live/search/search.xml
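In a coded web test, the same substitution reads the context parameter directly; a short sketch using the WebServer6 parameter from the example above:

```csharp
// Inside a coded web test's GetRequestEnumerator method:
// equivalent of {{WebServer6}}/live/search/search.xml
WebTestRequest request = new WebTestRequest(
    this.Context["WebServer6"].ToString() + "/live/search/search.xml");
yield return request;
```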
    • 29. Editing Web Tests: Extraction Rules
      • Data from POST/GET requests can be extracted for
        • Subsequent requests
        • Providing debugging information
      • Extraction rules verify that a Web application is working correctly by
        • extracting data from the responses to Web requests
        • storing results in the test context as name value pairs.
      • Extraction rules can extract
        • Form fields
        • Text
        • Attributes
        • Headers
        • Regular expressions
        • Hidden fields
    • 30. Editing Web Tests: Extraction Rules
      • To Extract, right-click on a request and choose Add Extraction Rule
      • Extraction rules define what data is to be extracted from an HTTP response and store the data in context parameters
      • Dynamic data generated via requests can be passed down to later requests.
    • 31. Editing Web Tests: Extraction Rules
    • 32. Editing Web Tests: Extraction Rules
      • To add Extraction Rule:
        • Right-click on request
        • Choose Add Extraction Rule
        • Modify/add parameters
        • Set Context Parameter to a unique name
    • 33. Annoying Error Messages!!
      • RequestFailed: Context parameter '$HIDDEN1.__VIEWSTATE' not found in test context.
      • Add Extract Hidden Fields
    • 34. Annoying Error Messages!!
      • Web Test ran flawlessly right after recording, but fails the next morning?
      • 500 Server Error
      • Search for parameters you think may be dynamic (Session Variable)
      • Add Extract Text Rule
      • Example: Context Parameter Name = ControlID, Ends With = &, HTMLDecode = True, Ignore Case = False, Index = 0, Required = True, Starts With = ControlID=, Use Regular Expression = False.
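Expressed in code, the Extract Text rule above, plus the {{ControlID}} binding described in the speaker notes, looks roughly like this (URLs are hypothetical; the rule values mirror the slide):

```csharp
// Assumes: using System;
//          using Microsoft.VisualStudio.TestTools.WebTesting;
//          using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

// Attach the Extract Text rule to the request whose response carries ControlID.
WebTestRequest first = new WebTestRequest("http://server/app/default.aspx");
ExtractText extractControlId = new ExtractText();
extractControlId.ContextParameterName = "ControlID";
extractControlId.StartsWith = "ControlID=";
extractControlId.EndsWith = "&";
extractControlId.HtmlDecode = true;
extractControlId.IgnoreCase = false;
extractControlId.Index = 0;
extractControlId.Required = true;
first.ExtractValues += new EventHandler<ExtractionEventArgs>(extractControlId.Extract);
yield return first;

// A later request binds the extracted value instead of the hard-coded one,
// the coded equivalent of setting the parameter value to {{ControlID}}.
WebTestRequest next = new WebTestRequest("http://server/app/page.aspx");
next.QueryStringParameters.Add("ControlID",
    this.Context["ControlID"].ToString(), false, false);
yield return next;
```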
    • 35. Editing Web Tests: Validation Rules
      • Validation rules validate the existence of
        • Text
        • Tags
        • Attributes
      • on the page returned by a request.
      • Validation rules also verify the amount of time it takes a request to finish, and the existence of form fields and their values.
    • 36. Editing Web Tests: Validation Rules
    • 37. Editing Web Tests: Validation Rules
      • To add a validation rule:
        • Select a request
        • Right-click
        • Add Validation rules in dialog
      • Validation Level:
        • Used only during a load test
        • Higher validation levels affect speed, hence slower tests
        • Does not mean priority
        • Determines when the validation rule is used in a load test.
        • Setting to High means that the rule is executed only when the load test validation level is set to high.
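For reference, a minimal coded sketch of a common validation rule, ValidationRuleFindText, attached to a request (the expected text and URL are hypothetical):

```csharp
// Assumes: using System;
//          using Microsoft.VisualStudio.TestTools.WebTesting;
//          using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
WebTestRequest request = new WebTestRequest("http://server/app/default.aspx");
ValidationRuleFindText findText = new ValidationRuleFindText();
findText.FindText = "Welcome";        // hypothetical expected text
findText.IgnoreCase = true;
findText.UseRegularExpression = false;
findText.PassIfTextFound = true;      // fail the request if the text is missing
request.ValidateResponse += new EventHandler<ValidationEventArgs>(findText.Validate);
yield return request;
```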
    • 39. Running Web Tests
      • Settings can be configured from .testrunconfig file
      • Choose Web Test from dialog:
        • Number of iterations
        • Browser Type
        • Network Type
        • Simulate Think Times
      • These settings are overridden when run under a Load Test
    • 40. Running Web Tests
      • From the tool bar:
        • Run Test
      • From the command line
        • Invoke MSTEST
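A typical invocation might look like the following; the file names are placeholders, and the flags shown (/testcontainer, /runconfig, /resultsfile) are standard MSTEST options:

```
mstest /testcontainer:WebTest1.webtest /runconfig:localtestrun.testrunconfig /resultsfile:results.trx
```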
    • 41. Interpreting Web Test Results
      • Results Viewer:
        • Results Panel:
          • Request: The full URL requested
          • HTTP Status: HTTP status returned from the server – typically 200 OK
          • Response Time: Time web server took to respond
          • Size: Size of response
    • 42. Interpreting Web Test Results
      • Details Panel:
        • Web Browser Tab: Response rendered as browser page
        • Request: Contents of the selected request
        • Response: Contents of the response to selected request
        • Context: Current context parameters, useful for debugging extraction rules
        • Details: Results of extraction and validation rules for selected request.
    • 43. Coded Web Tests
      • Written in C# or Visual Basic
      • Incorporates flow of control as opposed to sequential recorded tests
      • Use Recorded tests for baseline and then convert:
        • Select web test
        • Right-click
        • Generate Code
    • 44. Why Coded Web Tests?
      • Can read expected results from a data source
      • Flexibility of conditionals and looping
      • Copy sets of requests from one test to another
      • Factor out sections of requests and other operation into methods that any test can call
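A minimal sketch of such a coded web test, showing the flow control that a recorded test cannot express (class name, URLs, and loop bounds are illustrative):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class SearchCodedWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        yield return new WebTestRequest("http://server/app/default.aspx");

        // Conditionals and loops are only possible in a coded test.
        if (this.LastResponse != null
            && this.LastResponse.StatusCode == System.Net.HttpStatusCode.OK)
        {
            for (int i = 0; i < 3; i++)
            {
                yield return new WebTestRequest(
                    "http://server/app/search.aspx?page=" + i);
            }
        }
    }
}
```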
    • 45. AJAX in Web Tests
      • VSTS 2008 now records AJAX requests
      • Freeware Fiddler need not be used, although it’s still useful for diagnosing
        • See: http://www.fiddlertool.com/fiddler/
    • 46. Test Case Management
      • Test List Editor
        • New Test List per release/project/cycle
        • Properties>Associated Work Items
      • Run Test
        • Run one or multiple tests
        • Debug tests
    • 47. Test Results Window
      • Run/Debug
      • Pause/Stop Run
      • Import/Export Results
      • Group results by
    • 48. LOAD TESTS
      • Step 1: Identify Objectives.
      • Step 2: Identify Key Scenarios.
      • Step 3: Identify Workload.
      • Step 4: Identify Metrics.
      • Step 5: Create Test Cases.
      • Step 6: Simulate Load.
      • Step 7: Analyze the Results.
    • 49. Understanding Load Test
      • Primary goal of a Load Test is to simulate many users accessing a server at the same time
      • By adding Web Tests to a Load Test, you can:
        • Simulate multiple users opening simultaneous connections to a server
        • Make multiple HTTP requests
        • Set other properties that apply to individual Web Tests
    • 50. Understanding Load Test Continues..
      • Load Tests are used in several different types of testing:
      • Smoke: How your application performs under light loads for a short duration
      • Stress: To determine if the application will run successfully for a sustained duration under heavy load
      • Performance: To determine how responsive your application is
      • Capacity: To determine how many users and/or transactions a given system will support and still meet performance goals
    • 51. Identify Objectives
      • Write down the performance objectives for your application.
        • Response time. For example, the product catalog must be displayed in less than 3 seconds.
        • Throughput. For example, the system must support 100 requests per second.
        • Resource utilization. A frequently overlooked aspect is how much resource your application consumes, in terms of CPU, memory, disk I/O, and network I/O.
        • Maximum user load. Determine the maximum number of users that can run on a specific hardware configuration.
    • 52. Identify Key Scenarios
      • What are scenarios?
        • Scenarios are anticipated user paths that generally incorporate multiple application activities.
      • How do you identify scenarios?
        • Key scenarios are those for which you have specific performance goals or those that have a significant performance impact.
      • For example:
        • Login to the site.
        • Browse albums.
        • Register to the site.
    • 53. Identify the workload
      • Identify the distribution
        • IIS logs (application already exists)
        • Research (application does not yet exist)
      • Identify the user loads
        • What is the maximum number of expected concurrent users?
    • 54. Identify Metrics
      • Metrics provide information about how close your application is to your performance objectives.
      • Metrics help you identify problem areas and bottlenecks.
      • Network-specific metrics.
      • System-related metrics.
      • Platform-specific metrics.
      • Application-specific metrics.
    • 55. Create Load Test Cases
      • What is a Load test case?
        • A group of activities involved in a scenario.
        • For example: Place an Order.
      • The test cases are created based on the scenarios and the profile mix identified in the previous steps.
      • Each test case should mention the expected results in such a way that each test case can be marked as a pass or fail after execution.
    • 56. Performance Test Cases
      • Load Test Case example:
      • Expected Results example:
    • 57. VS Performance Test
      • Load patterns and real-world test analogy:
        • Constant pattern → Load test, Stress test
        • Step pattern → Compatibility test, Performance test, Benchmarks
        • Goal-based pattern → Stress test, Capacity test
    • 58. Load Test Life Cycle
      • See: http://blogs.vertigosoftware.com/teamsystem/archive/2006/02/06/VSTS_Load_Testing_Lab_Setup.aspx
    • 59. Demo
      • Add a blank web test
      • Call web test from another web test (Login and LinkToLos)
      • Data binding-csv-LoginPage
      • Run WebTest once then edit run settings to run per row
      • Validation rule passing
      • Show Load Test setup and run
      • Analyze LoadTest Result
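For the csv data-binding step of the demo, a coded web test declares the data source with attributes and reads bound values from the test context. The sketch below assumes a logins.csv file with a username column; the file, class, and data source names are hypothetical, and the exact DataSource attribute overload varies between VSTS versions:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Attribute parameters follow the VSTS 2008 generated-code pattern (assumption).
[DataSource("LoginDS", "Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\logins.csv",
    Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
    "logins#csv")]
[DataBinding("LoginDS", "logins#csv", "username", "LoginDS.logins#csv.username")]
public class LoginWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        WebTestRequest login = new WebTestRequest("http://server/app/login.aspx");
        login.Method = "POST";
        FormPostHttpBody body = new FormPostHttpBody();
        // Bound values are exposed in the context as DataSource.Table.Column.
        body.FormPostParameters.Add("username",
            this.Context["LoginDS.logins#csv.username"].ToString());
        login.Body = body;
        yield return login;
    }
}
```

Run settings control whether the test runs once or once per data row, matching the "edit run settings to run per row" demo step.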
    • 60. Control Load Modeling
      • Load Tests now offer additional load-modeling options that enable us to create load tests that more accurately model the expected real-world usage of an application:
      • Number of tests run
        • Determines which test is run when a virtual user starts an iteration
        • Follow this model when you are basing test mix on transaction percentages in an IIS log or production data
    • 61. Control Load Modeling continues..
      • The amount of time spent on each test
        • Determines the percentage of virtual users who will run a particular test
        • Follow this model when you are basing the test mix on the percentage of users running a particular test
      • The pace/speed at which the users run the tests
        • Each test is run the specified number of times per user per hour
        • Follow this model when you want virtual users to run tests at a certain pace throughout the load test
    • 62. How should the test mix be modeled?
    • 63. Improved Load Test Analyzer Views
      • New Summary View with key indicators in a single page
      • Four new built-in graphs display key information at the same time
      • Easily accessed table view
    • 64. Load Tests Summary
      • Test Run Information
        • Name, Description, Start/End times, Controller
      • Overall Results
        • Number of requests/sec
        • Number of failed requests
        • Average response time
        • Average page time
    • 65. Load Test Summary Continues…
      • Key Statistics: Top 5 Slowest Pages
        • URL (click the link to see details) and average page load time
      • Key Statistics: Top 5 Slowest Tests
        • Test name (click the link to see details) and average test time
      • Key Statistics: Top 5 Slowest SQL Operations
        • Name of the operation (click the link to see details) and duration in microseconds
    • 66. Load Tests Summary Continues..
      • Test Results
        • List of all tests and scenarios
        • Number of times each scenario ran
        • Number of times it failed
        • Average test time
    • 67. Load Tests Summary Continues..
      • Page Results
        • List of all web pages in the load test
        • Total count of requests made for the web page
        • Average page time
    • 68. Load Test Summary Continues..
      • Transaction Results
        • NOT A DATABASE TRANSACTION
        • Web Test Transaction
        • List of all the transactions in the Load Test
        • Response Time
        • Elapsed Time
        • Total Counts
    • 69. Load Test Summary Continues…
      • Controller and Agent Resources
        • Contains list of computers that are used to run the test
        • % processor time
        • Available memory at the end of test run
    • 70. Load Test Summary Continues…
      • System Under Test Resources
        • List of computers that are the set of target computers for which load is being generated
        • Includes any computer from which you collect counter sets other than Agent or Controller.
    • 71. Load Test Summary Continues…
      • Errors
        • List of all errors
          • Type, e.g. HttpError
          • Subtype, e.g. LoadTestException
        • Error count
          • Number of each error type
        • Error message, e.g. 404 Not Found
    • 72.  
    • 73. Graphs View
    • 74. Table View
      • Errors: Displays a list of errors that occurred during the load test run
      • Pages: Displays a list of pages accessed during a load test run
      • Requests: Displays details for individual requests issued during a load test
      • SQL Trace: Displays the results of SQL tracing, only if SQL tracing was enabled with the right connection string
      • Tests: Displays details for individual tests run during a load test
      • Thresholds: Displays a list of threshold rule violations that occurred during a load test
      • Transactions: Displays a list of transactions that occurred during a load test run
    • 75. Load Test Result Repository Management
      • Information gathered during a Load Test run is stored in the Result Repository
      • Repository contains performance counter data and error data
      • You can manage your load test results from the Load Test Editor
        • Open
        • Import
        • Export
        • Remove
        • You do not have to go to the database to clean up the repository
    • 76. Defects Tracking Using TFS
    • 77. Defects Tracking Using TFS Continues..
    • 78. Reporting Using TFS—Remaining Work Items
    • 79. Reporting Using TFS—Bug Rates
    • 80. References
      • http://blogs.msdn.com/slumley/pages/how-to-debug-a-web-test.aspx
      • http://blogs.msdn.com/slumley/pages/web-test-correlation-helper-feature-in-orcas.aspx
      • http://msdn.microsoft.com/en-us/library/aa730850(vs.80).aspx