Performance Testing

October Speaking Engagement – Indy TFS

I’ll be presenting Web performance and load testing in Visual Studio 2010 to the Indy TFS user group on October 5, 2011.  In this talk we’ll explore some of the basic test management capabilities in Visual Studio 2010 before diving into building and executing both Web performance and load tests.  Some areas we’ll examine include:

  • Recording tests
  • Parameterizing tests
  • Extraction rules
  • Validation rules
  • Data binding
  • Load test scenarios

Location

Microsoft Office
500 East 96th St
Suite 460
Indianapolis, IN 46240
[Map]

Doors open at 5:30 PM with the meeting starting at 6:00.  Pizza and drinks will be provided.

Register at https://www.clicktoattend.com/invitation.aspx?code=157231.

I hope to see you there!

Web Performance & Load Testing in Visual Studio 2010

Visual Studio 2010 Premium and Ultimate include tools for authoring and executing a variety of test types.  Out of the box we get support for several types of tests:

Manual tests require a person to manually perform steps according to a script.  Note that manual tests are actually TFS Work Items rather than code files within a test project.

Unit tests are low-level coded tests meant to exercise small areas of code (units) to ensure that the code functions as the developer expects (a minimal example appears just after this list).

Database unit tests are intended to exercise database objects such as triggers, functions, and stored procedures.

Coded UI tests programmatically interact with the user interface components of an application to ensure proper functionality.

Web performance tests are intended to verify the functionality or performance of Web applications.

Load tests verify that an application will still perform adequately while under stress.

Generic tests allow calling tests written for another testing solution such as NUnit.

Ordered tests are a special kind of test that contain other tests that must be executed in a specific sequence.
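
As a quick illustration of the unit test type described above, here is a minimal MSTest sketch; the class, method, and values are hypothetical:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class MathTests
    {
        [TestMethod]
        public void Add_TwoNumbers_ReturnsSum()
        {
            // Exercise a small, isolated unit of logic...
            int sum = 2 + 3;

            // ...and verify the result matches the developer's expectation.
            Assert.AreEqual(5, sum);
        }
    }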

Web performance and load tests are two of the more interesting types and are the focus of this post.  Before diving into them though we should examine how Visual Studio manages tests.

Managing Tests

This section provides a high-level overview of Visual Studio’s test management capabilities.  Test projects, the test view window, test categories, running tests, and working with test results will all be introduced.

Test Projects

One thing that all test types have in common is that they must be part of a test project.  Test projects are language-specific (C#, VB) and are created by selecting the corresponding “Test Project” template from the Add New Project dialog box.

There are no restrictions on the mix of tests contained within a test project.  It is perfectly acceptable to have unit tests, Web performance tests, and generic tests within a single test project.  It is also acceptable for a solution to have multiple test projects.

Note: Adding a test project to a solution will also add a few items to the solution’s Solution Items folder.  Two of the items are .testsettings files used to control the behavior of tests under different execution scenarios.  The settings used during a test run are determined by the item selected in the Test/Select Active Test Settings menu.  The third item is a .vsmdi file used to manage test lists.

The Test Project template creates an empty unit test by default.  This file can safely be deleted.  If you want to customize what is included in new test projects, change the settings in the Test Tools/Test Project section of the Visual Studio options.

Adding new tests to a test project is just like adding new classes to any of the standard project types.  Items can be added from either the Project menu or the test project’s context menu.  Both methods allow adding each of the test types described above (except manual) to the test project.

Test View Window

The test view window is the primary interface for managing tests within a solution.  From the test view window we can see a listing of all tests in a solution, filter the list to see only tests within a specific project or category, edit test properties, and select tests to run.

The test view window can be opened by selecting Test View from the Test/Windows menu.

Test Categories

As the number of tests in a solution grows it may become necessary to organize them.  Test categories are a convenient way to add some order to the chaos.  To categorize a test, right click on it and select Properties.  Locate the “Test Categories” item and click the ellipsis button to open the Test Category editor.  Once inside the Test Category editor, existing categories can be selected or new categories can be created and selected.

In addition to helping organize tests, categories can also be used to filter which tests are included in a given run.  The test view window includes a select box named “Filter Column” that includes an option for “Test Categories.”  Selecting this option and entering a category name in the Filter Text text box is an easy way to display only the tests assigned to a particular category.

Note: The test view window can also be used to manage test lists.  In Visual Studio 2010 test lists have been deprecated and are only used by TFS check-in policies.

Running Tests

The test view window makes selecting tests to run very easy.  To select a single test just click it.  Multiple tests can be selected with the standard Ctrl+click and Shift+click combinations.  By default every test in the solution is included in the test view window.  The list can be grouped and filtered on a variety of criteria including project and category.

Grouping is a matter of selecting the desired field from the Group By select box.  Filtering is only slightly more involved: optionally select a field from the Filter Column select box, then enter criteria in the Filter Text text box.

Once the desired tests are displayed and selected click the Run Selection button from the test view window’s toolbar.  When a test run starts the test results window will be displayed.

Note: By no means is the test view window the only way to start a test run.  Most test types provide their own run button in their editor.  Unit tests can even be run individually by test method, test class, or namespace.

Test Results Window

Test run status is shown in the test results window.  This window is displayed as soon as a test run is started and is updated as the run progresses.

The test results window displays both summary and test specific information.  The summary line shows overall status of the run along with the number of tests that passed.  Each of the tests included in the test run appears as a row in the results window.  Individual test status is identified graphically as pending, in progress, failed, inconclusive, or aborted in the result column.  Double clicking any test will display the details of the test result.

Web Performance Tests

Web performance tests use manually written or recorded scripts to verify that specific functionality of a Web application is working and performing correctly.  For instance, we can record a test script to ensure that a user can log in to the application.  Web performance tests are also composable, so over time we can build a library of reusable scripts to combine in different ways to test different, but related, activities.

In this section we’ll walk through the process of creating Web performance tests.  We’ll also examine parameterization, changing request properties, comments, extraction rules, validation rules, data binding, and how to call other Web performance tests.

Creating Web Performance Tests

There are several ways to create Web performance tests:

  1. Recording browser interactions
  2. Manually building tests using the Web Test Editor
  3. Writing code to create a coded Web performance test

By far the easiest method is recording browser interactions.  Recorded scripts can be customized easily.  They can also be used to generate coded Web performance tests.  It is generally recommended to start with a recorded test, so coded Web performance tests won’t be covered in depth here.

There are two main ways to record Web performance tests.  Visual Studio includes a built-in Web test recorder but we can also use Fiddler to record sessions and export them to a Visual Studio Web test.

Recording in Visual Studio is generally easier as it automatically performs many tasks such as extracting dynamic parameters, setting up validation rules, and grouping dependency requests, whereas Fiddler offers more control over what is recorded.  Regardless of which method is used, there will likely be some clean-up work when recording is complete.

Note: Recorded or manually built (non-coded) Web performance tests are XML files stored with an extension of .webtest.

Recording in Visual Studio

Invoking the Web test recorder in Visual Studio is a matter of adding a new Web performance test to the test project.  Visual Studio will automatically start an instance of Internet Explorer with a docked instance of the Web Test Recorder.

  1. Right click the test project
  2. Select Add –> Web Performance Test.
  3. Manually perform the steps (navigation) to include in the script
  4. When finished, click the stop recording button
  5. Wait for Visual Studio to detect dynamic parameters that can be promoted to Web test parameters, then review its findings and select each value that should be promoted.

Exporting Sessions from Fiddler

A full discussion of Fiddler is beyond the scope of this document but it is important to look at its filtering capability to see how we can restrict what it captures down to what is relevant to our test.

The first option we’ll typically set is to Show only Internet Explorer traffic.  Selecting this option will prevent traffic from other browsers or software making HTTP requests from showing in the sessions list.

Another option that is often useful is to change the value of the select box under Response Type and Size to “Show only HTML.”  This option will prevent the session list from showing any dependency requests for additional resources like images or script files.

Note: Fiddler doesn’t record requests targeting localhost.  There are workarounds, but it is generally easier to target the local machine by its name instead of localhost and parameterize the server name later.

  1. Verify that any desired filters are set
  2. Manually perform the steps (navigation) to include in the script
  3. Select the appropriate sessions
  4. File –> Export Sessions –> Selected Sessions…
  5. Select “Visual Studio Web Test”
  6. Click “Next”
  7. Browse to the target location, typically the test project folder
  8. Name the file
  9. Deselect all Plugins
  10. Include the newly created .webtest file in the test project

Parameterization

Most of the time a recorded script is only the starting point for constructing a Web performance test.  In their initial form recorded tests aren’t very flexible.  Luckily we can use parameters to change that.

Whenever a parameter value is used in a request the name appears in a tokenized format.  For example, if we have a context parameter named username and we want to reference it in a request we would use {{username}}.

Context Parameters

Context parameters are essentially variables within a Web performance test.  Each step within a test can read from and write to context parameters making it possible to pass dynamic information such as new IDs from step to step.
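
To make this concrete, the following minimal coded sketch shows one step writing to the context and a later step reading from it; the URL and parameter names are hypothetical, and declarative tests accomplish the same thing with {{token}} references:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class ContextParameterSketch : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // An earlier step (or test setup) writes a value into the context...
            this.Context["username"] = "testuser";

            // ...and a later step reads it back when building its request.
            WebTestRequest request = new WebTestRequest(
                "http://localhost/account?user=" + this.Context["username"]);
            yield return request;
        }
    }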

Parameterizing Web Servers

When a test is recorded the original server name is embedded in each request.  Although it is common to run tests against localhost it is often necessary to run the tests against a different server.  With the server name embedded in each request we cannot satisfy this need.  Fortunately, Visual Studio provides the “Parameterize Web Servers” operation that will automatically extract the protocol and server name from each request and replace it with a context parameter.

To parameterize the Web server for each request:

  1. Click the “Parameterize Web Servers” toolbar button
  2. Verify the server name(s)
  3. Click OK

When the dialog closes each request will be updated to reflect the context parameter(s).  You should also notice that any new parameters have been added to the test’s Context Parameters folder.

Changing the server is simply a matter of changing the value of the corresponding context parameter using the properties window.  Alternatively, clicking the “Parameterize Web Servers” button again will open the “Parameterize Web Servers” dialog.  From this dialog click the Change button and enter the appropriate URL.

It is possible to parameterize additional parts of the request URL, but there is no direct support for this in Visual Studio.  Should the need arise, it is generally easier to manually edit the XML, adding the context parameter to the context parameters element and replacing the portion of the URL being parameterized with the context parameter token.
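
As a rough sketch of that edit (server, application, and parameter names are hypothetical, and the real .webtest schema includes additional attributes that are trimmed here), the token replaces the hard-coded URL segment and a matching entry is added to the context parameters element:

    <WebTest Name="SearchTest" Version="1.0">
      <Items>
        <!-- "MyApp" was hard-coded here before the edit -->
        <Request Method="GET" Url="{{WebServer1}}/{{AppPath}}/search.aspx" />
      </Items>
      <ContextParameters>
        <ContextParameter Name="WebServer1" Value="http://localhost" />
        <!-- Added by hand to back the new {{AppPath}} token -->
        <ContextParameter Name="AppPath" Value="MyApp" />
      </ContextParameters>
    </WebTest>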

Setting Request Properties

One of the benefits of recording Web performance tests is that the values of most of the properties for each request are generated by the recording process.  For the most part, modifying these values after the test is recorded isn’t necessary, but occasionally we need to tweak something.

Some common scenarios for when property values might need to be adjusted are:

  • Needing to check response time
  • Needing to adjust think time to more closely reflect reality
  • Correcting the Expected Response URL when a redirect is encountered

Note: Think times are intended to simulate the amount of time a real user would spend “thinking” before proceeding to the next step.  They are disabled by default and are most useful in load tests.

To access a request’s properties right click the request and select the “Properties” option.

Comments

Comments allow documenting portions of a Web performance test.  They can serve as helpful reminders of what a particular portion of a script is doing.

Extraction Rules

As their name implies, extraction rules are used to extract values from a response.  Context parameters are used to store the extracted value for later reference.

Adding an extraction rule to a request is a matter of selecting the “Add Extraction Rule…” option from a request’s context menu.  There are eight built-in extraction rule types to choose from:

  • Selected Option
  • Tag Inner Text
  • Extract Attribute Value
  • Extract Form Field
  • Extract HTTP Header
  • Extract Regular Expression
  • Extract Text
  • Extract Hidden Field

You should be aware that extraction rules can have a negative impact on test performance.  To mitigate this problem, use extraction rules judiciously.
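
For reference, this is roughly the pattern Visual Studio generates when a coded test attaches a built-in extraction rule; the URL is hypothetical:

    using System;
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

    public class ExtractionRuleSketch : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            WebTestRequest request = new WebTestRequest("http://localhost/form.aspx");

            // Extracted hidden fields are stored as context parameters that
            // later requests can reference (e.g. {{$HIDDEN1.__VIEWSTATE}}).
            ExtractHiddenFields extractRule = new ExtractHiddenFields();
            extractRule.Required = true;
            extractRule.HtmlDecode = true;
            extractRule.ContextParameterName = "1";
            request.ExtractValues += new EventHandler<ExtractionEventArgs>(extractRule.Extract);

            yield return request;
        }
    }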

Validation Rules

Validation rules ensure that the application under test is functioning properly by verifying that certain conditions are met.

Adding a validation rule to a request is a matter of selecting the “Add Validation Rule…” option from a request’s context menu.  There are nine built-in validation rule types to choose from:

  • Selected Option
  • Tag Inner Text
  • Response Time Goal
  • Form Field
  • Find Text
  • Maximum Request Time
  • Required Attribute Value
  • Required Tag
  • Response URL

Like extraction rules, validation rules can have a negative impact on performance.  Judicious use of validation rules can alleviate this problem.

Note: Performance degradation due to validation rules typically isn’t an issue when running a stand-alone test but can have a much larger impact on load tests.  To address this problem, lower the validation level on the affected load test and assign the lower rule levels to the most important validation rules so that only those rules run.
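
Coded tests attach validation rules in much the same way as extraction rules; this sketch (hypothetical URL and search text) wires up the built-in Find Text rule:

    using System;
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

    public class ValidationRuleSketch : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            WebTestRequest request = new WebTestRequest("http://localhost/account/login");

            // Fail the request unless the response body contains "Welcome".
            ValidationRuleFindText findText = new ValidationRuleFindText();
            findText.FindText = "Welcome";
            findText.IgnoreCase = true;
            findText.UseRegularExpression = false;
            findText.PassIfTextFound = true;
            request.ValidateResponse += new EventHandler<ValidationEventArgs>(findText.Validate);

            yield return request;
        }
    }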

Binding Tests to a Data Source

There are times when static values or context parameters are sufficient, but many tests can benefit from some level of data binding to get a bit more variation in the data.  To add a data source to a Web performance test, either click the “Add Data Source” toolbar button or select the “Add Data Source…” option from the root node’s context menu, then follow the prompts in the New Test Data Source Wizard.  Once the data source is defined it can be referenced by requests.

Data can be retrieved from a data source using one of three methods:

  • Sequential – Iterates over the data set, looping back to the beginning if the test includes more iterations than there are records
  • Random – Randomly selects a record from the data source for each iteration
  • Unique – Just like sequential but without looping

The steps for binding a data source depend upon the type of test object.  In many cases, selecting the item, opening the property sheet, clicking the drop-down arrow, and navigating to the appropriate source and data table is all it takes.  Should you need to bind to a data source manually, enter the tokenized form {{dsn.table#column}}.
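
In coded form, data binding is declared with attributes.  The sketch below follows the general pattern Visual Studio generates for a CSV source; the file, table, and column names are hypothetical and the attribute arguments may vary slightly by version:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    [DataSource("UserData", "Microsoft.VisualStudio.TestTools.DataSource.CSV",
        "|DataDirectory|\\users.csv",
        DataBindingAccessMethod.Sequential,
        DataBindingSelectColumns.SelectOnlyBoundColumns,
        "users#csv")]
    [DataBinding("UserData", "users#csv", "username", "UserData.users#csv.username")]
    public class DataBindingSketch : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Each iteration, the bound column's current value is in the context.
            WebTestRequest request = new WebTestRequest(
                "http://localhost/profile?user=" + this.Context["UserData.users#csv.username"]);
            yield return request;
        }
    }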

Transactions

In the context of a Web performance test a transaction is a group of related requests.  Each of the requests inside the transaction can be tracked as a whole unit rather than individually.

To create a transaction:

  1. Select the “Insert Transaction…” option from a request’s context menu
  2. Enter a name for the transaction
  3. Select the first and last items
  4. Click OK

When the dialog closes the selected requests will be grouped under a new transaction node.
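
In a coded test the same grouping is expressed with BeginTransaction/EndTransaction calls around the related requests; the URLs and transaction name here are hypothetical:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class TransactionSketch : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Time the two related requests as a single "Login" unit.
            this.BeginTransaction("Login");
            yield return new WebTestRequest("http://localhost/account/login");
            yield return new WebTestRequest("http://localhost/home");
            this.EndTransaction("Login");
        }
    }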

Calling Other Web Performance Tests

Parameterization, comments, extraction rules, validation rules, and data binding all directly contribute to making individual Web performance tests valuable, but one of the most powerful features is the ability for tests to call other tests.  For instance, if we have a variety of tests that each require first logging into an application, we can isolate the login steps into a separate .webtest and have each of those tests call it before doing anything else.

Although it’s perfectly acceptable to record the common steps separately it is generally easier to start with a full test that already includes the common steps and extract them using the Web test editor.  To extract a test:

  1. Select the “Extract Web Test…” option from the root node’s context menu
  2. Enter a test name
  3. Select the first and last items
    Comments can be very useful for identifying where one set of steps ends and the next begins
  4. Check the “Extracted test inherits test level properties” box
  5. Uncheck the “Copy test level plug-ins and validation rules” box
  6. Click OK

When the dialog closes the items identified in step 3 above will be removed and a call to the newly created test will have taken their place.

When the extracted test is needed by another test it can be inserted into the test by selecting the “Insert Call to Web Test…” option from any node’s context menu.

Load Tests

Load tests verify that an application will perform under stress by repeatedly executing a series of tests and aggregating the results for reporting and analysis.  Load tests are often constructed from Web performance test scripts but most other test types are also allowed.

Note: Visual Studio 2010 includes support for distributed load tests.  Distributed load tests are not discussed here.

Scenarios

Load tests consist of one or more scenarios.  Scenarios define the conditions for the test including how to use think times, load pattern, test mix model, network mix, and browser mix.

Think Time

One of the first options to set for a load test is how to use think times, if at all.  When using think times we can choose to use the recorded times or a normal distribution based upon them.  Generally speaking, it is best to use the normal distribution to get a more accurate reflection of user load.

Load tests also have an option for think time between iterations.  Use this option to introduce a delay between each iteration.

Load Pattern

The load pattern defines user load through the course of a test.  The two main load pattern types are constant load and step load.

Note: There is a third pattern, goal based, that adjusts load dynamically until certain performance criteria are met.

As its name implies, a constant load pattern keeps a constant load of a specified number of users throughout the duration of the test.  When one iteration completes, another immediately starts to keep the load at the same level.

A step load pattern gradually adds user load over the course of a test.  Load tests using the step load pattern start with a specified number of users and gradually add more users, up to a defined maximum.

Test Mix and Test Mix Model

The test mix defines which tests are included in the load test.  Before we can define the test mix we must select a test mix model.  Test mix models control how frequently each test in the mix will be executed.  There are four options for modeling the test mix.

Based on the total number of tests: Each test in the mix is assigned a percentage indicating what portion of the total test executions it should account for.  When a virtual user starts an iteration, the next test is selected so that the overall distribution matches the defined percentages.

Based on the number of virtual users: Each test in the mix is assigned a percentage that indicates how many of the total users should invoke it.

Based on user pace: Each test in the mix is run a specified number of times per user per hour.

Based on sequential test order: Each test in the mix will be executed once per user in the order the tests appear in the mix.

Network Mix

Load tests can also simulate different network speeds according to the network mix.  Available options include LAN, WAN, dial-up, cable, and some mobile phone networks.  Each new virtual user will semi-randomly select a network speed according to the network mix distribution.

Note: The network emulation driver must be installed for options other than LAN.  If the driver is not installed when the first non-LAN option is selected Visual Studio will prompt you to install it.

Browser Mix

When a load test is built from one or more Web performance tests we can also specify a browser mix.  Most popular browsers are available for selection.

Note: If another browser is required a new .browser file can be added to the Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Templates\LoadTest\Browsers folder. (http://social.msdn.microsoft.com/Forums/en/vstswebtest/thread/3f1b16d7-2343-4409-9eff-6f8c764dd93b)

Performance Counter Sets

While scenarios define the run conditions for a given load test, performance counter sets are where load tests derive their real value.  Load tests can be configured to monitor performance counters on both local and remote machines through the selection of counter sets.  The data from these counter sets is collected and maintained by Visual Studio.  When the test run completes the collected data will be tabulated and displayed for analysis.

Run Settings

Once a scenario is configured and the performance counter sets are defined we can specify our run settings.  Run settings include timing, number of iterations, sampling rate, and validation level.

For test timing we have two values to configure: warm-up duration and run duration.  The warm-up duration is an initial period during which no data is collected.  Once the warm-up timer expires, data collection begins and continues until the run duration is reached.  By default there is no warm-up period and the run duration is ten minutes.

As an alternative to controlling the test by timing we can specify the number of test iterations.  By default the test will execute 100 times.

The sampling rate controls how often performance counters are polled.  It defaults to five seconds.

As previously mentioned, validation rules have a level property that can be used to ignore certain rules depending on the test’s level setting.  Just like the validation rules, the validation level for the test can be set to high, medium, or low.  The default validation level is high, meaning that all rules will be invoked.  To filter out validation rules, select a lower level.

Analyzing Load Test Results

Running a load test is just like running a Web performance test but the test results are much more detailed.  During the course of a test run Visual Studio is collecting performance data according to the defined performance counter sets.  The Load Test Monitor is used to view and analyze the collected performance data.

By default the Load Test Monitor displays the results in graph form.  The graph view provides a way to see performance data over time, which is useful for quickly identifying possible issues.  To really get into the details, though, we need to switch over to the tabular view by clicking the “Tables” button.

The tabular view provides a select box listing several tables:

Tests displays summary information for each test in the load test.

Errors displays details about any errors encountered during the test run.

Pages lists all pages that were accessed during a test run along with summary data such as average response time.

Requests includes details for all HTTP requests issued during a test run.

Thresholds displays information about each threshold violation encountered.

Transactions provides information about Web performance transactions or unit test timers.  This does not refer to database transactions.

Note: Depending on load test configuration three additional tables may be available: SQL Trace, Agents, and Test Details.

Summary

We’ve only looked at two of the available test types but it should be apparent how powerful Visual Studio’s testing facilities can be.  Individually, each of the test types provides a great deal of insight into the health of an application.  Using multiple test types in conjunction with each other can reduce the amount of effort required to isolate trouble spots.

Web performance tests make it possible to ensure that a Web application is functioning correctly.  They can validate that the correct information is appearing on pages and that pages are being processed within an acceptable time frame.

Load tests build upon other tests to determine how an application or part of an application handles being stressed.  They let us see how users will experience an application under a variety of network or browser conditions.

The information provided by these tests is invaluable.  Together, these tests can help us identify and address potential trouble spots before our customers do.  Should customers report an issue we can use these tests to more accurately simulate their environment and confirm their findings in a controlled manner.

Building these tests can certainly take some time, but given their reusable nature it shouldn’t take too long to build a library of tests that can be applied or adapted to multiple situations.  In any case, automated testing beats synchronizing watches and hovering over a button waiting for that special moment to click…