Visual Studio 2010

VS2010: Web Standards Update and the CSS Editor

I’ve been doing a bit of Web UI work recently and everything was going fairly smoothly until yesterday when I tried opening one of my CSS files in Visual Studio and was promptly greeted with a dialog reading “The operation could not be completed. Unspecified error” and the editor never opened.  A little hunting for the error message and “CSS editor” revealed that the Web Standards Update for Microsoft Visual Studio 2010 SP1 was the likely culprit and that uninstalling it should resolve the issue.

I closed Visual Studio, uninstalled the update, and sure enough, when I reopened my project the CSS file opened just fine. I haven’t reinstalled the update yet. Maybe I’ll try again soon but it seems from the comments on the component’s page that this is a common issue.


Operation could destabilize the runtime

Today I was trying to run the code for a project I’ve just been assigned to.  I’d brought down the code from SVN, built the common libraries, and punched F5.  Build succeeded.  Before long the browser loaded and the beautiful new UI stared back and virtually begged me to start clicking around.  Before I could do anything though Visual Studio rudely interrupted with an unhandled exception dialog.  This one looked nasty, particularly since I’d never seen it before: System.Security.VerificationException – Operation could destabilize the runtime…

I found a Stack Overflow question about this that pointed to Json.NET as a possible culprit.  Sure enough, the source of the exception was Newtonsoft.Json.  It seems that Visual Studio Ultimate’s IntelliTrace didn’t like something Json.NET was doing and would throw that exception.  The issue is said to be resolved as of release 6 but I haven’t upgraded the assembly yet.

For the time being I’ve added a rule to exclude *Newtonsoft.* from the IntelliTrace modules list as recommended in the Stack Overflow answer.  Since excluding the assembly I haven’t seen the problem again.

VS2010: Box Selections

When I first saw the box selection capabilities in Visual Studio 2010 I thought “that’s kind of neat but I’ll probably never use it” and promptly moved on.  I couldn’t have been more mistaken.  In fact, nearly two years later, box selection has become one of those features that I use almost daily.  What surprises me now though is how many developers I run into that still don’t know about them.

Box selections let us quickly make the same change to multiple lines simultaneously.  Creating them is easy – just hold Shift+Alt and use the arrow keys, or hold the Alt key while dragging with the left mouse button to define a rectangle.  If you just want to insert the same text onto multiple lines you can define a zero-length box by expanding the box vertically in either direction.


So what makes box selections so useful?  Some of the things I find them most useful for are changing modifiers and making local variables implicitly typed.  To illustrate, let’s take a look at a few non-virtual properties that we’d like to make virtual.
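For illustration, assume a simple class like the following (the class and property names are hypothetical) whose properties are not yet virtual:

    public class Customer
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }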


Making these properties virtual without a box selection certainly isn’t difficult but it’s definitely tedious.  A box selection lets us make them all virtual at the same time so we can get on with the task at hand.  Placing a zero-length box immediately after the public modifier on each property marks the point where we’ll insert the virtual modifier.

To insert the virtual modifier we just need to type (or paste) “virtual” once.  Each property is now virtual and the zero-length box has moved to the end of the inserted text.
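Continuing the hypothetical Customer example from above, the class now reads:

    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string FirstName { get; set; }
        public virtual string LastName { get; set; }
        public virtual string Email { get; set; }
    }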

What if we decide later though that these properties shouldn’t be virtual after all?  We can use box selections to remove the virtual modifier from each property just as easily: drag a box selection that highlights the virtual modifier on each line and delete it.  This leaves a zero-length box where the virtual modifiers used to be.  We can then simply click or arrow away to clear the box selection.

Box selections can go a long way toward increasing your productivity by reducing some of the more tedious aspects of programming.  The few seconds they save here and there can really add up over the course of a day.  More importantly though, that time can be spent on the real problems we’re trying to solve.

Further Reading

How to: Select and Change Text

Debugging XSLT in VS2010

My team has been working on some new functionality that’s really configuration heavy.  One of the challenges we’ve been facing with this project is managing all of the changes to the central configuration file.  In an attempt to add some order to the chaos we decided to start with a base file and apply a series of XSLT sheets to it so we can ensure a consistent state.  I haven’t worked with XSLT in years so aside from relearning the language, one of the first things I did was start looking for a tool that could help me build and test my new sheet.

(more…)

October Speaking Engagement – Indy TFS

I’ll be presenting Web performance and load testing in Visual Studio 2010 to the Indy TFS user group on October 5, 2011.  In this talk we’ll explore some of the basic test management capabilities in Visual Studio 2010 before diving into building and executing both Web performance and load tests.  Some areas we’ll examine include:

  • Recording tests
  • Parameterizing tests
  • Extraction rules
  • Validation rules
  • Data binding
  • Load test scenarios

Location

Microsoft Office
500 East 96th St
Suite 460
Indianapolis, IN 46240
[Map]

Doors open at 5:30 PM with the meeting starting at 6:00.  Pizza and drinks will be provided.

Register at https://www.clicktoattend.com/invitation.aspx?code=157231.

I hope to see you there!

VS2010: Find in Files

I really debated with myself about writing this post but after an exchange with a friend I decided to go ahead with it.  My team recently upgraded to new laptops with a fresh new image.  When one of my friends remarked that he was glad that the new image contained wingrep I asked why he didn’t just use Visual Studio’s Find in Files feature.  His reason?  He didn’t know about some of its features.  After talking about it a bit he said he’d give it another shot.  Now, several days later he’s using it almost exclusively.

(more…)

VS2010 SP1 Now Available

Visual Studio Magazine is reporting that Microsoft has released Visual Studio 2010 SP1 and it is now available for download.  Among other things the service pack addresses issues with general stability, the editor, the shell, and IntelliTrace.  It also improves support for 64-bit environments, Silverlight 4, IIS Express, and a variety of other areas.  A full description of the service pack is available from Microsoft Support.

Which Files Changed?

One of the complaints I’ve heard about TFS 2010 version control is that getting latest doesn’t show a listing of what was changed. This is untrue. TFS does indeed show such a listing; it’s just not as obvious as in other version control systems.

After getting latest, jump over to the output window and select the “Source Control – Team Foundation” option from the “Show output from:” menu. The output window will then display the action taken and file name for each affected file.

It would be nice if TFS would present this information in a sortable/filterable grid format instead of relying on the output window but having the simple listing is typically more than enough. The output window itself is searchable so if you’re looking for changes to a specific file or folder just click in the window and press Ctrl + F to open the find dialog.

Now the next time someone claims that “TFS doesn’t show me which files changed!” you can tell them to check the output window.

Web Performance & Load Testing in Visual Studio 2010

Visual Studio 2010 Premium and Ultimate include tools for authoring and executing a variety of test types.  Out of the box we get support for several types of tests:

Manual tests require a person to manually perform steps according to a script.  Note that manual tests are actually TFS Work Items rather than code files within a test project.

Unit tests are low-level coded tests meant to exercise small areas of code (units) to ensure that code is functioning as the developer expects.  A minimal example appears after this list of test types.

Database unit tests are intended to exercise database objects such as triggers, functions, and stored procedures.

Coded UI tests programmatically interact with the user interface components of an application to ensure proper functionality.

Web performance tests are intended to verify the functionality or performance of Web applications.

Load tests verify that an application will still perform adequately while under stress.

Generic tests allow calling tests written for another testing solution such as NUnit.

Ordered tests are a special kind of test that contain other tests that must be executed in a specific sequence.
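To make the unit test description above concrete, here is a minimal MSTest example; the test class name and the behavior it checks (String.Trim from the base class library) are purely illustrative:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class StringTrimTests
    {
        [TestMethod]
        public void Trim_RemovesLeadingAndTrailingWhitespace()
        {
            // Arrange: input with surrounding whitespace
            string input = "  hello  ";

            // Act
            string result = input.Trim();

            // Assert: pass/fail is reported in the test results window
            Assert.AreEqual("hello", result);
        }
    }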

Web performance and load tests are two of the more interesting types and are the focus of this post.  Before diving into them though we should examine how Visual Studio manages tests.

Managing Tests

This section provides a high-level overview of Visual Studio’s test management capabilities.  Test projects, the test view window, test categories, running tests, and working with test results will all be introduced.

Test Projects

One thing that all test types have in common is that they must be part of a test project.  Test projects are language-specific (C#, VB) and are created by selecting the corresponding “Test Project” template from the Add New Project dialog box.

There are no restrictions on the mix of tests contained within a test project.  It is perfectly acceptable to have unit tests, Web performance tests, and generic tests within a single test project.  It is also acceptable for a solution to have multiple test projects.

Note: Adding a test project to a solution will also add a few items to the solution’s Solution Items folder.  Two of the items are .testsettings files used to control the behavior of tests under different execution scenarios.  The settings used during a test run are determined by the item selected in the Test/Select Active Test Settings menu.  The third item is a .vsmdi file used to manage test lists.

The Test Project template will create an empty unit test by default.  This file can safely be deleted.  If you want to customize what is included in new test projects, change the settings in the Test Tools/Test Project section of the Visual Studio options.

Adding new tests to a test project is just like adding new classes to any of the standard project types.  Items can be added from either the Project menu or the test project’s context menu.  Both methods allow adding each of the test types described above (except manual) to the test project.

Test View Window

The test view window is the primary interface for managing tests within a solution.  From the test view window we can see a listing of all tests in a solution, filter the list to see only tests within a specific project or category, edit test properties, and select tests to run.

The test view window can be opened by selecting Test View from the Test/Windows menu.

Test Categories

As the number of tests in a solution grows it may be necessary to organize them.  Test categories are a convenient way to add some order to the chaos.  To categorize a test, right click on it and select properties.  Locate the “Test Categories” item and click the ellipsis button to open the Test Category editor.  Once inside the Test Category editor, categories can be selected or new categories can be created and selected.

In addition to helping organize tests, categories can also be used to filter which tests are included in a given run.  The test view window includes a select box named “Filter Column” that includes an option for “Test Categories.”  Selecting this option and entering a category name in the Filter Text text box is an easy way to display only the tests assigned to a particular category.

Note: The test view window can also be used to manage test lists.  In Visual Studio 2010 test lists have been deprecated and are only used by TFS check-in policies.

Running Tests

The test view window makes selecting tests to run very easy.  To select a single test just click it.  Selecting multiple tests is done using the standard Ctrl + click and Shift + click.  By default every test in the solution is included in the test view window.  The list can be grouped and filtered on a variety of criteria including project and category.

Grouping is a matter of selecting the desired field from the Group By select box.  Filtering is only slightly more involved.  To apply a filter optionally select a field from the Filter Column select box and enter criteria in the Filter Text text box.

Once the desired tests are displayed and selected click the Run Selection button from the test view window’s toolbar.  When a test run starts the test results window will be displayed.

Note: By no means is the test view window the only way to start a test run.  Most test types provide their own run button in their editor.  Unit tests can even be run individually by test method, test class, or namespace.

Test Results Window

Test run status is shown in the test results window.  This window is displayed as soon as a test run is started and is updated as the run progresses.

The test results window displays both summary and test specific information.  The summary line shows overall status of the run along with the number of tests that passed.  Each of the tests included in the test run appears as a row in the results window.  Individual test status is identified graphically as pending, in progress, failed, inconclusive, or aborted in the result column.  Double clicking any test will display the details of the test result.

Web Performance Tests

Web performance tests use manually written or recorded scripts to verify that specific functionality of a Web application is working and performing correctly.  For instance, we can record a test script to ensure that a user can log in to the application.  Web performance tests are also composable so over time we can build a library of reusable scripts to combine in different ways to test different, but related activities.

In this section we’ll walk through the process of creating Web performance tests.  We’ll also examine parameterization, changing request properties, comments, extraction rules, validation rules, data binding, and how to call other Web performance tests.

Creating Web Performance Tests

There are several ways to create Web performance tests:

  1. Recording browser interactions
  2. Manually building tests using the Web Test Editor
  3. Writing code to create a coded Web performance test

By far the easiest method is recording browser interactions.  Recorded scripts can be customized easily.  They can also be used to generate coded Web performance tests.  It is generally recommended to start with a recorded test, so we will not be discussing coded Web performance tests here.

There are two main ways to record Web performance tests.  Visual Studio includes a built-in Web test recorder but we can also use Fiddler to record sessions and export them to a Visual Studio Web test.

Recording in Visual Studio is generally easier as it automatically performs many tasks such as extracting dynamic parameters, setting up validation rules, and grouping dependency requests, whereas Fiddler offers more control over what is recorded.  Regardless of which method is used, there will likely be some clean-up work when recording is complete.

Note: Recorded or manually built (non-coded) Web performance tests are XML files stored with an extension of .webtest.

Recording in Visual Studio

Invoking the Web test recorder in Visual Studio is a matter of adding a new Web performance test to the test project.  Visual Studio will automatically start an instance of Internet Explorer with a docked instance of the Web Test Recorder.

  1. Right click the test project
  2. Select Add –> Web Performance Test.
  3. Manually perform the steps (navigation) to include in the script
  4. When finished, click the stop recording button
  5. Wait for Visual Studio to detect dynamic parameters that can be promoted to Web test parameters and review its findings, selecting each value that should be promoted.

Exporting Sessions from Fiddler

A full discussion of Fiddler is beyond the scope of this document but it is important to look at its filtering capability to see how we can restrict what it captures down to what is relevant to our test.

The first option we’ll typically set is to Show only Internet Explorer traffic.  Selecting this option will prevent traffic from other browsers or software making http requests from showing in the sessions list.

Another option that is often useful is to change the value of the select box under Response Type and Size to “Show only HTML.”  This option will prevent the session list from showing any dependency requests for additional resources like images or script files.

Note: Fiddler doesn’t record requests targeting localhost.  There are workarounds but it is generally easier to target the local machine by its machine name and parameterize the server name later.

  1. Verify that any desired filters are set
  2. Manually perform the steps (navigation) to include in the script
  3. Select the appropriate sessions
  4. File –> Export Sessions –> Selected Sessions…
  5. Select “Visual Studio Web Test”
  6. Click “Next”
  7. Browse to the target location, typically the test project folder
  8. Name the file
  9. Deselect all Plugins
  10. Include the newly created .webtest file in the test project

Parameterization

Most of the time a recorded script is only the starting point for constructing a Web performance test.  In their initial form recorded tests aren’t very flexible.  Luckily we can use parameters to change that.

Whenever a parameter value is used in a request the name appears in a tokenized format.  For example, if we have a context parameter named username and we wanted to reference it in a request we would use {{username}}.

Context Parameters

Context parameters are essentially variables within a Web performance test.  Each step within a test can read from and write to context parameters making it possible to pass dynamic information such as new IDs from step to step.
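As a rough sketch of how context parameters can be read and written in code, the following Web test plug-in (built against the Microsoft.VisualStudio.TestTools.WebTesting API; the plug-in and parameter names are hypothetical) seeds a username context parameter before the test runs so that later requests can reference it as {{username}}:

    using System;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Hypothetical plug-in; attach it to a .webtest through the Web Test Editor.
    public class SeedUserNamePlugin : WebTestPlugin
    {
        public override void PreWebTest(object sender, PreWebTestEventArgs e)
        {
            // Any later step in the test can read or overwrite this value.
            e.WebTest.Context["username"] = "testuser" + DateTime.Now.Ticks;
        }
    }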

Parameterizing Web Servers

When a test is recorded the original server name is embedded in each request.  Although it is common to run tests against localhost it is often necessary to run the tests against a different server.  With the server name embedded in each request we cannot satisfy this need.  Fortunately, Visual Studio provides the “Parameterize Web Servers” operation that will automatically extract the protocol and server name from each request and replace it with a context parameter.

To parameterize the Web server for each request:

  1. Click the “Parameterize Web Servers” toolbar button
  2. Verify the server name(s)
  3. Click OK

When the dialog closes each request will be updated to reflect the context parameter(s).  You should also notice that any new parameters have been added to the test’s Context Parameters folder.

Changing the server is simply a matter of changing the value of the corresponding context parameter using the properties window.  Alternatively, clicking the “Parameterize Web Servers” button again will open the “Parameterize Web Servers” dialog.  From this dialog click the Change button and enter the appropriate URL.

It is possible to parameterize additional parts of the request URL but there is no direct support in Visual Studio.  Should the need arise it is generally easier to manually edit the XML adding the context parameter to the context parameters element and replacing the portion of the URL being parameterized with the context parameter token.

Setting Request Properties

One of the benefits of recording Web performance tests is that the value of most of the properties for each request are generated by the recording process.  For the most part modifying these values after the test is recorded isn’t necessary but occasionally we need to tweak something.

Some common scenarios for when property values might need to be adjusted are:

  • Needing to check response time
  • Needing to adjust think time to more closely reflect reality
  • Correcting the Expected Response URL when a redirect is encountered

Note: Think times are intended to simulate the amount of time a real user would spend “thinking” before proceeding to the next step.  Think times are disabled by default but are more useful in load tests.

To access a request’s properties right click the request and select the “Properties” option.

Comments

Comments allow documenting portions of a Web performance test.  They can serve as helpful reminders of what a particular portion of a script is doing.

Extraction Rules

As their name implies, extraction rules are used to extract values from a response.  Context parameters are used to store the extracted value for later reference.

Adding an extraction rule to a request is a matter of selecting the “Add Extraction Rule…” option from a request’s context menu.  There are eight built-in extraction rule types to choose from:

  • Selected Option
  • Tag Inner Text
  • Extract Attribute Value
  • Extract Form Field
  • Extract HTTP Header
  • Extract Regular Expression
  • Extract Text
  • Extract Hidden Field

You should be aware that extraction rules can have a negative impact on test performance.  To mitigate this problem extraction rules should be used judiciously.

Validation Rules

Validation rules ensure that the application under test is functioning properly by verifying that certain conditions are met.

Adding a validation rule to a request is a matter of selecting the “Add Validation Rule…” option from a request’s context menu.  There are nine built-in validation rule types to choose from:

  • Selected Option
  • Tag Inner Text
  • Response Time Goal
  • Form Field
  • Find Text
  • Maximum Request Time
  • Required Attribute Value
  • Required Tag
  • Response URL

Like extraction rules, validation rules can have a negative impact on performance.  Judicious use of validation rules can alleviate this problem.

Note: Performance degradation due to validation rules typically isn’t an issue when running a stand-alone test but can have a much larger impact on load tests.  To address this problem, lower the validation level on the affected load test and reserve the lower rule levels for the most important validation rules so that they are the ones that still run.
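To make the mechanism a little more concrete, a validation rule is essentially a class that inspects each response and sets a pass/fail flag.  The sketch below assumes the Microsoft.VisualStudio.TestTools.WebTesting API and uses a deliberately simple, hypothetical condition: it fails a request whenever the response body contains the word “error”:

    using System;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Hypothetical custom rule; the built-in rules follow the same pattern.
    public class NoErrorTextValidationRule : ValidationRule
    {
        public override void Validate(object sender, ValidationEventArgs e)
        {
            bool containsError = e.Response.BodyString.IndexOf(
                "error", StringComparison.OrdinalIgnoreCase) >= 0;

            e.IsValid = !containsError;
            e.Message = containsError
                ? "Response body contained the text 'error'."
                : "No error text found in the response body.";
        }
    }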

Binding Tests to a Data Source

There are times when static values or context parameters are sufficient but many tests can benefit from using some level of data binding to get a bit more variation in the data.  To add a data source to a Web performance test either click the “Add Data Source” toolbar button or select the “Add Data Source…” option from the root node’s context menu and follow the prompts in the New Test Data Source Wizard.  Once the data source is defined it can be referenced by requests.

Data can be retrieved from a data source using one of three methods:

  • Sequential – Iterates over the data set and loops back to the beginning if the test includes more iterations than there are records
  • Random – Randomly selects a record from the data source for each iteration, repeating records as necessary
  • Unique – Just like sequential but without looping

The steps for binding a data source are dependent upon the type of test object.  In many cases once the item is selected, opening the property sheet, clicking the drop down arrow, then navigating to the appropriate source and data table is all it takes.  Should you need to bind to a data source manually, enter the tokenized form {{dsn.table#column}}.

Transactions

In the context of a Web performance test a transaction is a group of related requests.  Each of the requests inside the transaction can be tracked as a whole unit rather than individually.

To create a transaction:

  1. Select the “Insert Transaction…” option from a request’s context menu
  2. Enter a transaction name
  3. Select the first and last items
  4. Click OK

When the dialog closes the selected requests will be grouped under a new transaction node.

Calling Other Web Performance Tests

Parameterization, comments, extraction rules, validation rules, and data binding all directly contribute to making individual Web performance tests valuable but one of the most powerful features is making tests call other tests.  For instance, if we have a variety of tests that each require first logging into an application we can isolate the login steps into a separate .webtest and have each of those tests call it before doing anything else.

Although it’s perfectly acceptable to record the common steps separately it is generally easier to start with a full test that already includes the common steps and extract them using the Web test editor.  To extract a test:

  1. Select the “Extract Web Test…” option from the root node’s context menu
  2. Enter a test name
  3. Select the first and last items
    Comments can be very useful for identifying where one set of steps ends and the next begins
  4. Check the “Extracted test inherits test level properties” box
  5. Uncheck the “Copy test level plug-ins and validation rules” box
  6. Click OK

When the dialog closes the items identified in step 3 above will be removed and a call to the newly created test will have taken their place.

When the extracted test is needed by another test it can be inserted into the test by selecting the “Insert Call to Web Test…” option from any node’s context menu.

Load Tests

Load tests verify that an application will perform under stress by repeatedly executing a series of tests and aggregating the results for reporting and analysis.  Load tests are often constructed from Web performance test scripts but most other test types are also allowed.

Note: Visual Studio 2010 includes support for distributed load tests.  Distributed load tests are not discussed here.

Scenarios

Load tests consist of one or more scenarios.  Scenarios define the conditions for the test including how to use think times, load pattern, test mix model, network mix, and browser mix.

Think Time

One of the first options to set for a load test is how to use think times if at all.  When using think times we can select to use the recorded times or to use a normal distribution based upon the recorded times.  Generally speaking, it is best to use the normal distribution to get a more accurate reflection of user load.

Load tests also have an option for think time between iterations.  Use this option to introduce a delay between each iteration.

Load Pattern

The load pattern defines user load through the course of a test.  The two main load pattern types are constant load and step load.

Note: There is a third pattern, goal based, that adjusts load dynamically until certain performance criteria are met.

As its name implies, a constant load pattern keeps a constant load of a specified number of users throughout the duration of the test.  When one iteration completes another will immediately start to keep the load at the same level.

Step load gradually adds user load over the course of a test.  Load tests using the step load pattern start with a specified number of users and gradually add more users up to a defined maximum.

Test Mix and Test Mix Model

The test mix defines which tests are included in the load test.  Before we can define the test mix we must select a test mix model.  Test mix models control how frequently each test in the mix will be executed.  There are four options for modeling the test mix.

Based on the total number of tests: Each test in the mix is assigned a percentage that indicates how many times it should be run.  Each virtual user executes each test according to the defined percentages.

Based on the number of virtual users: Each test in the mix is assigned a percentage that indicates how many of the total users should invoke it.

Based on user pace: Each test in the mix is run a specified number of times per user per hour.

Based on sequential test order: Each test in the mix will be executed once per user in the order the tests appear in the mix.

Network Mix

Load tests can also simulate different network speeds according to the network mix.  Available options include LAN, WAN, dial-up, cable, and some mobile phone networks.  Each new virtual user will semi-randomly select a network speed according to the network mix distribution.

Note: The network emulation driver must be installed for options other than LAN.  If the driver is not installed when the first non-LAN option is selected Visual Studio will prompt you to install it.

Browser Mix

When a load test is built from one or more Web performance tests we can also specify a browser mix.  Most popular browsers are available for selection.

Note: If another browser is required a new .browser file can be added to the Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Templates\LoadTest\Browsers folder. (http://social.msdn.microsoft.com/Forums/en/vstswebtest/thread/3f1b16d7-2343-4409-9eff-6f8c764dd93b)

Performance Counter Sets

While scenarios define the run conditions for a given load test, performance counter sets are where load tests derive their real value.  Load tests can be configured to monitor performance counters on both local and remote machines through the selection of counter sets.  The data from these counter sets is collected and maintained by Visual Studio.  When the test run completes the collected data will be tabulated and displayed for analysis.

Run Settings

Once a scenario is configured and the performance counter sets are defined we can specify our run settings.  Run settings include timing, number of iterations, sampling rate, and validation level.

For test timing we have two values to configure: warm-up duration and run duration.  Warm-up duration is the amount of time during which nothing in the test is tracked.  Once the warm-up timer expires data collection begins and continues until the run duration time is reached.  By default there is no warm-up period and the run duration is ten minutes.

As an alternative to controlling the test by timing we can specify the number of test iterations.  By default the test will execute 100 times.

The sampling rate controls how often performance counters are polled.  The sampling rate defaults to five seconds.

As previously mentioned, validation rules have a level property that can be used to ignore certain rules depending on the test’s level setting.  Just like the validation rules the validation level for the test can be set to high, medium, or low.  The default validation level is high meaning that all rules will be invoked.  To filter out validation rules select a lower level.

Analyzing Load Test Results

Running a load test is just like running a Web performance test but the test results are much more detailed.  During the course of a test run Visual Studio is collecting performance data according to the defined performance counter sets.  The Load Test Monitor is used to view and analyze the collected performance data.

By default the Load Test Monitor displays the results in graph form.  The graph view provides a way to see performance data over time.  This is useful for quickly identifying possible issues.  To really get into the details though we need to switch over to the tabular view by clicking the “Tables” button.

The tabular view provides a select box listing several tables:

Tests displays summary information for each test in the load test.

Errors displays details about any errors encountered during the test run.

Pages lists all pages that were accessed during a test run along with summary data such as average response time.

Requests includes details for all HTTP requests issued during a test run.

Thresholds displays information about each threshold violation encountered.

Transactions provides information about Web performance transactions or unit test timers.  This does not refer to database transactions.

Note: Depending on load test configuration three additional tables may be available: SQL Trace, Agents, and Test Details.

Summary

We’ve only looked at two of the available test types but it should be apparent how powerful Visual Studio’s testing facilities can be.  Individually each of the test types provides a great deal of insight into the health of an application.  Using multiple test types in conjunction with each other can reduce the amount of effort required to isolate trouble spots.

Web performance tests make it possible to ensure that a Web application is functioning correctly.  They can validate that the correct information is appearing on pages and that pages are being processed within an acceptable time frame.

Load tests build upon other tests to determine how an application or part of an application handles being stressed.  They let us see how users will experience an application under a variety of network or browser conditions.

The information provided by these tests is invaluable.  Together, these tests can help us identify and address potential trouble spots before our customers do.  Should customers report an issue we can use these tests to more accurately simulate their environment and confirm their findings in a controlled manner.

It is certain that building these tests can take some time but given their reusable nature it shouldn’t take too long to start building a library of tests that can be applied or adapted to multiple situations.  In any case, automated testing beats synchronizing watches and hovering over a button waiting for that special moment to click…

TFS2010: Everyday TFS

I’ve written a bit already about working with Team Foundation Server (TFS) 2010 and have been thinking about a few other topics so I decided to wrap them all into one big post.  I’ve been working with TFS for around nine months now.  My first experiences with TFS were largely negative but as I’ve worked with it I’ve found that much of that negativity was due to a bias against the check-out model for version control and a lack of understanding of what else TFS could do for me and my team.  Over these nine months I’ve come to really appreciate its capabilities and have found ways to work around some of its shortcomings.

The ideas here come straight from my day-to-day experiences working with TFS.  Everything I’m including is something I’ve found that has made me more productive.  I’m by no means a TFS expert and reading this post certainly won’t make you one either but I strongly believe that it will help you work more effectively.

In this post we’ll look at three main areas:

  • Tools
  • Version Control
  • Project Management

I assume at least a basic working knowledge of version control and project management concepts.  As such, I don’t spend much, if any, time discussing basic concepts like performing check ins, defining changesets, or what MS Project does.

I hope you find the tips contained within to be useful and worthy of inclusion into your workflow.

Tools

One of the first things I noticed when I started with TFS is that its basic tool support is pretty weak.  Many of the tools that ship with TFS are rudimentary at best and other features such as shell integration are just missing.  In this section we’ll examine some workarounds and ways to be more productive.

TFS Power Tools

The TFS Power Tools are essential utilities to simplify many day-to-day tasks and circumvent other shortcomings.  I firmly believe that the first thing (ok, second to installing Visual Studio) any developer working with TFS version control on a regular basis needs to do is install the TFS Power Tools.

Installing the Power Tools

  1. Open the Extension Manager from the Tools menu in Visual Studio 2010.
  2. Click the Online Gallery Tab
  3. Locate the Team Foundation Server Power Tools extension
    Depending upon the extension’s popularity it may or may not be featured on the front page of the online gallery, so you may need to search for it
  4. Select the power tools and click the download button.
  5. When prompted Run or Save the installer package
    If there are any instances of Visual Studio running the installer will stop until they are closed.  The installer detects previous versions of Visual Studio too so you’ll need to close those as well
  6. Select either a typical or custom installation
    The typical option should be sufficient for most users but anyone wanting to use the PowerShell Cmdlets will want to select custom install.  The tools I use most frequently are the command line interface, the Visual Studio integration, and the Windows Shell Extension
  7. Click through the remaining pages in the wizard to complete the installation
    A reboot may be necessary to activate the shell extensions

Shell Integration

One of the biggest benefits of using TFS is that it integrates version control and project management capabilities into the Visual Studio IDE.  For developers already spending the majority of their day in Visual Studio this is usually a good thing since Visual Studio manages a lot of the action behind the scenes.  Unfortunately this benefit can also be one of its biggest drawbacks.

TFS uses a check out model for managing versioned files.  Visual Studio performs a silent, implicit check out on any files being edited within the IDE but what happens when we need to edit versioned files outside of Visual Studio?

Aside from a command line utility TFS doesn’t offer any way to manage versioned files outside of Visual Studio!  Unless we’re intimately familiar with the command line utility we’re forced to jump back to VS, go to Source Control Explorer, locate the file(s), and check them out before returning to the other program.  This is the first place that the power tools come to the rescue.

Although the shell integration isn’t as full-featured as what we get with something like Tortoise SVN I still find it sufficient for most tasks.  Depending on what I’m doing I’ll often use it before going to Source Control Explorer for simple tasks.

The shell integration is intended for routine operations like getting the latest version, checking in and out, adding, and comparing files.  There are no options for branching, merging, alerts, or annotating.  A huge miss in the power tools is that there’s no option for history either.

Power Tools Command Line Utility

Command line junkies will appreciate the fact that the power tools also come with a command line utility.  Much of the functionality provided by the power tools is only available through tfpt.exe.  We’ll discuss several of the commands available to us through the command line utility in detail later.  Full documentation of the commands available through tfpt.exe is available through the /? switch.

TFS Command Line Utility

TFS itself ships with a command line utility named tf.exe.  I really don’t find myself using tf.exe all that frequently mostly because of the shell integration supplied with the power tools.  The only command I find myself using with any regularity is tf get.

Basic documentation about tf.exe can be found by running it with the /? switch.  Full documentation is available on MSDN and is easily accessed by running tf msdn.

Changing the Compare and Merge Tools

Any version control system needs tools for comparing files and resolving conflicts and TFS is no exception.  Like many of the tools that ship with TFS the compare/merge tools are pretty simplistic and lack many of the features found in other tools in the same niche.  If you’re used to products such as WinMerge this can be a nuisance but TFS allows us to not only configure which tool we’d like to use for these tasks but to define them by action and file extension.

Note: What follows below are the basic steps for changing the compare and merge tools.  Check your desired tool’s documentation for specific configuration information.

Custom tool configuration is rather straightforward:

  1. Open the options dialog in Visual Studio (Tools –> Options)
  2. Expand the Source Control node
  3. Select the Visual Studio Team Foundation Server node
  4. Click the “Configure User Tools…” button

The Configure User Tools dialog is pretty simple.  In addition to allowing adding new tool configurations the dialog lists any configured tools and allows their modification or removal.  The Configure Tool dialog accessed through the Add or Modify buttons is where the real configuration happens.

Extension: The file extension associated with the tool
To indicate that the tool applies to all extensions use .*

Operation: Merge or Compare

Command: The full path of the tool’s executable

Arguments: Any applicable command line arguments
Use the arrow button to select tokens for files or labels.

Version Control

As developers most of our interactions with TFS will be with its version control system.  Given that TFS is intended to be managed within Visual Studio the integration with Visual Studio is pretty solid.  I find that most of the version control related headaches are actually due to the check-out model employed by TFS rather than TFS itself.  That being said, version control in TFS isn’t particularly great and if you’re only using TFS for version control you may consider alternative versioning systems because better dedicated solutions are available.

In this section we’ll examine some of the ways to keep the check-out model from getting in the way and decreasing productivity.

Keeping Your Workspace Tidy

When working on projects it’s really easy to neglect basic housekeeping.  Over time workspaces can get really messy and some spring cleaning will be necessary to restore some order.  At first glance cleaning a workspace can seem daunting but TFS Power Tools can make it trivial.

Unchanged Files

One big drawback of the check-out model that I’ve previously discussed is that every change, even a temporary one, requires files to be checked out.  How many times do we edit a file to toss in some form of debugging helper just to remove it a few minutes later?  Under the check-out model all of these files are checked out but unchanged.  Visual Studio itself can compound the problem by checking out some files related to projects and solutions but never changing them.

When it’s time to check in the changes all of these unchanged files will be included in the pending changes list.  If any unchanged files are included in the check in TFS will detect that they are unchanged and won’t include them in the changeset.  This behavior is OK but if you’re like me you’ll want to review your changes before actually checking them in just to be sure you’re not about to do something stupid like checking in a forgotten debugger; statement in a JavaScript file (not that I’ve ever done that).  One or two unchanged files aren’t too bad to deal with but sifting through more than that is just a waste of time.  Wouldn’t it be nice to undo the check out on the unchanged files before reviewing them for check in?

Without the power tools the only way I know to undo the check out on the unchanged files is to manually compare the files one at a time and undo if there are no changes.  The power tools provide a really handy alternative in the form of the tfpt uu command.

tfpt uu is a command line utility that will undo the check out on all of the unchanged files in a workspace at the same time.  The default behavior will update the workspace and look for unchanged files before prompting to undo them but that behavior can be controlled through some command line switches.  The most useful switches are /recursive, which checks filespecs with full recursion, and /noget, which bypasses getting latest on the workspace before comparing the files.  Full documentation for tfpt uu can be found using the /? switch.

The tfpt uu command has become part of my workflow.  I almost always use it just before a check in and will often run it just to help keep my workspace tidy.  If only there were a similar command for my desk.

Removing Unversioned Files

It is not uncommon for unversioned files to creep into our workspace.  These unversioned files could come from any number of sources: compiled binaries, log files, files we intended to add but changed our mind, etc…  The TFS Power Tools offer an easy way to clean up these files through the tfpt treeclean command.

tfpt treeclean will scan the workspace for all files and folders not under version control and remove them.  A /preview option is available to show which files will be affected without deleting anything.

Getting Latest on Check Out

One thing we noticed with TFS is that it doesn’t automatically get the latest version of a file when it is checked out.  More often than not we’d go to check in a change only to have TFS step in and inform us about changes in the repository that needed to be resolved.  We’d then need to get latest on those files to merge the changes and resolve any conflicts before TFS would allow us to check it in.

We found that in many instances there would be multiple check ins for a file between the last time we got latest and when we checked it out.  This meant that before we even got started making any changes we were already going to have problems trying to check in the file.  So why doesn’t TFS automatically get the latest version upon checkout?

Doug Neumann from the TFS team at Microsoft explained the rationale in a forum post:

It turns out that this is by design, so let me explain the reasoning behind it.  When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control.  Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.

As a developer working in a workspace, you are isolated from the changes being made by other developers.  You control when you want to accept changes from other developers by performing a get operation as appropriate.  Ideally when you do this, you’ll update the entire configuration of source, and not just one or two files.  Why?  Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you’ve still got a consistent snapshot of source that is buildable and testable.

This is why the checkout operation doesn’t perform a get latest on the files being checked out.  Updating that one file being checked out would violate the consistent snapshot philosophy and could result in a configuration of source that isn’t buildable and testable.  As an alternative, Team Foundation forces users to perform the get latest operation at some point before they checkin their changes.  That’s why if you attempt to checkin your changes, and you don’t have the latest copy, you’ll be prompted with the resolve conflicts dialog.

The rationale is understandable but seems a bit idealistic.  Most developers I know aren’t getting latest on the full branch before each and every check out.  Instead they’re doing it once every day or so and otherwise not thinking about it.  This is why there’s an option to change the default behavior.

Note: The steps discussed below are for enabling getting latest on check out on a developer’s workstation.  Refer to http://msdn.microsoft.com/en-us/library/bb385983.aspx for information regarding enabling the option at the server level.

To change this setting:

  1. Open the options dialog in Visual Studio (Tools –> Options)
  2. Expand the Source Control node
  3. Select the Visual Studio Team Foundation Server node
  4. Click the check box for “Get latest version of item on check out”

Enabling the option won’t eliminate the need to merge changes, since it’s entirely possible, perhaps even likely, that another user will check in changes prior to your check in, but it does eliminate the need to merge with changes that were already in the repository when you checked the file out.  This option also won’t eliminate the need to get latest on any file dependencies but if you were only getting one file at a time prior to check in anyway does that really matter?

Managing Shelvesets

Most version control systems have some mechanism for putting aside or sharing changes without checking them in and TFS is no exception.  Rather than creating patch files like we do in Subversion we “shelve” changes in TFS.

Shelving changes in TFS creates a shelveset.  Shelvesets are useful for preventing unrelated changes from conflicting or transferring code to another developer without checking them in.  In some ways I prefer shelving changes to creating patch files.  The two biggest advantages of shelvesets are:

  • No need to remember where the patch file was saved since all shelvesets are in the same place
  • Sharing changes is easier than dealing with permissions on network shares or e-mailing attachments

Creating a shelveset is easy because there are several readily available ways to do it:

  • File -> Source Control -> Shelve Pending Changes…
  • From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Shelve button.
  • Right click on a file or folder in Solution Explorer and select Shelve Pending Changes…
  • Right click on a file or folder in Source Control Explorer select Shelve Pending Changes…

Each of these options will display the Shelve dialog.  The dialog is virtually identical to the Check In dialog so I won’t describe its operation here.

Retrieving a shelveset isn’t quite as easy as creating one mainly because there are half as many ways to do it and they can be a bit difficult to find:

  • File -> Source Control -> Unshelve Pending Changes…
  • From the Pending Changes window (View -> Other Windows -> Pending Changes) switch to the Source Files panel and click the Unshelve button.

By default the Unshelve dialog lists the shelvesets owned by the current user.  Finding shelvesets from other users is just a matter of replacing the user name in the Owner name box.  Unfortunately there is no browse for user button so you’ll need to know the user name ahead of time.  Clicking the details button will display a list of the changes included in the shelveset and even lets you compare the shelveset version against the local copy.  Once you’ve found the desired shelveset click the unshelve button but beware, the built-in unshelve function will fail when there are local changes to any of the files in the shelveset.

The power tools provide a solution to this problem as well with the tfpt unshelve command.  tfpt unshelve is a command line utility that solves the lack of merge support.  In its most basic form it displays a dialog very similar to the built-in Unshelve dialog.  The only immediately noticeable difference is the lack of a delete button but there are more differences like the lack of compare support.

When we no longer need a shelveset we can delete it.  Shelvesets can be deleted from either the built-in Unshelve dialog or from the command line using the tf shelve command with the /delete option.

Tracking Changes

Providing tools for determining where a change came from is a key component of any version control system and this is an area where I think TFS excels.  TFS’s built-in tools for checking change history are actually very good.

Viewing history on a file or folder shows a list of changes typical to most version control systems.   The list itself isn’t particularly interesting.  It includes the changeset id, the name of the user that checked in the change, a timestamp, and the changeset comment.  As expected, we can also get a particular version.  We’ll examine some of the more interesting aspects of history and changesets later in the Program Management section but for now we’ll just look at the Track Changeset feature.

Track Changeset allows us to visually determine where changes came from in a multi-branch repository.  When a file is branched or merged Track Changeset will graphically display the originating and target branch(es) in either a hierarchy or timeline view.  By seeing how and when changes are propagated across the branches we can easily determine when and if a change was successfully applied to a given branch.

Also related to finding the source of a change is the annotation feature.  Annotating a file is roughly the same as the blame feature in Subversion.  Both systems show the contents of the file along with the changeset (revision in SVN).  What sets TFS’s annotate apart though is its ability to drill down into any changes from merge operations to show where a change really came from.

Although the power tools offer a separate annotation utility this is one place I have to recommend the built-in tool.  The power tools viewer is nice in that it shows much of the same information as the built-in viewer but doesn’t include any way to view changeset details nor does it drill down into merges.  The overall appearance of the built-in tool is much more polished as well.

Project Management

Although version control is a major component of TFS it’s just one part of a much larger system.  TFS is really an end-to-end project management system.  If you’re only using TFS for version control you’re missing out on the visibility into the overall health of a project that the project management components can give you.

Linking Items

One way that TFS provides visibility into a project is by allowing us to define links between objects.  Virtually any type of work item can be linked to another work item and TFS also allows linking to changesets or URLs.  Linking individual work items will help define the project plan and structure which is important from a reporting perspective but from a development perspective having links between work items and changesets is invaluable.

In the past my team has used a separate application for defect tracking.  QA would enter defects into the tracking system and if the issue required a code change we’d have to note the defect number in the revision comments and we’d quite often note the revision number in the notes on the defect.  If we ever needed to track down a change later we’d need to know one of two pieces of information (defect number or revision number) and visit two systems to find why a change was made or what the change actually was.  By linking changesets to work items in TFS we only ever need to visit one system to find the answer.  Furthermore, if the project is structured correctly we can easily access additional details about the change by drilling deeper into other links.

Linking objects is very easy.  There are several ways to link objects but for most of them we’ll usually add links by editing a work item.  On the edit work item screen is a tab labeled “All Links” that contains a toolbar with several buttons.  The two buttons we’re interested in are the “New” and “Link to” buttons.

As its name implies, the “New” button allows us to create a new work item and automatically create the link.  Through the new linked item dialog we can select the link type and corresponding link details.  A visual representation of what the link will look like is available for each link type.  We can also specify which work item type we’re linking to for each link type.  One thing we can’t do in this dialog is link to a changeset because changesets are only created by checking in changes.  In order to link to a changeset we need to look at the “Link to” button.

While the “New” button allows us to link to new items, the “Link to” button allows us to link to existing items.  Most of the procedure is the same but linking to existing items gives us a few more options such as linking to changesets, URLs, test cases, or even individual versioned items.  This is great for linking to other work items but there’s a better, less intrusive way to link changesets to work items.

Sidebar:

When my team started using TFS for project management my director requested that the pilot group update the time remaining and time completed fields on our tasks at the end of each day. This was met with a collective groan and you can probably guess how frequently time was updated.

One reason for the poor adoption was that we weren’t conditioned to record our time anywhere, so many of us (myself included) simply didn’t think about it despite his repeated requests. Even when we remembered, we had to think back over the day, decide how much time to deduct for lunch and other interruptions, then try to remember approximately how much time we’d spent on each task. Wouldn’t it be nicer to record this information while it was still fresh in our minds rather than trying to reconstruct it at the end of the day?

When we’re selecting work items during check in we can also open them for editing with minimal interruption to the task at hand.  Once I discovered this I started updating all of my tasks at check in rather than at the end of the day.  With this slight change my tasks stayed as up to date and accurate as possible.
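The same habit is also easy to script if opening the work item form still feels like too much friction.  Here’s a minimal sketch, again assuming the TFS 2010 client object model; the collection URL, work item id, and field values are placeholders, and the “Completed Work”/“Remaining Work” names come from the standard MSF templates, so a customized process template may use different fields.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class UpdateTaskTime
{
    static void Main()
    {
        // Hypothetical collection URL and task id.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        WorkItem task = store.GetWorkItem(1234);
        task.Open();

        // Standard MSF scheduling fields; adjust the names if your template differs.
        task.Fields["Completed Work"].Value = 3.5;   // hours spent on this change
        task.Fields["Remaining Work"].Value = 2.0;   // hours still expected
        task.Save();
    }
}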

When we’re ready to check in some changes our workflow generally looks something like:

  1. Open the Check In dialog box
  2. Enter a comment describing the change (you are entering comments, right?)
  3. Verify that the correct files are selected
  4. Click the Check In button

We can link work items to the resulting changeset from the Check In dialog box with a simple modification to the process:

  1. Open the Check In dialog box
  2. Enter a comment describing the change (you are entering comments, right?)
  3. Verify that the correct files are selected
    — New Steps —
  4. Click the Work Items button in the left panel
  5. Click the check box for the corresponding work items (You may need to select a different query or enter some search criteria to find them)
  6. Select the appropriate Check-in Action (Typically you’ll be selecting the Associate option)
    — End New Steps —
  7. Click the Check In button

Despite having three extra steps, the revised process usually only adds a few extra clicks.  Even if it’s necessary to refine the work item list, it’s far less disruptive to create the association between the work item and changeset from the Check In dialog than from the work item itself.  Creating the link at check-in, as part of our normal process, also ensures that we actually do establish it rather than waiting and possibly forgetting.

Once links are established it’s very easy to navigate them from the All Links tab.  The links list is grouped by link type and all groups are expanded by default.  Double-clicking any item will open the appropriate window for that item.  For changesets we get the changeset details window with all of its capabilities.

If it’s not immediately apparent how powerful these links are, just work with them for a bit.  As more links are established, finding data relevant to virtually any project-related question will become easier.
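The same association can be made programmatically through the version control object model, which is handy for scripted or automated check-ins.  The sketch below is a rough illustration assuming the TFS 2010 client assemblies; the server URL, workspace path, work item id, and comment are all placeholders rather than anything from a real project.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CheckInWithAssociation
{
    static void Main()
    {
        // Hypothetical collection URL and local workspace path.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var versionControl = collection.GetService<VersionControlServer>();
        Workspace workspace = versionControl.GetWorkspace(@"C:\src\MyProject");

        var store = collection.GetService<WorkItemStore>();
        WorkItem task = store.GetWorkItem(1234);   // hypothetical task id

        PendingChange[] changes = workspace.GetPendingChanges();

        // Associate (rather than Resolve) the task with the new changeset,
        // mirroring step 6 of the dialog-based workflow above.
        var association = new WorkItemCheckinInfo(task, WorkItemCheckinAction.Associate);

        workspace.CheckIn(changes,
                          "Fix null reference in order totals",  // check-in comment
                          null,                                  // check-in notes
                          new[] { association },                 // associated work items
                          null);                                 // policy override
    }
}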

Microsoft Project Integration

Linking work items in TFS forms the foundation of a project’s structure.  With work items we define time estimates, assign resources, identify dependencies, and track project status.  We can view and manage all of this information through Visual Studio, but for anything more than simple tasks that can be quite cumbersome.  One of the most important features from a project management perspective is the integration with Microsoft Project.

Note: In order for Microsoft Project to interact with TFS, Team Explorer must be installed.  Refer to http://msdn.microsoft.com/en-us/library/ms181675.aspx for more information.

From inside Visual Studio we can open work item queries in Project either by right-clicking a query in the Team Explorer window and selecting Open in Microsoft Project, or by opening the query, clicking the Open in Microsoft Office button, and selecting the Open in Microsoft Project option.  We can also open an individual work item and click the Open Work Item in Microsoft Project button.  Any of these actions will open Project and load a new worksheet containing the appropriate work items.

From within Project we create a new worksheet and select a Team Project through either the Team toolbar or menu.  Either option will display the familiar Connect to Team Project dialog.  After selecting a project we need to get the work items (again through either the menu or toolbar).  The Get Work Items dialog allows us to find work items by saved query, by ID, or by searching on title and type.  Once the list is loaded we can bring the work items into Project by selecting individual items or all of them.
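The Get Work Items dialog is essentially running a work item query for us.  If you ever need the same list outside of Project, a short WIQL query against the object model returns it; the sketch below assumes the TFS 2010 client assemblies, and the collection URL, project name, and filter criteria are all placeholders.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ListOpenTasks
{
    static void Main()
    {
        // Hypothetical collection URL and project name.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // Roughly what the Get Work Items dialog does behind the scenes.
        string wiql =
            "SELECT [System.Id], [System.Title], [System.AssignedTo] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.WorkItemType] = 'Task' " +
            "AND [System.State] <> 'Closed'";

        foreach (WorkItem item in store.Query(wiql))
        {
            Console.WriteLine("{0}: {1}", item.Id, item.Title);
        }
    }
}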

The work items load into the familiar Gantt chart and we’re free to work with them as if we were working in any other Project worksheet.  The big difference between working with TFS-connected data and a standard worksheet is how data is refreshed and saved.

Rather than saving the worksheet to save changes to the work items we need to publish them back to the server using the Publish toolbar button (or Publish Changes menu item).  Similarly, if we want to update the worksheet with the latest data we’ll need to click the Refresh toolbar button (or Refresh menu item).

Aside from managing work items the Project integration allows us to manage Areas and Iterations.  These can be found by clicking the Edit Areas and Iterations… item under the Team menu.

Alerts

With developers making code changes, managers updating project plans, QA entering defects, scheduled builds, etc… there is a ton of activity going on within TFS at any given time.  TFS includes an alert system that helps us keep up with all of it.

Despite the appearance of the dialog box used to create some of the more common alerts, the alert system is actually quite robust.  Behind the basic dialog box is a full featured expression engine used to define the criteria that must be met in order for a notification to be sent.

Creating basic alerts for monitoring files and folders or work items is pretty straight-forward.  Once an alert is configured notifications will be sent whenever changes are made to the associated item.

Files/Folders

  1. Open Source Control Explorer
  2. Locate the File or folder to be monitored
  3. Right-click the desired item and select Alert on Change…
  4. Verify the field values and change any where the default is not desired
  5. Click OK

Work Items

  1. Open the results window for a work item query
  2. Locate the work item to be monitored
  3. Right-click the desired item and select Alert on Change…
  4. Verify the field values and change any where the default is not desired
  5. Click OK

Alerts are managed through the Alerts Explorer window, which is accessed from the Team menu.  Existing alerts can be modified or deleted, and we can also create new, more sophisticated alerts.  I encourage clicking around within the Alerts Explorer window and creating a variety of alerts to see all of the possibilities.
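Subscriptions can also be created from code through the framework’s event service, which is useful when the same alert needs to be set up for a whole team.  The sketch below is only a rough outline: I haven’t verified the exact SubscribeEvent overload, the DeliveryPreference members, or the filter syntax against the 2010 assemblies, so treat every name here (and the address, event type, and filter expression) as an assumption to confirm before relying on it.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;

class CreateCheckinAlert
{
    static void Main()
    {
        // Hypothetical collection URL, e-mail address, and filter expression.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var eventService = collection.GetService<IEventService>();

        var preference = new DeliveryPreference
        {
            Address = "someone@example.com",        // where the notification goes
            Schedule = DeliverySchedule.Immediate,  // send as soon as the event fires
            Type = DeliveryType.EmailHtml           // HTML e-mail
        };

        // Subscribe to check-in events for a single team project.
        // Both the event type name and the filter syntax are assumptions to verify.
        eventService.SubscribeEvent("CheckinEvent",
                                    "\"TeamProject\" = 'MyProject'",
                                    preference);
    }
}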

Wrap Up

One of the ideas behind TFS is to facilitate communication across the team, and by using the tools more intelligently we can make sure TFS has the information it needs to accomplish that goal and help us be more productive.  TFS certainly isn’t without its pain points, but by incorporating the tips listed above we can reduce the impact of some of them or work around them altogether.

The TFS Power Tools will help keep the check-out model out of the way.  Changing the default compare and merge tools makes resolving conflicts easier.  Shelvesets let us store away or share a batch of changes without checking them in.  Linking work items with changesets and other work items helps us verify that the project is on track and answer questions about what happened later on.  Alerts proactively keep us informed of what’s going on.  None of these are all that useful on their own but when used in combination their power multiplies and their value becomes apparent.