Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short time, so I am by no means an expert, but this post contains a few of my observations so far.

For those of you who are not familiar with it, Selenium is a browser automation system that allows you to write integration tests which control a browser and check your site's responses. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the user's home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had previously trialled this version, attempting to have our QA team record and execute scripts as part of functional and regression testing. We found a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. it cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts be written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but it also meant that development best practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build, so it seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for every browser, so we made use of NUnit's generic test fixture feature to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture attribute for each driver type. This instructs NUnit to instantiate and run the class for each of the specified generic types, which in turn means any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}

This does have some disadvantages when it comes to debugging tests, as there are always 4 tests with the same method name, but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.
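
For reference, the DriverFactory used in the SetUp method above is our own helper rather than part of Selenium. A minimal sketch of the idea (the real one also applies timeouts and other per-browser configuration, so treat this as illustrative only) might be:

public class DriverFactory
{
	public static readonly DriverFactory Instance = new DriverFactory();

	public IWebDriver CreateWebDriver<TWebDriver>()
		where TWebDriver : IWebDriver
	{
		//create the requested driver with its default options;
		//Activator avoids needing a new() constraint on TWebDriver
		return (IWebDriver)Activator.CreateInstance(typeof(TWebDriver));
	}
}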

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

Part of the cause was that the “record” feature does not always choose the most sensible selector to identify an element. We assumed that by hand-picking selectors we would automatically improve their robustness (is that a word?), but when a selector did change we still did not want to have to update it in a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password] then click Submit) we are able to encapsulate them on the Page class instead of in private methods within the test fixture (see the sketch below).
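
To illustrate that last point, a helper for the whole login flow can sit on the LoginPage class alongside the element methods shown above. This is only a sketch – the LoginButton selector and the method names are hypothetical, not our real markup:

public IWebElement LoginButton()
{
	return _driver.FindElement(By.CssSelector("#login-button"));
}

public void LoginAs(string username, string password)
{
	//fill in the credentials and submit the form
	UsernameInput().SendKeys(username);
	PasswordInput().SendKeys(password);
	LoginButton().Click();
}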

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environment information in code to improve reuse and readability. This is only a minor point, but we did not want to have any URLs, usernames or configuration options hard-coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
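
The backing classes for this are nothing special; a rough sketch (the names and shape here are illustrative rather than our exact code) might be:

public static class TestEnvironment
{
	public static string BaseUrl { get; set; }

	public static readonly EnvironmentUsers Users = new EnvironmentUsers();
}

public class EnvironmentUsers
{
	public EnvironmentUsers()
	{
		//values would be populated from configuration (see the next section)
		AdminUsers = new TestUser[0];
	}

	public TestUser[] AdminUsers { get; private set; }
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}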

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files, switching on the DEBUG conditional compilation symbol (#if DEBUG).
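
A simplified sketch of what that looks like is below; the appSetting keys are invented for this example, and ConfigurationManager requires a reference to System.Configuration:

public static class TestEnvironmentLoader
{
	public static void Load()
	{
#if DEBUG
		//debug builds run against the local developer environment
		TestEnvironment.BaseUrl =
			ConfigurationManager.AppSettings["Selenium.LocalBaseUrl"];
#else
		//release builds run against the shared test environment
		TestEnvironment.BaseUrl =
			ConfigurationManager.AppSettings["Selenium.TestBaseUrl"];
#endif
	}
}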

Other Gotchas

  • The 64-bit IE driver for Selenium WebDriver is incredibly slow! Uninstall it and install the 32-bit one instead
  • Browser locale can – in most cases – be set using a flag when creating the driver (see the sketch below). One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!
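
As a rough example of the locale flag using the .NET bindings (the language code is arbitrary and the exact options may vary between driver versions):

//Chrome: pass a command-line switch via ChromeOptions
var chromeOptions = new ChromeOptions();
chromeOptions.AddArgument("--lang=fr");
var chrome = new ChromeDriver(chromeOptions);

//Firefox: set the accept-language preference on a custom profile
var firefoxProfile = new FirefoxProfile();
firefoxProfile.SetPreference("intl.accept_languages", "fr");
var firefox = new FirefoxDriver(firefoxProfile);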

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).


Keep IIS Express Running in Visual Studio 2013

Since upgrading to Visual Studio 2013 I’ve noticed a change in the behaviour of IIS Express. It still starts up when you start debugging a web project – same as it always has – but since the upgrade it automatically shuts down when you stop debugging.

As with most IDE behaviour, I was pretty familiar with the old way and so I found it incredibly frustrating whenever this happened.  The good news is that there’s a very simple solution: disable Edit and Continue for the project in the Properties dialog.

[Screenshot: disabling Edit and Continue in the project Properties dialog]

Hopefully this will save someone else some pain!

Creating NuGet packages with Grunt

Grunt is a JavaScript task runner that can be used to automate (among other things) the various tasks around building and deploying JavaScript: concatenation, minification, JSHint, QUnit and so on.

There are plenty of example configurations for the tasks above – particularly this example from the grunt documentation – but I wanted to add one more step at the end of my build: creating a NuGet package.

Setup

For this example, let’s assume that we have the following structure:

 
+ src
  - source1.js
  - source2.js
+ dist
  - compiled.js
  - compiled.min.js
- gruntfile.js
- package.json

Here we have 2 source files (under src) that have been compiled into compiled.js and compiled.min.js under the dist folder.  For the purposes of this example it doesn’t matter whether they have been created manually or have been generated by an earlier grunt task; we just want to make sure that they are there.

In addition to these we have gruntfile.js and package.json which define the available grunt tasks and the package.  The grunt docs cover these files in detail so I’m not going to go over the initial contents, but they are included below for reference.

package.json

{
  "name": "my-library",
  "version": "0.1.2",
  "devDependencies": {
    "grunt": "~0.4.1"
  }
}

The important things to note here are the definition of the package name and package version – we will use those later.

gruntfile.js

module.exports = function (grunt) {

    grunt.initConfig({
        pkg: grunt.file.readJSON("package.json")
        //other task configuration here
     });

    grunt.registerTask("default", [/*other tasks here/*]);

};

The gruntfile would ordinarily contain the definitions of all of the other tasks that have been configured, but for clarity I have omitted them.

The NuGet Definition

The first step in automatically generating our NuGet package is to create a definition for the package in a Nuspec file.  This is an XML document with some pretty much self-explanatory fields:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
	<metadata>
		<!--
		ID used to identify the nuget package - 
		e.g. ProxyApi -> https://www.nuget.org/packages/ProxyApi
		-->
		<id>my-library</id>
		
		<!--
		Package version - we are going to leave this blank for now
		-->
		<version>0.0.0</version>
		
		<!-- Author, Owner and Licensing details -->
		<authors>Steve Greatrex</authors>
		<owners>Steve Greatrex</owners>
		<requireLicenseAcceptance>false</requireLicenseAcceptance>
		<copyright>Copyright 2013</copyright>
		
		<!-- General Information -->
		<description>Helper extensions for KnockoutJs</description>
		<releaseNotes>-</releaseNotes>
		<tags>JavaScript KnockoutJS</tags>
		
		<!-- 
		Packages (incl. minimum version) on which this package
		depends 
		-->
		<dependencies>
		  <dependency id="knockoutjs" version="2.1.0" />
		  <dependency id="jQuery" version="1.8.0" />
		</dependencies>
	</metadata>
	<files>
		<file src="dist\compiled.js" target="content\scripts" />
		<file src="dist\compiled.min.js" target="content\scripts" />
	</files>
</package>

Important things to note here:

  • Dependent packages are listed under package/metadata/dependencies.  These define which other NuGet packages will be installed as prerequisites to your package
  • Package outputs (i.e. what will get inserted into the project) are listed under package/files.
    • File locations are relative to the path from which we will run NuGet.exe
    • For content files (i.e. not a referenced assembly), paths are relative to a special “content” path which refers to the root of the project.  In the example above, the files will be added to the scripts folder in the project root

We can now pass this file to NuGet.exe and it will generate a .nupkg package for us that is ready to be published.  To test this, let’s put a copy of NuGet.exe alongside the saved my-library.nuspec file and run the following command:

nuget pack my-library.nuspec

Voila – my-library.0.0.0.nupkg has been created!

Invoking NuGet from Grunt

In order to invoke this command from our grunt script we will need to create a task.  To keep things simple we will use the basic syntax for the custom task:

grunt.registerTask("nuget", "Create a nuget package", function() {
    //do something here
});

The implementation of the task just needs to call NuGet.exe as we did above, but with a couple more parameters.  We can achieve this using grunt.util.spawn to asynchronously invoke a child process.

grunt.registerTask("nuget", "Create a nuget package", function () {
	//we're running asynchronously so we need to grab
	//a callback
	var done = this.async();
	
	//invoke nuget.exe
	grunt.util.spawn({
		cmd: "nuget.exe",
		args: [
			//specify the .nuspec file
			"pack",
			"my-library.nuspec",

			//specify where we want the package to be created
			"-OutputDirectory",
			"dist",

			//override the version with whatever is currently defined
			//in package.json
			"-Version",
			grunt.config.get("pkg").version
		]
	}, function (error, result) {
		//output either result text or error message...
		if (error) {
			grunt.log.error(error);
		} else {
			grunt.log.write(result);
		}
		//...and notify grunt that the async task has
		//finished
		done();
	});
});

Things to note here:

  • This code should appear in gruntfile.js after the call to initConfig but before the call to registerTask
  • Arguments to NuGet.exe are specified in the args array
  • The -Version argument is used to override the value from the .nuspec file with whatever has been loaded from package.json.  This avoids the need to define the version number in multiple locations.

With this in place we can add the nuget task to the default task definition from our original gruntfile and we are ready to go.

grunt.registerTask("default", [/*other tasks here*/ "nuget"]);

Fire up the shell, run the grunt command, and a NuGet package will be created for you:

$ grunt
Running "nuget" task
Attempting to build package from 'my-library.nuspec'.
Successfully created package 'dist\my-library.0.1.2.nupkg'.
Done, without errors.

Combine this with concatenation, minification, unit tests and static code analysis and you have a very quick one-hit process to go from source to publication!

Publish an Azure Web Site from the Command Line

Azure Web Sites, though still in preview, are a great way of quickly hosting a scalable site on Windows Azure without the overhead  of setting up and maintaining a virtual machine.

One of the great features is the ability to use integrated Publish tools to push the project to the server. No need to manually build, package, transfer and deploy your code – Visual Studio handles everything for you.

Publishing from Visual Studio

Publishing refers to the process of deploying your ASP.NET web site to an Azure server, and when working from Visual Studio it is very simple.  A few good walkthroughs are available elsewhere so I won’t repeat them here; to summarise:

  1. Download publish settings from the Windows Azure management portal
  2. Configure a publish profile in the ASP.NET project
  3. Run the Publish wizard to push to Azure

This is very useful when getting started, but in the real world you don’t want to publish to the server from Visual Studio; you want to do it from your build server having run unit tests, code coverage, etc. etc.

My build server is running TeamCity, so using MSBuild from the command line seems to be a good route to take. Let’s take a look at how we can get MSBuild to run that same publication for us.

Publishing using MSBuild

Of the 3 steps for publishing from Visual Studio, the first two are setup steps that need only be performed once.  As the output from these steps can be saved (and checked into source control), we are only interested in invoking step 3, but before we can do that we need to make a couple of amendments to the publish profile from step 2.

Modifying the Publish Profile

In step 2 above we created a publish profile that is located in the Properties\PublishProfiles folder:

MyProject
 + Properties
   + PublishProfiles
     - DeployToAzure.pubxml

Note: by default, the pubxml file is named something like [AzureSiteName] – Web Deploy.pubxml; I have renamed it here to remove the reference to the site name.

Let’s take a look at that generated XML.

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <SiteUrlToLaunchAfterPublish>http://[AzureSiteName].azurewebsites.net</SiteUrlToLaunchAfterPublish>
    <MSDeployServiceURL>waws-prod-db3-001.publish.azurewebsites.windows.net:443</MSDeployServiceURL>
    <DeployIisAppPath>[AzureSiteName]</DeployIisAppPath>
    <RemoteSitePhysicalPath />
    <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <UserName>$[AzureSiteName]</UserName>
    <_SavePWD>True</_SavePWD>
    <PublishDatabaseSettings>
	  <!-- omitted for brevity -->
    </PublishDatabaseSettings>
  </PropertyGroup>
  <ItemGroup>
    <MSDeployParameterValue Include="$(DeployParameterPrefix)DefaultConnection-Web.config Connection String" />
  </ItemGroup>
</Project>

Most of the properties here are self-explanatory and have obviously come from the fields that were filled out during the publish wizard in Visual Studio.  We need to make 2 changes to this XML in order to use it from the command line:

  1. Specify the password from the publish settings
  2. Allow untrusted certificates (no idea why, but we get exceptions if we skip this step)

It goes without saying that, because we are going to save a password, you should be careful where you save this file.  If there is any risk of this publish profile becoming externally visible then do not save the password.

Assuming that you have decided it is safe to store the password, we need to find out what it should be.  Go back to step 1 in the original Visual Studio instructions and find the downloaded publish settings file named [AzureSiteName].azurewebsites.net.PublishSettings.

This file is plain XML and contains profile details for both Web Deploy and FTP publication.  Locate the userPWD attribute on the Web Deploy node and add a Password node with this value to the publish profile. We can also add the AllowUntrustedCertificate node needed to avoid certificate exceptions.

<PropertyGroup>
  <!-- ...as before... -->
	
  <Password>[password from .PublishSettings file]</Password>
  <AllowUntrustedCertificate>True</AllowUntrustedCertificate>
</PropertyGroup>

That’s all we need to change in the publish profile; now let’s take a look at the MSBuild command.

The MSBuild Command

Now that we have the publish profile configured, the MSBuild command is very simple:

msbuild MyProject.sln
 /p:DeployOnBuild=true
 /p:PublishProfile=DeployToAzure
 /p:Configuration=Release

These three properties tell MSBuild to deploy once the build completes, to use the DeployToAzure publish profile we have been modifying, and to build in Release mode (assuming you want to publish a release build).

You can run this from the command line on your local machine or as part of your build process and the site will be rebuilt and deployed directly to Azure!

Visual Studio 2010: Massive Build Performance Improvements

The solution that I work on daily in VS2010 has around 70 projects, and over time it has begun to take longer and longer and loooonger to build.  When I had enough time to walk away, make a cup of tea and come back between pressing F5 and seeing the login screen, I figured I should do something about it.

Disable Code Analysis

The first major improvement was to set up a build configuration that disabled code analysis (FxCop) when I was working day to day.  I still want code analysis running on the build server, but not every time I try to run the application.

Firstly, create a new build configuration:

[Screenshot: creating a new build configuration]

Then go through each project configuration and disable code analysis in the project properties:

[Screenshot: disabling Code Analysis in the project properties]

VS Build Logging

Disabling FxCop made a bit of a difference, but far and away the biggest improvement was down to changing the logging verbosity in the Visual Studio options (under Tools -> Options -> Projects and Solutions -> Build and Run):

[Screenshot: the MSBuild output verbosity setting in Visual Studio options]

Apparently a huge amount of my build time was down to Visual Studio telling me how long it was taking to build!