Faking Mouse Events in D3

D3 is a great library, but one of the challenges I have found is unit testing anything based on event handlers.

In my specific example I was trying to show a tooltip when the user hovered over an element.

hoverTargets
 .on('mouseover', showTooltip(true))
 .on('mousemove', positionTooltip)
 .on('mouseout', closeTooltip);

D3 doesn’t currently provide a way to trigger a mouse event programmatically, so in order to test the behaviour I have had to roll my own very simple helper to invoke these events.

$.fn.triggerSVGEvent = function(eventName) {
 var event = document.createEvent('SVGEvents');
 event.initEvent(eventName,true,true);
 this[0].dispatchEvent(event);
 return $(this);
};

This is implemented as jQuery plugin that directly invokes the event as if it had come from the browser.

You can use it as below:

$point
 .triggerSVGEvent('mouseover')
 .triggerSVGEvent('mousemove');

It will probably change over time as I need to do more with it but for now this works as a way to test my tooltip behaviour.
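
For illustration, a QUnit-style test could drive the handlers through this helper along the following lines (the .tooltip selector, the .data-point class and the visibility check are assumptions made for the sake of the example, not code from my actual test suite):

test("tooltip is shown on mouseover and hidden on mouseout", function () {
  var $point = $('svg .data-point').first();

  //dispatch the events as if the user had hovered over the point
  $point
    .triggerSVGEvent('mouseover')
    .triggerSVGEvent('mousemove');
  ok($('.tooltip').is(':visible'), 'tooltip visible after mouseover');

  //...and then moved the mouse away again
  $point.triggerSVGEvent('mouseout');
  ok(!$('.tooltip').is(':visible'), 'tooltip hidden after mouseout');
});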


Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the users home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. it cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build, so it seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the Generic Test Fixture nUnit feature to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture attribute for each driver type. This instructs nUnit to instantiate and run the class for each of the specified generic type arguments, which in turn means any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

Part of the cause was that the “record” feature does not always choose the most sensible selector to identify the element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but even when a selector did need to change we did not want to have to update it in a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password], then click Submit) we are able to encapsulate them on the Page class instead of in private methods within the test fixture.

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environment variables in code to improve reuse and readability. This is only a minor point, but we did not want any URLs, usernames or configuration options hard-coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files, switching on the DEBUG compilation symbol (#if DEBUG).

Other Gotchas

  • The 64-bit IE driver for Selenium WebDriver is incredibly slow! Uninstall it and install the 32-bit one instead
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these observations will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).

Protecting your CouchDB Views

If you work with a SQL or other RDBMS database you most likely have your schema backed up somewhere under source control.  Maybe it’s a bunch of SQL scripts, maybe it’s the classes from which you generated your Entity Framework schema, but you almost certainly have some way of restoring your DB schema into a new database (at least I hope that you do!).

But what about CouchDB?

CouchDB, as anyone who has read the first sentence of a beginner’s guide will know, is a Non-Relational Database and so it does not have a schema.  All of the data is stored as arbitrary JSON documents which can (and do) contain data in a wide range of formats.

The problem is that whilst there is no schema to “restore” into a new database, there is another very important construct: views.

CouchDB Views

Views within CouchDB define how you query the data.  Sure, you can always fall back to basic ID-lookup to retrieve documents, but as soon as you want to do any form of complicated (i.e. useful) querying then you will most likely need to create a view.

Each view comprises 2 JavaScript functions: a map function and an optional reduce function.  I don’t want to go into a lot of detail on the map-reduce algorithm or how CouchDB views work under the covers (there are plenty of other resources out there) but the important thing here is that you have to write some code that will play a very significant role in how your application behaves and that should be in source control somewhere!

Storing Views in Source Control

In order to put our view code under source control we first need to get it into a format that can be saved to disk.  In CouchDB, views are stored in design documents and the design documents are stored as JSON, so we can get a serialized copy of the view definitions by just GETting the design document from couch:

curl http://localhost:5984/databaseName/_design/designDocumentName

Pass the output through pretty-print and you will see the contents of the design document in a JSON structure:

{
   "_id": "_design/designDocumentName",
   "_rev": "1-47b20721ccd032b984d3d46f61fa94a8",
   "views": {
       "viewName": {
           "map": "function (doc) {\r\n\t\t\t\tif (doc.value1 === 1) {\r\n\t\t\t\t\temit(\"one\", null);\r\n\t\t\t\t} else {\r\n\t\t\t\t\temit(\"other\", {\r\n\t\t\t\t\t\tother: doc.value1\r\n\t\t\t\t\t});\r\n\t\t\t\t}\r\n\t\t\t}"
        }
   },
   "filters": {
       "filterName": "function () {}"
   }
}

This is, at least, a serialized representation of the source for our view, and there are definitely some advantages to using this approach.  On the other hand, there are quite a few things wrong with using this structure in source control:

Unnecessary Data
The purpose of this exercise is to make sure that the view code is safely recoverable. Whilst there is debatably some use in storing the ID, the revision (_rev) field refers to the state of a particular database, may vary between installations, and shouldn’t be needed.

Functions as Strings
The biggest problem with this approach is that the map, reduce and filter functions are stored as strings.  You may be able to put up with this in simple examples, but as soon as they contain any complexity (or indentation, as seen above) they become completely unreadable.  Tabs and newlines are all concatenated into one huge several-hundred-character string, all stored on one line.  Whilst this is not a technical issue (you could still use these to restore the views) it makes any kind of change tracking impossible to understand – every change is on the same line!

As well as the readability issues we also lose the ability to perform any kind of analysis on the view code.  Whether that is static analysis (such as jsLint), unit testing or some-other-thing, we cannot run any of them against a string.

An Alternative Format

Instead of taking a dump of the design documents directly from CouchDB, I would recommend using an alternative format geared towards readability and testability.  You could be pretty creative in exactly how you wanted to lay this out (one file per design document, one file per view…) but I have found that the structure below seems to work quite well:

exports.designDocumentName = {
	views: {
		viewName: {
			map: function (doc) {
				//some obviously-fake logic for demo purposes
				if (doc.value1 === 1) {
					emit("one", null);
				} else {
					emit("other", {
						other: doc.value1
					});
				}
			}
		}
	},
	filters: {
		filterName: function () { }
	}
};

exports.secondDesignDocument = {
	//...
};

This has several advantages over the original format:

  • It is much easier to read!  You get syntax highlighting, proper indentation and the other wonderful features of your favourite code editor
  • There is no redundant information stored
  • jsLint/jsHint can easily be configured to validate the functions
  • By using the CommonJS exports object, the code is available to unit tests and other utilities (a quick test sketch follows, and more on the utilities below)
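
As an illustration of that last point, here is a minimal sketch (not from the original tooling) of unit testing the exported map function with plain Node and a stubbed emit, assuming the definitions above are saved as view-definitions.js:

//test-views.js - run with: node test-views.js
var assert = require("assert");
var definitions = require("./view-definitions");

//CouchDB provides emit() globally inside a view, so stub it out and capture the rows
var emitted = [];
global.emit = function (key, value) {
  emitted.push({ key: key, value: value });
};

//run the map function against a fake document
definitions.designDocumentName.views.viewName.map({ value1: 1 });

assert.equal(emitted.length, 1);
assert.equal(emitted[0].key, "one");
console.log("viewName.map emitted the expected row");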

There is one significant disadvantage though: because I have pulled this structure out of thin air, CouchDB has no way of understanding it.  This means that whilst my view code is safe and sound under source control I have no way of restoring it.  At least with the original document-dump approach I could manually copy/paste the contents of each design document into the database!

So how can we deal with that?

Restoring Views

As I mentioned above, one of the advantages of attaching design documents as objects to the CommonJS exports object is that they can be consumed by Node utilities very easily.  To demonstrate this I have created a simple tool that is able to create or update design documents from a file such as the one above in a single command: view-builder.

You can see the source for the command on GitHub or you can install it using NPM.

npm install -g view-builder

After installation you can run the tool like this:

view-builder --url http://localhost:5984/databasename  --defs ./view-definitions.js

This will go through the definitions and for each of the design documents…

  1. Download the latest version of the design document from the server
  2. Create a new design document if none already exists
  3. Compare each view and filter to identify any changes
  4. If changes are present, update the version on the server

The comparison is an important step in this workflow – updating a design document will cause CouchDB to rebuild all of the views within it; if you have a lot of data then this can be a very slow process!

Now we have a human-readable design document definition that can be source-controlled, unit tested and then automatically applied to any database to which we have access.  Not bad…

Other Approaches

Whilst this system works for me, I can’t imagine that I am the first person to have considered this problem.  How is everyone else protecting their views?  Any suggestions or improvements in the comments are always welcome!

Creating NuGet packages with Grunt

Grunt is a JavaScript task runner that can be used to automate (among other things) the various tasks around building and deploying JavaScript: concatenation, minification, JSHint, QUnit and so on.

There are plenty of example configurations for the tasks above – particularly this example from the grunt documentation – but I wanted to do one more step at the end of my build: create a NuGet package.

Setup

For this example, let’s assume that we have the following structure:

 
+ src
  - source1.js
  - source2.js
+ dist
  - compiled.js
  - compiled.min.js
- gruntfile.js
- package.json

Here we have 2 source files (under src) that have been compiled into compiled.js and compiled.min.js under the dist folder.  For the purposes of this example it doesn’t matter whether they have been created manually or have been generated by an earlier grunt task; we just want to make sure that they are there.

In addition to these we have gruntfile.js and package.json which define the available grunt tasks and the package.  The grunt docs cover these files in detail so I’m not going to go over the initial contents, but they are included below for reference.

package.json

{
  "name": "my-library",
  "version": "0.1.2",
  "devDependencies": {
    "grunt": "~0.4.1"
  }
}

The important things to note here are the definition of the package name and package version – we will use those later.

gruntfile.js

module.exports = function (grunt) {

    grunt.initConfig({
        pkg: grunt.file.readJSON("package.json")
        //other task configuration here
     });

    grunt.registerTask("default", [/*other tasks here/*]);

};

The gruntfile would ordinarily contain the definitions of all of the other tasks that have been configured, but for clarity I have omitted them.

The NuGet Definition

The first step in automatically generating our NuGet package is to create a definition for the package in a Nuspec file.  This is an XML document with some pretty much self-explanatory fields:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
	<metadata>
		<!--
		ID used to identify the nuget package - 
		e.g. ProxyApi -> https://www.nuget.org/packages/ProxyApi
		-->
		<id>my-library</id>
		
		<!--
		Package version - we are going to leave this blank for now
		-->
		<version>0.0.0</version>
		
		<!-- Author, Owner and Licensing details -->
		<authors>Steve Greatrex</authors>
		<owners>Steve Greatrex</owners>
		<requireLicenseAcceptance>false</requireLicenseAcceptance>
		<copyright>Copyright 2013</copyright>
		
		<!-- General Information -->
		<description>Helper extensions for KnockoutJs</description>
		<releaseNotes>-</releaseNotes>
		<tags>JavaScript KnockoutJS</tags>
		
		<!-- 
		Packages (incl. minimum version) on which this package
		depends 
		-->
		<dependencies>
		  <dependency id="knockoutjs" version="2.1.0" />
		  <dependency id="jQuery" version="1.8.0" />
		</dependencies>
	</metadata>
	<files>
		<file src="dist\compiled.js" target="content\scripts" />
		<file src="dist\compiled.min.js" target="content\scripts" />
	</files>
</package>

Important things to note here:

  • Dependent packages are listed under package/metadata/dependencies.  These define which other NuGet packages will be installed as prerequisites to your package
  • Package outputs (i.e. what will get inserted into the project) are listed under package/files.
    • File locations are relative to the path from which we will run NuGet.exe
    • For content files (i.e. not a referenced assembly), paths are relative to a special “content” path which refers to the root of the project.  In the example above, the files will be added to the scripts folder in the project root

We can now pass this file to NuGet.exe and it will generate a .nupkg package for us that is ready to be published.  To test this, let’s put a copy of NuGet.exe alongside the saved my-library.nuspec file and run the following command:

nuget pack my-library.nuspec

Voila – my-library.0.0.0.nupkg has been created!

Invoking NuGet from Grunt

In order to invoke this command from our grunt script we will need to create a task.  To keep things simple we will use the basic syntax for the custom task:

grunt.registerTask("nuget", "Create a nuget package", function() {
    //do something here
});

The implementation of the task just needs to call NuGet.exe as we did above, but with a couple more parameters.  We can achieve this using grunt.util.spawn to asynchronously invoke a child process.

grunt.registerTask("nuget", "Create a nuget package", function () {
	//we're running asynchronously so we need to grab
	//a callback
	var done = this.async();
	
	//invoke nuget.exe
	grunt.util.spawn({
		cmd: "nuget.exe",
		args: [
			//specify the .nuspec file
			"pack",
			"my-library.nuspec",

			//specify where we want the package to be created
			"-OutputDirectory",
			"dist",

			//override the version with whatever is currently defined
			//in package.json
			"-Version",
			grunt.config.get("pkg").version
		]
	}, function (error, result) {
		//output either result text or error message...
		if (error) {
			grunt.log.error(error);
		} else {
			grunt.log.write(result);
		}
		//...and notify grunt that the async task has
		//finished
		done();
	});
});

Things to note here:

  • This code should appear in gruntfile.js after the call to initConfig but before the call to registerTask
  • Arguments to NuGet.exe are specified in the args array
  • The -Version argument is used to override the value from the .nuspec file with whatever has been loaded from package.json.  This avoids the need to define the version number in multiple locations.

With this in place we can add the nuget task to the default task definition from our original gruntfile and we are ready to go.

grunt.registerTask("default", [/*other tasks here*/ "nuget"]);

Fire up the shell, run the grunt command, and a NuGet package will be created for you:

$ grunt
Running "nuget" task
Attempting to build package from 'my-library.nuspec'.
Successfully created package 'dist\my-library.0.1.2.nupkg'.
Done, without errors.

Combine this with concatenation, minification, unit tests and static code analysis and you have a very quick one-hit process to go from source to publication!
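
As a rough sketch (the task names below are the usual grunt-contrib and plugin tasks, assumed rather than copied from my actual gruntfile), the final default task might chain everything together like this:

//lint and test, build the distributable files, then package them
grunt.registerTask("default", ["jshint", "qunit", "concat", "uglify", "nuget"]);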

My (first attempt) JavaScript Mocking Framework

I’ve been getting much more involved in JavaScript development over the last few months and I have been playing around with QUnit as a unit testing framework.  I’m a big believer in unit testing – so much so that I now find it actively painful to check in code without unit tests – and for the first few tests I wrote it seemed to do everything I needed. I didn’t really need any mock or stub objects, and even when I did, I could generally get away with just replacing the method on the original object:

//WARNING: BAD CODE
test("check that POST is called", function () {
  //mock up a call to $.post and save the parameters
  var postUrl, postCallback;
  $.post = function (url, callback) {
    postUrl = url;
    postCallback = callback;
  };

  //run the method under test
  doSomething();

  //check the saved variables
  equal("/some/url", postUrl);

  //use the callback somehow
  postCallback({ success: true });
});

This got me through my basic needs but it is a pretty messy way of getting the desired effect.  Whenever I’m coding in C# I use the fantastic Moq mocking library and I was sure that there was something out there with a similar syntax and functionality for JavaScript.

I had been looking for a “real” project to drive my JS learning, and this sounded like an interesting idea: the dynamic nature of JavaScript means that there’s none of the IL-injection pain associated with mock frameworks…  why not write one myself?

Requirements

What do I actually need out of my mocking framework?  If I model it on Moq then I want to be able to:

  • Register an expected call to a function
  • Set expected parameters
  • Specify how many times it should be called
  • Hook up callbacks to be executed
  • Verify that all of the above actually took place

Let’s look at this step by step:

Setting Up Calls

When building my Mock object, setting up a call will:

  1. Store a record of the fact that the call has been setup
  2. Add a function to the mock that masquerades as the actual method, finds the matching setup and notifies it that it was called

If we assume that a Setup class exists that will do all of the parameter matching, callback handling and other magic, then storing the setups becomes pretty simple:

jsMock.Mock = function () {
  var _self = this,

  //array to store all setups on this mock
  _setups = [],

  //sets up a new call
  _setup = function (member) {
    var setup = new jsMock.Setup(member);
    _setups.push(setup);
    return setup;
  };

  this.setup = _setup;
};

This can then be called by:

var mockObject = new jsMock.Mock();
mockObject.setup("post");

Next, we need to add a function to the Mock object that can be called as if it were the real version of the method that we set up:

mockObject.post("url", function () {
  //...
});

JavaScript makes this surprisingly easy – all we need to do is create a function that will go through the list of setups recorded so far and find a match.  We can then add this function to the Mock object:

  //creates a function that will locate a matching setup
  _createMockFunction = function (member) {
    return function () {
      for (var i = 0; i < _setups.length; i++) {
        if (_setups[i].member === member && 
            _setups[i].matches(arguments)) {
          //notify the setup that it was called
        }
      }
    };
  };

  //sets up a new call
  _setup = function (member) {
    var setup = new jsMock.Setup(member);
    _setups.push(setup);

    this[member] = _createMockFunction(member);

    return setup;
  }

Note that I have also assumed that our Setup object has a ‘matches(arguments)’ method that will check the arguments passed into the mocked method against those that have been configured.

Verifying Calls

For the Mock object, verification really just means going through each of the configured setups and verifying them, so the implementation is pretty simple:

  //verify all of our setups
  _verify = function () {
    for (var i = 0; i < _setups.length; i++) {
      _setups[i].verify();
    }
  };

  this.verify = _verify;

Great – that covers the Mock itself.  Now let’s move on to the Setup object, where all of the magic happens.

The Setup Object

The Setup object is where the code actually starts to do something. We can start out pretty simple in terms of requirements: we need to be able to match a call to the Mock against a Setup. This means that the Setup needs to store the member name that it is mocking, and (optionally) the parameters.

We can take the member name as a constructor parameter and expose it through a property:

jsMock.Setup = function (member) {
  this.member = member;
};

To set up the parameters we would ideally like to be able to pass them into a method with as little extra syntax as possible:

mockObject.setup("post")
  .with("/some/url", function() { /*...*/ });

To this end, let’s add a ‘with’ method that stores the arguments that are passed in:

  //store any specified parameters
  _expectedParameters,

  //register expected parameters
  _with = function() {
    _expectedParameters = arguments; //store the arguments passed into this method
  };

  this.with = _with;

This approach is fine for simple types (like the string URL) but what about the callback that we expect to be passed into our ‘post’ method? We can’t possibly know what that will be when we set up the mock method, so instead of trying to match it, let’s add a constant that we can recognise to mean “anything”:

mockObject.setup("post")
  .with("/some/url", jsMock.constants.anything);

Now that we’ve created the method to register expected parameters, let’s write something to match against actual parameters.

  //checks that the params object matches the expected parameters
  _matches = function(params) {
    //if expected parameters haven't been specified (with not called), match everything
    if (!_expectedParameters) return true;

    //same number of parameters?
    if (_expectedParameters.length !== params.length) return false;

    //do all parameters match?
    for (var i = 0; i < _expectedParameters.length; i++) {
      if (_expectedParameters[i] !== jsMock.constants.anything && //ignore the 'anything' constant
          _expectedParameters[i] !== params[i]) return false;
    }

    //it must be a match
    return true;
  };

  this.matches = _matches;

In this method we first check whether or not any parameters have actually been specified for this Setup, then check the number of parameters and finally compare each parameter in turn (ignoring the ‘anything’ constant).

Once we’ve matched a Setup we’ll want to notify that it has been called, so let’s add a ‘called’ method and update our Mock class to call this when it finds a match:

//jsMock.Setup:
  //notifies this setup that it has been called
  _called = function(params) {
  };

  this.called = _called;

//jsMock.Mock:
  _createMockFunction = function (member) {
    return function () {
      for (var i = 0; i < _setups.length; i++) {
        if (_setups[i].member === member &&
            _setups[i].matches(arguments)) {
          _setups[i].called(arguments);
        }
      }
    };
  };

Now that we have an object that we can match against calls, let’s look at how to configure our expectations for that method.

Expectation Management

For our Setup to be of any use we need to be able to do more than just register that it occurred.  Specifically, we want to be able to:

  • Specify a return value
  • Specify the number of times it should be called
  • Specify callbacks that will be executed when it is called

Return Values

Setting up a return value should be configured using something like the following:

mockObject
  .setup("add")
  .with(1, 2)
  .returns(3);

So let’s add a ‘returns’ method that just stores the value passed in.

  
  //set up a return value
  _returns = function(returnValue) {
    _self.returnValue = returnValue;
    return _self;
  };

  this.returns = _returns;
  this.returnValue = null;

Note that we are returning ‘_self’ to allow the Setup methods to support the fluent interface.

Now we need to update our Mock object so that the mock function returns the return value from the Setup.  This is slightly more complicated than it sounds as it is possible to match multiple Setups with a single call.   For simplicity, let’s state that the last Setup that has been configured will set the return value that will be used; now we can update our fake method in the Mock:

  //creates a function that will locate a matching setup
  _createMockFunction = function (member) {
    return function () {
      var match;

      //reverse traversing the list so most recent setup is used as match
      for (var i = _setups.length-1; i >= 0; i--) {
        if (_setups[i].member === member &&
            _setups[i].matches(arguments)) {

            if (!match) match = _setups[i];

            _setups[i].called(arguments);
        }
      }
      //if no setup matched, match is undefined and so is the return value
      return match && match.returnValue;
    };
  };

Note that we are traversing the list of setups in reverse order to make sure we use the most recent matching Setup.

Expected Number of Calls

When we specify the number of calls we expect, we really want to be able to specify a range: “no more than 3” or “at least 2”.  In the interests of creating a more fluent API, let’s put all of the time-specification methods within a ‘times’ object so that we can set these up using something like the below:

mockObject
  .setup("post")
  .times.noMoreThan(3);

Ideally (and stealing from Moq syntax), we want the following options:

  • once – exactly one call
  • never – zero calls
  • noMoreThan – up to [num] calls
  • atLeast – [num] or more calls
  • exactly – exactly [num] calls

To achieve this, let’s set up an object to store the number of expected calls and a series of methods to set those properties:

  //store expected call counts
  _expectedCalls = { min: 0, max: NaN },
  _times = {
    exactly: function(num) {
      _expectedCalls.min = _expectedCalls.max = num;
      return _self;
    },
    once: function() {
      return _times.exactly(1);
    },
    never: function() {
      return _times.exactly(0);
    },
    atLeast: function(num) {
      _expectedCalls.min = num;
      _expectedCalls.max = NaN;
      return _self;
    },
    noMoreThan: function(num) {
      _expectedCalls.min = 0;
      _expectedCalls.max = num;
      return _self;
    }
  };

  this.times = _times;

Next up, let’s make sure we can verify that the expected number of calls have actually been made.  We’ll need to go back and update our ‘called’ method to record the incoming calls…

  //notifies this setup that it has been called
  _calls = [],
  _called = function(params) {
    _calls.push(params);
  }

…and then add a new ‘verify’ method to check the count:

  //verify that the number of registered calls is within range
  _verify = function() {
    if (_calls.length < _expectedCalls.min || _calls.length > _expectedCalls.max) {
      //build up a human-readable message...
      var expectedCount = _expectedCalls.min;
      if (_expectedCalls.max !== _expectedCalls.min)
          expectedCount = expectedCount + "-" + (isNaN(_expectedCalls.max) ? "*" : _expectedCalls.max);

      //...and throw an exception
      throw "Expected " + expectedCount  + " calls to " + member + " but had " + _calls.length;
    }
  };

  this.verify = _verify;

Callbacks

For the final feature of our Setup, we need to add the ability to register a callback that will be called when the mock method is invoked.  The Setup already gets notified through ‘called’ so we just need to add a method that registers the callback, then update ‘called’ to invoke each callback in turn.

  //notifies this setup that it has been called
  _calls = [],
  _called = function(params) {
    _calls.push(params);
    for (var i = 0; i < _callbacks.length; i++) {
      _callbacks[i].apply(this, params);
    }
  },

  //store registered callbacks
  _callbacks = [],
  _callback = function(callback) {
    _callbacks.push(callback);
  };

  this.callback = _callback;

We can now specify a callback with a nice human-readable syntax:

  mockObject
    .setup("post")
    .callback(function(url, success) {
      success(); //fake a successful post
    });

Done

…and that’s pretty much it.  It’s not a fully featured mocking library but it does most of what I would use day-to-day, and it was an interesting project.

I can set up a fake jQuery object expecting a call to ‘post’ with a URL specified, a limit on the number of calls that should be made, a return value and a callback to invoke the success callback parameter:

mockObject
  .setup("post")
  .with("/expected/url", jsMock.constants.anything)
  .times.noMoreThan(1)
  .returns(123)
  .callback(function(url, success) {
    success(mockData);
  });

//run test method

mockObject.verify();

The code is available on GitHub (with some changes from the examples, which have been written for clarity) so help yourselves.  I may continue to update it if I actually end up using it!