Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are expected to have already defined all of your handlers on the connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.myHub1.connection.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.myHub2.connection.start();
}

//...some time later...
new MyFirstHandler()
//...and even later still...
new MySecondHandler()

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the connection.yourHubName.client object (normally just a POJO) with something that is aware of the available server methods.  The new client then exposes stubs to which SignalR can connect before our MySecondHandler can provide the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke it whenever the stub is invoked (passing through all arguments etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler.bind(this));
}

SignalRClient.prototype.invokeHandler = function(name) {
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs = Array.prototype.slice.call(arguments, 1);
		handler.apply(this, handlerArgs);
	}
}

SignalRClient.prototype.registerHandler = function(name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		get: function() { return getter },
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
}

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!
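
For example, with the proxy clients above in place, something like the following now works even though the real handler is assigned after the connection has started:

//the first handler (or anything else) starts the shared connection...
$.connection.myHub1.connection.start();

//...and the second handler can still be attached afterwards, because the
//SignalRClient setter stores it behind the stub that SignalR already bound
$.connection.myHub2.client.method2 = function () { /* ... */ };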

Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points in which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

…and we’re done!

Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the users home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. it cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build, so it seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how best to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the generic test fixture feature in NUnit to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture attribute for each driver type. This instructs NUnit to instantiate and run the class for each of the specified generic type arguments, which in turn means that any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
[TestFixture(typeof(SafariDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

A part of the cause was that the “record” part of the feature does not always select the most sensible selector to identify the element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but in the case where they did change we still did not want to have to update a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element – we only ever define each element once
  • Page inheritance – we can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Helper methods – when we repeat blocks of functionality (e.g. enter [username], enter [password], then click Submit) we can encapsulate them on the Page class instead of as private methods within the test fixture (see the sketch below)
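
For example (a hypothetical helper, not lifted from our real suite), the repeated login steps could sit alongside the selector methods on the LoginPage class above:

public class LoginPage
{
	//...constructor and selector methods as above...

	//hypothetical helper wrapping the repeated login steps
	public void LoginAs(string username, string password)
	{
		UsernameInput().SendKeys(username);
		PasswordInput().SendKeys(password);

		//the submit button selector is illustrative only
		_driver.FindElement(By.CssSelector("button[type='submit']")).Click();
	}
}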

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environment variables in code to improve reuse and readability. This is only a minor point but we did not want to have any URLs, usernames or configuration options hard-coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
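
The structure behind that call might look something like the sketch below; everything apart from the Users.AdminUsers[0].Username path is hypothetical:

public static class TestEnvironment
{
	public static readonly UserSettings Users = new UserSettings();
}

public class UserSettings
{
	//hypothetical shape - only the AdminUsers[0].Username path is referenced above
	public IList<TestUser> AdminUsers { get; private set; }

	public UserSettings()
	{
		AdminUsers = new List<TestUser>
		{
			new TestUser { Username = "admin1", Password = "..." }
		};
	}
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}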

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files, selected at compile time using the DEBUG conditional compilation symbol.
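
A rough sketch of what that looks like, assuming hypothetical file names and a BaseUrl setting purely for illustration:

public static class EnvironmentConfiguration
{
#if DEBUG
	//local developer environment
	private const string SettingsFile = "TestEnvironment.Debug.config";
#else
	//shared test environment
	private const string SettingsFile = "TestEnvironment.Release.config";
#endif

	public static string BaseUrl
	{
		get
		{
			var map = new ExeConfigurationFileMap { ExeConfigFilename = SettingsFile };
			var config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
			return config.AppSettings.Settings["BaseUrl"].Value;
		}
	}
}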

Other Gotchas

  • The 64-bit IE driver for Selenium WebDriver is incredibly slow! Uninstall it and install the 32-bit version instead
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these observations will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).

Excluding Current RouteData from UrlHelper

By default, the MVC UrlHelper will include all of the route values for the current route in its calculations.

This means that unless you explicitly override them you can get situations like this:

<!-- on page /Person/View/1 -->
<a href="@Url.Action("View", "Pet")">View Animal</a>
<!-- URL resolves to /Pet/View/1 -->

Disaster – the ID from the current request has been included in the new URL!

In some cases this can be very useful – this is the reason that you don’t need to specify a controller if you are already within a view on the same controller – but can be very annoying when you want to create a URL in isolation (see here and here).

Using the Isolate Extension

To get around this problem I have written an Isolate extension method that can be used as below:

<!-- on page /Person/View/1 -->
<a href="@Url.Isolate(u => u.Action("View", "Pet"))">View Animal</a>
<!-- URL resolves to /Pet/View -->

The extension works by temporarily removing all of the existing route values from the specified instance of UrlHelper, executing the action, and then re-inserting the original route values before returning the result.

public static TResult Isolate<TResult>(this UrlHelper urlHelper, Func<UrlHelper, TResult> action)
{
	var currentData = urlHelper.RequestContext.RouteData.Values.ToDictionary(kvp => kvp.Key);
	urlHelper.RequestContext.RouteData.Values.Clear();
	try
	{
		return action(urlHelper);
	}
	finally
	{
		foreach (var kvp in currentData)
			urlHelper.RequestContext.RouteData.Values.Add(kvp.Key, kvp.Value.Value);
	}
}

It’s a basic solution and there are some (predictable) scenarios where it will fall down, but it solved my immediate problem without adding too much bloat to the code.

ProxyApi & Anti-Forgery Tokens

Anti-Forgery Tokens?

Good question.  Anti-forgery tokens are a recommended way of preventing one of the OWASP Top Ten security vulnerabilities: Cross Site Request Forgery, or CSRF.

CSRF works on the basis that once you have logged into YourSite using your browser, any request to that domain will share the authentication information.  Normally, requests to YourSite would come from YourSite, but other developers are perfectly capable of writing some code on their site that tries to make a request to YourSite to do something evil.

Though there are a few ways of preventing or reducing the risk of CSRF attacks, anti-forgery tokens are the currently recommended approach.

So how do they work?  Whenever the server serves up a page that may result in a submission (e.g. a page that contains a form) it sets a randomly-generated cookie value.  The client must then include the random value in both a hidden form field and the request cookie; otherwise, the server will reject the request as invalid.  Attackers will not be able to read the cookie value; therefore they cannot include it as a form field and so their attack fails.

ASP.NET MVC Implementation

MVC makes it very easy to implement anti-forgery tokens.  Very easy.

Step 1: add an attribute to your action or controller

[ValidateAntiForgeryToken]
public ActionResult DoSomething()
{
    //…
}

Step 2: include the following within the form on the page

@Html.AntiForgeryToken()

Unfortunately WebAPI does not have a similar implementation, but thankfully there are a lot of examples out there (e.g. Kamranicus’ example & the MVC SPA template) of how to achieve similar functionality with WebAPI.

So how can we adapt these ideas to work with ProxyApi?

ProxyApi Implementation

The intention of this library is to allow you to quickly create proxy classes for WebAPI methods; because it is expected to be running in the browser (it generates JavaScript, after all) it will be using cookie authentication and should therefore consider CSRF.

Ideally, the developer using the library doesn’t want to do anything more than they do for their MVC implementation, so it would seem like that is a good convention to follow.

Setting The Token

As with MVC, setting the cookie token and inserting the hidden form value onto the page is done by calling the Html.AntiForgeryToken() method in your view.  This is deliberately identical to the MVC method to keep things as consistent as possible.
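
For reference, the helper writes the random value into a cookie and renders a hidden input along these lines (the token value here is elided):

<input name="__RequestVerificationToken" type="hidden" value="..." />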

Decorating the Controller

Following the same pattern as MVC and the examples listed above, the ProxyApi implementation uses an attribute that can be specified against a controller or an action:

[ValidateHttpAntiForgeryToken]
public void PostSomething(Something data)
{
    //...
}

This attribute is an extension of AuthorizationFilterAttribute that uses the cookie and hidden-field tokens to validate the request.  The second value – the one that would normally be included as a hidden form field – is instead expected as a custom header value: X-RequestVerificationToken.  This approach avoids complications in combining the automatically-generated ProxyApi POST data with a custom form field.
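
To make that mechanism concrete, the sketch below shows the general approach used by this kind of attribute (it is not the ProxyApi source): read the cookie token, read the X-RequestVerificationToken header, and hand both to the standard anti-forgery validation.

//Not the ProxyApi source - just a sketch of the general approach described above
public class ValidateHttpAntiForgeryTokenExample : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        var request = actionContext.Request;

        //the random value set by Html.AntiForgeryToken() as a cookie
        var cookieToken = request.Headers.GetCookies()
            .SelectMany(c => c.Cookies)
            .Where(c => c.Name == AntiForgeryConfig.CookieName)
            .Select(c => c.Value)
            .FirstOrDefault();

        //the same value, supplied by the client as a custom header
        IEnumerable<string> headerValues;
        var headerToken = request.Headers.TryGetValues("X-RequestVerificationToken", out headerValues)
            ? headerValues.FirstOrDefault()
            : null;

        try
        {
            //throws if the pair does not match
            AntiForgery.Validate(cookieToken, headerToken);
        }
        catch (HttpAntiForgeryException)
        {
            actionContext.Response = request.CreateErrorResponse(
                HttpStatusCode.Forbidden, "Anti-forgery token validation failed");
        }
    }
}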

Because WebAPI is often used for non-browser-based access, the attribute also allows you to optionally specify any types of authentication (e.g. Basic) that should be excluded from the verification process.

Passing the Hidden Token to the Server

The JavaScript implementation of the proxy objects allows you to specify either a concrete value or an accessor function to get the form field value:

$.proxies.myController.antiForgeryToken = "1234abc";

// or

$.proxies.myController.antiForgeryToken = function() { 
    return $("#someField").val();
};

By default, this function will use jQuery to locate the hidden input generated by the Html.AntiForgeryToken() method and use its value.
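
In other words, the default accessor behaves roughly like this (a sketch of the behaviour described above, not the library source):

//approximately the default behaviour: read the hidden input rendered
//by Html.AntiForgeryToken()
$.proxies.myController.antiForgeryToken = function () {
    return $("input[name='__RequestVerificationToken']").val();
};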

Summary

Overall, this implementation is nothing groundbreaking.  It borrows heavily from the MVC SPA template and from other examples online, but it does allow ProxyApi to prevent CSRF attacks with minimal change to the code for developers.

The source code for this is available on GitHub, and the updated package is available for download via NuGet.

Exception Handling for Web API Controller Constructors

The generally-recommended best practice for exception handling within Web API is to use exception filters.  Once registered, these classes sit in the processing pipeline for a message and can react to exceptions that are thrown by actions.

A Problem

The issue with the statement above is the qualifier “by actions”.  While an exception filter will correctly handle any errors thrown from within an action method, it will be bypassed by exceptions thrown during the creation of the controller.

These exceptions include two categories of error: exceptions thrown from within the controller constructor, and a failure to locate or invoke an appropriate constructor.  The latter problem is, for me, the more common – I use the Autofac MVC & WebAPI integrations (highly recommended, by the way) to handle dependency injection in controllers, and there are quite often scenarios where one of the dependencies is not available.  In these cases I really need a way to catch and to nicely handle those exceptions.

One way in which we can achieve this lofty aim is by creating a custom implementation of IHttpControllerActivator.

The Controller Activator

The IHttpControllerActivator interface only contains one method:

IHttpController Create(
	HttpRequestMessage request,
	HttpControllerDescriptor controllerDescriptor,
	Type controllerType
)

This method is responsible for creating and returning an instance of a specified controller before the API action is invoked.  This is perfect for our scenario because it is a very specific responsibility; we need a custom implementation, but we will not have to worry about how the controller type is selected, how the action is selected or how it is invoked.

Implementing a Decorator

To be honest, we don’t really want to get into how the controller is actually created – we just want to wrap that creation in a try { … } catch { … } – so instead of writing our own activator from scratch we can use the decorator pattern to wrap the existing implementation.

public class ExceptionHandlingControllerActivator : IHttpControllerActivator
{
	private IHttpControllerActivator _concreteActivator;

	public ExceptionHandlingControllerActivator(IHttpControllerActivator concreteActivator)
	{
		_concreteActivator = concreteActivator;
	}
		
	public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
	{
		try
		{
			return _concreteActivator.Create(request, controllerDescriptor, controllerType);
		}
		catch (Exception ex)
		{
			//custom handler logic here - for example, convert the error into a
			//friendly HTTP response rather than letting it bubble up as a raw 500
			throw new HttpResponseException(
				request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex));
		}
	}
}

This simple class takes a concrete instance of IHttpControllerActivator in its constructor, then calls down to that concrete instance within a try/catch block.  We can then implement our custom exception handling in the catch.

Now all we need to do is replace the default activator with our one.

Hooking It Up

We need to tell Web API to use our new controller activator instead of the default, and (as with so much else in Web API) we do this through the HttpConfiguration object; specifically, the Services property.

This comes with a convenient Replace method that allows us to insert our implementation in place of the default version.  We also want to pass that default into the constructor of our class, so we end up with something like this:

GlobalConfiguration.Configuration.Services.Replace(typeof(IHttpControllerActivator), 
	new ExceptionHandlingControllerActivator(
		GlobalConfiguration.Configuration.Services.GetHttpControllerActivator()
	)
);

It looks a little messy, but it’s not complicated: grab a reference to the current activator, pass it into our decorator, then pass that into the Replace method.

Simple!

Translating Date Formats in JavaScript

Urgh, dates.  It’s always bloody dates.  Whether it’s different time zones, localised formats (both long and short) or any one of a hundred other annoying little problems…somehow, whenever you start working with dates, everything goes to hell.

We can all agree on how we should be dealing with dates, right?  ISO 8601 for everything in the backend (transport & database), then a localised format to present to the user in the UI layer. As Randall Munroe recently said/drew:

[xkcd comic on date formats]

That’s all well and good, but when your UI layer makes use of JavaScript you have the additional headache of dealing with the JavaScript Date object (though you can avoid real agony by using momentjs).

The Most Important Piece of (Off-Topic) Advice in this Post

If you are not already using momentjs for every date-related scenario then you are almost certainly wasting a lot of time.  It’s under 6kb, it will do more-or-less everything you want with dates…go get it now.

The problem with dates in JavaScript gets even worse if you are pulling your date format from somewhere other than the client.  Suppose that you have a “preferred date format” setting for each user on the server – you need to somehow pass that into your JavaScript.

Admittedly momentjs makes this easier: you can parse the dates from the server using ISO format and then display them in whatever crazy format you want.

var dateFromServerInIsoFormat = "2013-01-14",
    preferredFormatFromServer = "D * MMM * YYYY",
    parsed                    = moment(dateFromServerInIsoFormat, "YYYY-MM-DD"),
    formatted                 = parsed.format(preferredFormatFromServer);

console.log(formatted);
// -> 14 * Jan * 2013

This works – it will allow us to format dates within our JavaScript using a date format specified by the server – but it relies on an incorrect assumption: that the server and momentjs share a date format language.

One Format, Many Format Definitions

Perhaps the best way to demonstrate this problem is with an example.  Let’s assume that we are running ASP.NET on our server, using momentjs to handle dates within our JavaScript, and (just to spice things up) we have a couple of third-party JavaScript components: jQuery UI Datepicker and jqPlot.

We have a user setting stored on the server that defines our “preferred date format” as…

January 15, 2013

Let’s take a look at how we would define that for our four components.

  • .NET: MMMM d, yyyy – Nice and simple so far
  • momentjs: MMMM D, YYYY – Slightly different, but still pretty close
  • jQuery UI Datepicker: MM d, yy – Ok, getting a little bit trickier…
  • jqPlot: %B %#d, %Y – Wait, what?!

What’s going on here?!  You can just about see how you could convert from the .NET format to momentjs, or even to the jQuery UI one without too much pain, but jqPlot is just a mess!

Translating Between Formats

After being plagued by numerous variations on this problem on a recent project, I decided to deal with it properly by creating something to translate between (theoretically) any 2 formats.

The dateFormat object contains definitions for the different format languages, and the convert function translates between them:

var formatFromServer = "MMMM d, yyyy",
    momentFormat = dateFormat.convert(formatFromServer,
                                      dateFormat.dotnet,
                                      dateFormat.moment),
    jqPlotFormat = dateFormat.convert(formatFromServer,
                                      dateFormat.dotnet,
                                      dateFormat.jqplot);

console.log("moment: " + momentFormat);
// -> moment: MMMM D, YYYY

console.log("jqPlot: " + jqPlotFormat);
// -> jqPlot: %B %#d, %Y

Each language definition contains a list of mappings from named tokens (e.g. day-of-week) to their representation in the format:

dateFormat.dotnet = {
    "day-of-month-1": "d",
    "day-of-month-2": "dd",
    "day-of-week-abbr": "ddd",
    "day-of-week": "dddd"
    //...
};

This allows the convert function to create a mapping from one language to another, then use regular expressions to locate and replace matching tokens in the source string.
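
As an illustration of that idea, a stripped-down conversion might look something like this (a sketch only – the real implementation also deals with escaping and other edge cases):

//simplified sketch of the conversion idea: match source-language tokens
//(longest first, so "dd" wins over "d") and emit the target-language token
function convertFormat(format, fromLang, toLang) {
    var pairs = Object.keys(fromLang)
        .map(function (name) { return [fromLang[name], toLang[name]]; })
        .sort(function (a, b) { return b[0].length - a[0].length; });

    var result = "", i = 0;
    while (i < format.length) {
        var match = pairs.filter(function (pair) {
            return format.substr(i, pair[0].length) === pair[0];
        })[0];

        if (match) {
            result += match[1];
            i += match[0].length;
        } else {
            result += format.charAt(i);
            i++;
        }
    }
    return result;
}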

The source code is available on GitHub along with the tests and some distributables.  At the moment the library only supports 3 formats (.NET, momentjs and jqplot) as these were the 3 I needed most recently, but that list will expand over time – hopefully with some help from the community.

This jsFiddle has a running example of the conversion so feel free to play around, or grab the source yourself from the GitHub repository linked above.

That’s one fewer headache when you next find yourself stuck working with dates!