ProxyApi & Anti-Forgery Tokens

Anti-Forgery Tokens?

Good question.  Anti-forgery tokens are a recommended way of preventing one of the OWASP Top Ten security vulnerabilities: Cross Site Request Forgery, or CSRF.

CSRF works on the basis that once you have logged into YourSite using your browser, any request to that domain will share the authentication information.  Normally, requests to YourSite would come from YourSite, but other developers are perfectly capable of writing some code on their site that tries to make a request to YourSite to do something evil.

Though there are a few ways of preventing or reducing the risk of CSRF attacks, anti-forgery tokens are the currently recommended approach.

So how do they work?  Whenever the server serves up a page that may result in a submission (e.g. a page that contains a form) it sets a randomly-generated cookie value.  The client must then include the random value in both a hidden form field and the request cookie; otherwise, the server will reject the request as invalid.  Attackers will not be able to read the cookie value; therefore they cannot include it as a form field and so their attack fails.
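The core of the server-side check is a straightforward comparison of the two values. As a rough sketch in JavaScript (illustrative names only, not ASP.NET's actual implementation, which also cryptographically protects the tokens):

```javascript
// Minimal sketch of the double-submit check an anti-forgery
// filter performs; names are illustrative.
function isValidRequest(cookieToken, formToken) {
    // both halves must be present...
    if (!cookieToken || !formToken) {
        return false;
    }
    // ...and the submitted value must match the one the server issued
    return cookieToken === formToken;
}
```

An attacker who cannot read the cookie cannot supply a matching `formToken`, so the comparison fails.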

ASP.NET MVC Implementation

MVC makes it very easy to implement anti-forgery tokens.  Very easy.

Step 1: add an attribute to your action or controller

[ValidateAntiForgeryToken]
public ActionResult DoSomething()
{
    //…
}

Step 2: include the following within the form on the page

@Html.AntiForgeryToken()

Unfortunately WebAPI does not have a similar implementation, but thankfully there are plenty of examples out there (e.g. Kamranicus’ example & the MVC SPA template) of how to achieve similar functionality with WebAPI.

So how can we adapt these ideas to work with ProxyApi?

ProxyApi Implementation

The intention of this library is to allow you to quickly create proxy classes for WebAPI methods; because it is expected to be running in the browser (it generates JavaScript, after all) it will be using cookie authentication and should therefore consider CSRF.

Ideally, the developer using the library doesn’t want to do anything more than they do for their MVC implementation, so it would seem like that is a good convention to follow.

Setting The Token

As with MVC, setting the cookie token and inserting the hidden form value onto the page is done by calling the Html.AntiForgeryToken() method in your view.  This is deliberately identical to the MVC method to keep things as consistent as possible.

Decorating the Controller

Following the same pattern as MVC and the examples listed above, the ProxyApi implementation uses an attribute that can be specified against a controller or an action:

[ValidateHttpAntiForgeryToken]
public void PostSomething(Something data)
{
    //...
}

This attribute is an extension of AuthorizationFilterAttribute that uses the cookie and hidden-field tokens to validate the request.  The second value – the one that would normally be included as a hidden form field – is instead expected as a custom header value: X-RequestVerificationToken.  This approach avoids complications in combining the automatically-generated POST data from ProxyApi with a custom form field.

Because WebAPI is often used for non-browser-based access, the attribute also allows you to optionally specify any types of authentication (e.g. Basic) that should be excluded from the verification process.

Passing the Hidden Token to the Server

The JavaScript implementation of the proxy objects allows you to specify either a concrete value or an accessor function to get the form field value:

$.proxies.myController.antiForgeryToken = "1234abc";

// or

$.proxies.myController.antiForgeryToken = function() { 
    return $("#someField").val();
};

By default, this function will use jQuery to locate the hidden input generated by the Html.AntiForgeryToken() method and use its value.

Summary

Overall, this implementation is nothing groundbreaking.  It borrows heavily from the MVC SPA template and from other examples online, but it does allow ProxyApi to prevent CSRF attacks with minimal code changes for developers.

The source code for this is available on GitHub, and the updated package is available for download via nuget.


Fallback Images with Knockout

After a busy few weeks at work I’ve finally managed to spend some time on knockout development again, and today I found a nice solution to a problem with data-bound images.

In my example I had a list of contacts that were being displayed on the page, and each contact had a URL linking to their profile image.

{
	Id: 1,
	FirstName: "Steve",
	LastName: "Greatrex",
	ProfileImage: "/some/image.jpg"
}

The binding to display the image is initially pretty simple – I can use the attr binding to set the src property on the img element using the vanilla knockout library.

<img data-bind="attr: { src: ProfileImage }" />

Simple enough, right?

Complications

Unfortunately some of my contacts don’t have a ProfileImage property.  Even worse, some of them do have a ProfileImage but it points to a non-existent image.

If I use the attr binding as above then I get an unpleasant looking “missing image” icon…

[image: the browser’s “missing image” icon]

…when what I really want to do is use a placeholder image instead.

The img binding

To resolve this problem I created a new custom binding named img.  This expects a parameter containing an observable src property for the image URL, and either a hard-coded or an observable fallback URL to be used in case of null URLs or errors.

<img data-bind="img: { src: ProfileImage, fallback: '/images/generic-profile.png' }" />

The binding itself is nothing complicated.  As with all custom bindings you can optionally specify an update and an init handler that are called when the value changes and when the binding is initialised respectively.

For this binding, the update handler needs to check the value of both the src and fallback properties, then set the src attribute on the img to whichever value is appropriate.

The only thing that the init function needs to handle is the scenario where the image fails to load (using jQuery.error).

ko.bindingHandlers.img = {
    update: function (element, valueAccessor) {
        //grab the value of the parameters, making sure to unwrap anything that could be observable
        var value    = ko.utils.unwrapObservable(valueAccessor()),
            src      = ko.utils.unwrapObservable(value.src),
            fallback = ko.utils.unwrapObservable(value.fallback),
            $element = $(element);

        //now set the src attribute to either the bound or the fallback value
        if (src) {
            $element.attr("src", src);
        } else {
            $element.attr("src", fallback);
        }
    },
    init: function (element, valueAccessor) {
        var $element = $(element);

        //hook up error handling that will unwrap and set the fallback value
        $element.error(function () {
            var value = ko.utils.unwrapObservable(valueAccessor()),
                fallback = ko.utils.unwrapObservable(value.fallback);

            $element.attr("src", fallback);
        });
    }
};

That’s all there is to it – you can now specify a “primary” and a “fallback” binding for your images to get something like the effect below:

[image: profile images with the fallback applied]

Another problem solved by the Swiss army knife that is knockout custom bindings.

Single Page Applications using Node & Knockout

This post is going to be a short walkthrough on how to use Node and KnockoutJS to create a simple single page application.

What is a Single Page Application?

…a web application or web site that fits on a single web page with the goal of providing a more fluid user experience akin to a desktop application

That’s according to Wikipedia.  For the purposes of this post, a single page application (or SPA) will mean a web application for which we only want to serve up one HTML page.

That page will then link to a couple of javascript files which, in concert with a templating engine, will create and manipulate the content of the page.  All communication with the server will be through AJAX, and will only ever transfer JSON data – no UI content.

We will be using node to serve the page (plus scripts, styles, etc.) and to handle the API calls, while knockout will provide us with the client-side interaction and templating.

Serving the Single Page

First up: we need to configure node to serve up our single HTML page:

<html>
	<body>
		<h1>I'm a single page application!</h1>
	</body>
</html>

We’ll be adding more to that later, but let’s get node working first.

Express

We’re going to be using expressjs to implement the more interesting API calls later on, but we can make use of it here to serve up a static file for us as well.  To install express, use the node package manager by running the command below:

npm install express

Now we need to create a javascript file – app.js – to run in node.  This file will grab a reference to express using the require function and will start listening on port 3000.

var express = require("express"),
	app = express();

//start listening on port 3000
app.listen(3000);

Let’s see what happens when we run this.  In a command prompt, browse to the folder containing app.js and enter the command below to start node.

node app.js

Next, open your favourite browser and navigate to http://localhost:3000/index.html.  You should see something like this:

[screenshot: “Cannot GET /index.html” error]

This is express telling us that it cannot resolve the URL “/index.html”, which isn’t unreasonable – we haven’t told it how to yet.  We want express to respond with the contents of static files from the current folder (eventually including our styles and javascript), so let’s get that set up.

We do this in the app.configure method (before we call app.listen) using the express.static method and the current application folder (stored in the special __dirname node variable).

app.configure(function() {
	//tell express to serve static files from the special
	//node variable __dirname which contains the current
	//folder
	app.use(express.static(__dirname));
});

Restart the node application, refresh the browser and you should now see the content from our single page:

[screenshot: the single page served by express]

Conveniently, express will automatically return index.html if you don’t specify a file name, so we can get the same response from http://localhost:3000/

Creating the Page

The next step is to start building up content in the page.  We are going to need a few javascript resources – jQuery for the AJAX calls, Knockout for the view model – and I’m going to include my command pattern implementation to help with the actions.

For the page itself I’m going to pull in a page.js to contain our page-specific code, and we should probably include a stylesheet as I can’t stand Times New Roman.

Our HTML page now looks like this:

<html>
	<head>
		<title>SPA Example</title>
		<link rel="stylesheet" href="spa.css" />
	</head>
	<body>
		<h1>I'm a single page application</h1>
	</body>
	<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.9.0.min.js"></script>
	<script src="http://ajax.aspnetcdn.com/ajax/knockout/knockout-2.2.1.js"></script>
	<script src="https://raw.github.com/stevegreatrex/JsUtils/master/JsUtils/utils.min.js"></script>
	<script src="page.js"></script>
</html>

I’m using a CDN for jQuery and knockout, and I’m pulling my command implementation direct from GitHub (sorry GitHub!). I’m assuming that both spa.css and page.js are in the same folder as index.html.

Refresh the browser again (no need to restart node this time) and…
[screenshot: the styled page]
Much better!

Creating the View Model

As this is just a sample application I don’t want to get too distracted by the view model – the purpose of this post is to demonstrate the end-to-end process rather than to focus on any specific functionality.  With that in mind, let’s use the example functionality of a basic todo app (as that seems to be the thing to do).

Our view model will start off with a list of todo items which we will store in a knockout observableArray.  Each todo item will have a name and a complete flag.

For the time being, we’ll bootstrap the collection with a few sample items.

var TodoViewModel = function(data) {
	this.name = ko.observable(data.name);
	this.complete = ko.observable(data.complete);
};

var TodoListViewModel = function() {
	this.todoItems = ko.observableArray();
};

$(function() {
	var viewModel = new TodoListViewModel();

	//insert some fake todo items for now...
	viewModel.todoItems.push(new TodoViewModel({ name: "Pending Item", complete: false }));
	viewModel.todoItems.push(new TodoViewModel({ name: "Completed Item", complete: true }));

	ko.applyBindings(viewModel);
});

The view model is now being populated but there’s still nothing to see in our view – we need to add some HTML and start working with the knockout templating engine to get things to display.

Displaying Items using Knockout Templating

With knockout, the UI is data bound to the view model in order to generate HTML.  http://knockoutjs.com/ has a wealth of documentation and examples on how to achieve this, but for this example we are going to use three bindings: foreach to iterate through each of the todo list items; text to display the name; and checked to display the completed state.

<ul data-bind="foreach: todoItems">
	<li>
		<span data-bind="text: name"></span>
		<input type="checkbox" data-bind="checked: complete" />
	</li>
</ul>

Refresh the page in a browser and you should now see something like this:

[screenshot: the two fake todo items]

We now have the text and the completed state of our two fake todo items.  That’s all well and good, but what about when you want to get real data from the server?

Getting Real Data from the Server

In a single page application, data is acquired from the server using AJAX calls and our example today will be no different.  Unfortunately, our server doesn’t support any AJAX calls at the moment, so our next step is to configure a URL that will return some data; in this case: todo list items.

Configuring API Calls using Express

We want to set up an API call on our node server that will respond with a JSON list of todo items for the URL:

GET /api/todos

To set this up in express we use the app.get method, which accepts a path as the first parameter – in this case /api/todos – and a callback as the second.

app.get("/api/todos", function(req, res) {
	//...
});

The callback will now be invoked whenever we browse to http://localhost:3000/api/todos.  The two parameters on the callback are the request and the response objects, and we now want to use the latter to send JSON data back to the client.

Ordinarily you would be getting the data from some kind of backing store, but to keep things simple I’m just going to return a few fake items using the res.json method.  Here we are passing in the HTTP response code (200 – OK) and our data, then calling the res.end method to finish the response.

res.json(200, [
	{ name: "Item 1 from server", complete: false },
	{ name: "Item 2 from server", complete: false },
	{ name: "Completed Item from server", complete: true }
]);
res.end();

Now let’s hook up our view model to access that data…

Getting the View Model Speaking to the Server

As our server now expects a GET call we can use jQuery.getJSON to load the data from the client side.  Once we have the data, all we need to do is push it into our view model to update the UI.

var TodoListViewModel = function() {
	var self = this;
	this.todoItems = ko.observableArray();

	this.refresh = ko.command(function() {
		//make a call to the server...
		return $.getJSON("/api/todos");
	}).done(function(items) {
		//...and update the todoItems collection when the call returns
		var newItems = [];
		for (var i=0; i < items.length; i++ ){
			newItems.push(new TodoViewModel(items[i]));
		}
		self.todoItems(newItems);
	});

	//refresh immediately to load initial data
	this.refresh();
};

Note that I’ve used the command pattern in this example (to get some free loading indicators and error handling) but there’s no need to do so – a regular function would suffice.

Restart node, refresh the page and you should now see the data returned from the server.

[screenshot: todo items returned from the server]

Sending Data back to the Server

We’ve managed to display data from the server, but what about if we want to save a change from the client?

Let’s add another API method that expects a PUT call to /api/todos/[id] with a body containing the JSON data.  We’ll also need to add an id property to the fake data returned by the server so that we can reference it in the URL.

The configuration of the PUT URL looks very similar to the GET configuration from earlier.

app.put("/api/todos/:todoId", function(req, res) {
    //...
});

The only difference (besides the verb) is that our URL path now includes a parameter named “todoId”, signified by the prefixed colon.  This will allow us to access the value of the ID appended to the URL through the req.params object.
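The mechanism behind a `:param` segment can be illustrated with a simplified matcher – this is a sketch of the idea, not express's actual implementation (which uses full regular-expression routing):

```javascript
// Simplified illustration of matching a ":param" route template
// against a concrete URL path.
function matchRoute(template, path) {
    var templateParts = template.split("/"),
        pathParts = path.split("/"),
        params = {};

    if (templateParts.length !== pathParts.length) { return null; }

    for (var i = 0; i < templateParts.length; i++) {
        if (templateParts[i].charAt(0) === ":") {
            // a ":name" segment captures the corresponding path segment
            params[templateParts[i].substring(1)] = pathParts[i];
        } else if (templateParts[i] !== pathParts[i]) {
            // literal segments must match exactly
            return null;
        }
    }
    return params;
}
```

For a request to `/api/todos/7`, the captured parameters would contain `todoId: "7"` – which is what express exposes as `req.params.todoId`.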

Our handler will also need access to the body of the request, and to provide that we need to configure express to use its body parser:

app.use(express.bodyParser());

Now we have access to the body of the request through the req.body property.

As our server doesn’t have a real backing store, there isn’t much we can do to actually process this call.  To demonstrate that it is actually getting through we’ll just log the details to the node console and respond with a 200 - OK for the time being.

app.put("/api/todos/:todoId", function(req, res) {
	console.log(req.params.todoId + ": " + JSON.stringify(req.body, null, 4));
	res.send(200);
	res.end();
});

We now need our view model to call this method whenever the value of the complete flag is updated by the user.  First off, let’s add another command that uses jQuery to make an AJAX call with the appropriate data.

var TodoViewModel = function(data) {
	var self = this;
	// name and complete observables as before

	this.updateServer = ko.command(function() {
		return $.ajax({
			url: "/api/todos/" + data.id,
			type: "PUT",
			contentType: "application/json",
			data: JSON.stringify({
				id: data.id,
				name: self.name(),
				complete: self.complete()
			})
		});
	});
};

This one is a bit more verbose than the getJSON call earlier as we need to call the jQuery.ajax method directly in order to PUT data.  It is also worth noting that the JSON object being sent is derived from the updated values for the name and complete fields from the relevant observables.

We can now use the subscribe method on the observable complete flag to ensure that this update function will be automatically invoked whenever the flag changes.

this.complete.subscribe(this.updateServer);

Restart node, refresh the page, and try clicking on the check boxes.  You should see confirmation of the successful call to the server in the node window.

[screenshot: node console output]

Wrapping Up

This has only been a very simple example, but hopefully demonstrates the architecture of a single page application and how it can be implemented using node and knockout.

ProxyApi: Now With Intellisense!

After announcing ProxyApi in my last post I had a few people suggest that it would be more useful if it included some kind of intellisense.

So…now it does! Install the new ProxyApi.Intellisense NuGet package and you will automatically have intellisense for the generated JavaScript API objects.

I’ve made this into a separate package for 2 reasons:

  1. The original ProxyApi package still works perfectly on its own; and
  2. The intellisense implementation is a little bit more intrusive than I would have liked

It works by adding a T4 template to the Scripts directory of your project that uses the ProxyApi classes to generate a temporary version of the script at design-time. That script is then added to _references.js so it gets referenced for any JavaScript file in the solution.

This would be fine, but unfortunately Visual Studio doesn’t have any mechanism for regenerating the T4 template automatically, meaning that changes to the API or MVC controllers wouldn’t be reflected until you either manually rebuilt the templates or restarted VS. For the time being I have worked around this with a simple PowerShell script that re-evaluates all T4 templates after each build, but hopefully I can find a more elegant solution later.

Because this does add a slight performance penalty, and because not everyone would need intellisense support, I’ve left this as an extra package. If you prefer the vanilla ProxyApi then you can grab it here.

The next step will be generating TypeScript files using a similar mechanism, which would allow the intellisense to extend to the parameter types as well.

Watch this space…

ProxyApi: Automatic JavaScript Proxies for WebAPI and MVC

Taking Inspiration from SignalR

One of my favourite features of SignalR is the automatic generation of JavaScript proxies for hub methods. By adding a hub class in C#…

//server
public class ExampleHub : Hub
{
	public void SendMessage(string message)
	{
		//do something with message
	}
}

…you can get the JavaScript wrapper for that method just by adding a reference to /signalr/hubs:

//client
$(function () {
    var hub = $.connection.exampleHub;

    $.connection.hub.start(function() {
        hub.sendMessage("a message from the client");
    });
});

This is such a useful feature that it has even been suggested as an alternative to Web API controllers.

But why should we use hubs as a replacement for MVC or Web API controllers? Why can’t we instead write similar functionality to work with controllers?

Introducing ProxyApi

ProxyApi is a small NuGet package that automatically generates JavaScript proxy objects for your MVC and Web API controller methods.

PM> Install-Package ProxyApi

(as an aside, this was my first attempt at creating a NuGet package and it was embarrassingly simple).

Once you’ve installed the NuGet package you just need to add a link to ~/api/proxies and the JavaScript API classes will be automatically created.

So what do these actually give you?

API Controllers

Let’s say you have started a new MVC4 project and you add a new “API controller with empty read/write actions”:

public class DataController : ApiController
{
    // GET api/data
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // GET api/data/5
    public string Get(int id)
    {
        return "value";
    }

    // POST api/data
    public void Post([FromBody]string value)
    {
    }

    // PUT api/data/5
    public void Put(int id, [FromBody]string value)
    {
    }

    // DELETE api/data/5
    public void Delete(int id)
    {
    }
}

Ordinarily if you wanted to call these methods from JavaScript you would need to write something like the example below using jQuery:

$.post("/api/data/123", function(data) {
    //do something with the result
});

This is actually pretty concise, but I have 2 problems with this approach:

  1. The code calling this method knows that it is making a POST call. What do I do when I want to switch my data source to local storage, or some other data accessor?
  2. The code knows about the URL.

Instead, what I would prefer is a JavaScript proxy object on which I can call a method – passing in appropriate parameters – without ever knowing where that method gets its data or how it does so. And this is what ProxyApi provides.

Add a new script tag referencing ~/api/proxies to _Layout.cshtml (after the jQuery reference) and you can start directly calling API methods without writing another line of code yourself:


$.proxies.data.get().done(function(allItems) {
  //allItems will contain ['value1', 'value2']
});

$.proxies.data.get(123).done(function(item) {
  //item will be 'value'
});

//will send 'value' to Post method on controller
$.proxies.data.post("value");

//will send id=1, value='value' to Put method on controller
$.proxies.data.put(123, "value");

//will send 123 to Delete method on controller
$.proxies.data.delete(123);

These proxy objects can now be passed to any code that needs to perform data access without ever exposing how that data is sourced. You can easily mock them for unit testing, replace them if needed, and call them without needing to know where the website is hosted.
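For instance, a unit test can hand the consuming code a hand-rolled stub in place of the generated proxy. The sketch below uses a hypothetical `getFirstItem` consumer and a synchronous stand-in for the jQuery deferred, just to show the shape of the idea:

```javascript
// Hypothetical consumer that depends only on the proxy's interface,
// never on URLs or HTTP verbs.
function getFirstItem(dataProxy, callback) {
    dataProxy.get().done(function (items) {
        callback(items[0]);
    });
}

// A hand-rolled stub standing in for the generated $.proxies.data
// object; it mimics the deferred's "done" method synchronously.
var fakeProxy = {
    get: function () {
        return {
            done: function (cb) { cb(["stubbed value"]); return this; }
        };
    }
};
```

Because `getFirstItem` only knows about the proxy's interface, swapping the real proxy for `fakeProxy` in a test requires no changes to the code under test.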

Complex Types

This is all well and good for simple data types like the strings in the example above, but what about when you want to manipulate complex types?

public class Person
{
	public int Id { get; set; }
	public string FirstName { get; set; }
	public string LastName { get; set; }
}

public class DataController : ApiController
{
    [HttpPost]
    public void UpdatePerson(Person value)
    {
    }
}

In this case, just pass a JSON object to the generated method:

$.proxies.data.updatePerson({
    Id: 123,
    FirstName: 'Steve',
    LastName: 'Greatrex'
});

This will send the JSON object as POST data to the UpdatePerson method on the controller.

And when you have both URL and body data, such as in the auto-generated Put method? Just decorate the parameters with [FromBody] or [FromUri] and the rest will be taken care of:

public void Put([FromUri]int id, [FromBody]Person value)
{
}

Note: it generally isn’t necessary to use [FromUri] as ProxyApi will assume that anything is a URL parameter unless told otherwise. The only exception to this is for POST methods that take a single parameter, which will be assumed to be POSTed.
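That default placement rule can be sketched as a small decision function (an illustration of the rule as described, not ProxyApi's actual source):

```javascript
// Sketch of the parameter-placement rule: everything is treated as a
// URL parameter, unless the method is a POST with a single parameter,
// in which case that parameter travels in the request body.
function placeParameters(verb, parameterNames) {
    if (verb === "POST" && parameterNames.length === 1) {
        return { url: [], body: parameterNames };
    }
    return { url: parameterNames, body: [] };
}
```

Explicit `[FromBody]`/`[FromUri]` attributes override this default when you need both kinds of parameter on one method, as in the Put example above.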

Non-Conventional Method Names

All the examples so far have been using conventionally-named methods, but there is no requirement for this: any method name will work:

public class DataController : ApiController
{
	public void DoSomething(int id)
	{
		// --> $.proxies.data.dosomething(123) (GET)
	}

	[HttpPost]
	public void DoSomethingElse(Person person)
	{
		// --> $.proxies.data.dosomethingelse({ ... }) (POST)
	}
}

Appropriate HTTP verbs will be used for any method based on the following rules (in priority order):

  • [Http*] attribute (e.g. [HttpPost], [HttpGet] etc.)
  • [AcceptVerbs(...)] attribute
  • Method naming conventions, e.g. DeletePerson() == DELETE
  • GET for everything else
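The priority order above can be expressed as a small lookup. This is a sketch of the rules, not ProxyApi's actual source:

```javascript
// Sketch of the verb-resolution rules: explicit attributes win,
// then naming conventions, then GET as the fallback.
var VERB_PREFIXES = ["get", "post", "put", "delete"];

function resolveVerb(methodInfo) {
    // 1 & 2: an explicit [HttpPost]/[AcceptVerbs(...)]-style attribute
    if (methodInfo.attributeVerb) {
        return methodInfo.attributeVerb.toUpperCase();
    }
    // 3: naming convention, e.g. "DeletePerson" -> DELETE
    var name = methodInfo.name.toLowerCase();
    for (var i = 0; i < VERB_PREFIXES.length; i++) {
        if (name.indexOf(VERB_PREFIXES[i]) === 0) {
            return VERB_PREFIXES[i].toUpperCase();
        }
    }
    // 4: GET for everything else
    return "GET";
}
```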

You can also specify custom names for both the proxy objects and methods using the [ProxyName] attribute.

[ProxyName("custom")]
public class DataController : ApiController
{
	[ProxyName("method")]
	public void DoSomething(int id)
	{
		// --> $.proxies.custom.method()
	}
}

Excluding and Including Elements

By default, ProxyApi will automatically include every method in every type in the current AppDomain that inherits from either System.Web.Mvc.Controller or System.Web.Http.ApiController. You can change this behaviour to exclude everything by default by changing the ProxyGeneratorConfiguration.Default.InclusionRule property:

ProxyGeneratorConfiguration.Default.InclusionRule = InclusionRule.ExcludeAll;

You can also explicitly include or exclude any element by decorating it with one of the [ProxyInclude] or [ProxyExclude] attributes:


[ProxyExclude] //excludes entire controller
public class ExcludedController : ApiController
{}

[ProxyInclude] //includes entire controller and all methods...
public class IncludedController : ApiController
{
    [ProxyExclude] //...except for explicitly excluded methods
    public void ExcludedMethod() {}
}

public class DefaultController : ApiController
{
    [ProxyExclude] //will always be excluded
    public void ExcludedMethod() {}

    [ProxyInclude] //will always be included
    public void IncludedMethod() {}

    //will be included or excluded based on the globally configured default
    public void DefaultMethod() {}
}

Returning Data & Handling Errors

The generated proxy methods all return an instance of the jQuery $.Deferred object, so you can use the done, fail and complete methods to handle the results from the controller actions:

$.proxies.person.getAllPeople()
    .done(function(people) {
        //people will contain return value of PersonController.GetAllPeople(), if it succeeds
    })
    .fail(function(err) {
        //this will be called if the controller throws an exception
        //err contains exception details
    })
    .complete(function() {
        //this will be called after success or failure
    });

You can get more information on how to use the jQuery Deferred object from the documentation.

MVC Controllers

The examples above are all based around Web API, but everything will work with MVC controllers as well:

$.proxies.home.index().done(function(content) {
    //content will contain HTML from /Home/Index
});

Source Code

I’ve put the source code (including unit tests) on GitHub so feel free to take a look around and make any changes you think are useful.

This is an early version and will probably change quite quickly, so keep an eye out for new developments. If you have any feature suggestions then leave them in the comments (or fork and write them yourself)!

Publish & Subscribe Distributed Events in JavaScript

Having recently spent some time working in WPF with the fantastic Composite Application Block (or Prism), I thought I would try bringing one of the more useful features over to JavaScript.

Distributed Events

Distributed events within Prism allow you to define a named event that can be published or subscribed to without the need to have a reference to any other object that depends on the event.

CompositePresentationEvent<string> anEvent; //injected somehow

//publish data anywhere in the application
anEvent.Publish("some event data");

//and consume it anywhere.  You just need a reference to the event
anEvent.Subscribe(data => MessageBox.Show(data));

When you are writing a large-scale application this is extremely useful, as it allows very loose coupling between components: if an object is interested in the current selected date then it just subscribes to the DateChanged event; it doesn’t care where the event is raised from.

Compare this to the traditional event subscription mechanism within .NET – where you need to know the parent object to subscribe – and it is easy to see that this method scales better as a system grows.

Bringing Distributed Events to JavaScript

Given the different natures of web and application development I have not felt too strong a need to pull this functionality over into my JavaScript development, but as I work on larger and more modular single page web applications I am beginning to see a use for them.

So what are the requirements here? I want to be able to

  • subscribe to an event without knowing where the change came from
  • publish an event from multiple sources
  • subscribe to an event in multiple locations
  • acquire and refer to events by name*

*in Prism I would generally use a type to refer to an event, but we don’t have types so we’ll use names instead

//publish the current time to the "datechanged" event
$.publish("datechanged", new Date());

//and consume changes to the date anywhere in the application
$.subscribe("datechanged", function(date) {
    alert(date);
});

Ideally I would also like to add a couple of extra features:

  • Async Invocation – the publisher should (optionally) not have to wait for the subscribers to finish processing the event
  • Stateful Events – the subscriber should be able to subscribe after a publication and still receive the details
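The stateful events idea can be sketched on its own before we get into the main implementation: the event simply caches its last publication and replays it to anyone who subscribes afterwards (illustrative code, not the final library):

```javascript
// Sketch of a stateful event: the last published value is cached
// and immediately replayed to late subscribers.
function StatefulEvent() {
    this.subscribers = [];
    this.hasValue = false;
    this.lastValue = undefined;
}

StatefulEvent.prototype.publish = function (data) {
    this.hasValue = true;
    this.lastValue = data;
    for (var i = 0; i < this.subscribers.length; i++) {
        this.subscribers[i](data);
    }
};

StatefulEvent.prototype.subscribe = function (callback) {
    this.subscribers.push(callback);
    // a late subscriber still receives the most recent publication
    if (this.hasValue) {
        callback(this.lastValue);
    }
};
```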

Implementation

Let’s start off with the subscription, as that will dictate how publication works.

Storing Subscriptions

The first thing is to be able to store multiple subscribers against a named event, and the simplest way to do that is to use an object with array properties:

//create an events object to store name -> event mappings
var events = {},

	//and use a function to create singleton event objects as needed
	getEvent = function(eventName) {
		if (!events[eventName]) {
			events[eventName] = {
				subscribers: []
			};
		}
		
		return events[eventName];
	};

Here we have a getEvent method that will check to see if the named event already exists, and will create an empty one if needed.

Note: I’m using an object with a subscriptions array property (instead of just the array itself) so that we can store a bit of metadata alongside the subscriber list later.
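The important property of getEvent is that repeated lookups for the same name return the same object, so publishers and subscribers always converge on one shared event. A quick stand-alone illustration:

```javascript
// stand-alone copy of the getEvent helper, to show its singleton behaviour
var events = {};
function getEvent(eventName) {
	if (!events[eventName]) {
		events[eventName] = { subscriptions: [] };
	}
	return events[eventName];
}

var first = getEvent("datechanged");
var second = getEvent("datechanged");
// first === second: both lookups return the same singleton event object
```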

The subscribe method then becomes:

$.subscribe = function (event, callback) {
	var subscription;

	if (typeof callback === "function") {
		subscription = { callback: callback };
	} else {
		subscription = callback;
		if (!subscription.callback) {
			throw "Callback was not specified on options";
		}
	}
	
	getEvent(event).subscriptions.push(subscription);
};

This creates a subscription object containing the callback function (again, using an object to allow some metadata storage later), then uses the getEvent method from earlier to acquire or create the event object and append to the list of subscribers.

We’re allowing this to be called with either a callback function or an options object as the second parameter, so that users that don’t want to specify extra options can use a less verbose syntax.

Publishing Events

Now that we have a list of subscribers attached to each event it is simple enough to write the publish implementation: all we need to do is find the list of subscribers for the event and invoke the callback on each.

$.publish = function (eventName, data) {
	var subscriptions = getEvent(eventName).subscriptions;
	
	for (var i = 0; i < subscriptions.length; i++) {
		(function (subscription, data) {
			subscription.callback.call(null, data);
		}(subscriptions[i], data));
	}
};
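To see the whole mechanism working end to end, here is a condensed, self-contained version of the subscribe/publish pair above (a sketch for illustration, not the library code itself):

```javascript
// condensed, stand-alone version of the subscribe/publish logic above
var events = {};
function getEvent(eventName) {
	if (!events[eventName]) {
		events[eventName] = { subscriptions: [] };
	}
	return events[eventName];
}
function subscribe(eventName, callback) {
	getEvent(eventName).subscriptions.push({ callback: callback });
}
function publish(eventName, data) {
	var subscriptions = getEvent(eventName).subscriptions;
	for (var i = 0; i < subscriptions.length; i++) {
		subscriptions[i].callback.call(null, data);
	}
}

var received = [];
subscribe("datechanged", function (date) { received.push(date); });
subscribe("datechanged", function (date) { received.push(date); });
publish("datechanged", "2013-01-01");
// both subscribers receive the payload, without knowing about each other
```

Neither subscriber knows about the publisher or the other subscriber; the event name is the only shared contract.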

Supporting Async Events

Quite often, the object sourcing the event doesn’t need to wait on the objects that are listening to events. One of the benefits of loose coupling like this is that producers can ignore the actions of consumers, but at the moment our implementation will cause the publishing object to block until all of the subscribers have finished processing…which could take a while.

To work around this problem we can allow each subscriber to specify whether they want their event to be processed synchronously or asynchronously. With JavaScript being single-threaded (sorta) this means something slightly different to what it would in a WPF application, but the important part is to avoid blocking the publisher.

We can use setTimeout with a low delay in our publish implementation to allow the publisher to continue processing uninterrupted, with the event handler executing once the current call stack has completed.

if (subscription.async) {
	setTimeout(function () {
		subscription.callback.call(null, data);
	}, 4);
} else {
	subscription.callback.call(null, data);
}

Here we are determining whether or not to use async processing based on a flag on the subscription, and as we allowed an options object to be passed into our subscribe function we don’t need any changes there:

$.subscribe("event", {
  async: true,
  callback: function() { /*...*/ }
});

You can see the difference in behaviour between sync and async event handlers in this jsFiddle example.
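The ordering difference can be sketched without jQuery at all: a sync handler runs inline before the publisher's next statement, while an async handler is deferred until the current call stack has unwound (a self-contained illustration, not the library code):

```javascript
// self-contained illustration of sync vs async dispatch
var order = [];

function dispatch(subscriptions, data) {
	subscriptions.forEach(function (subscription) {
		if (subscription.async) {
			// defer the handler so the publisher is not blocked
			setTimeout(function () { subscription.callback(data); }, 4);
		} else {
			subscription.callback(data);
		}
	});
}

dispatch([
	{ async: true, callback: function () { order.push("async handler"); } },
	{ callback: function () { order.push("sync handler"); } }
], null);
order.push("publisher continues");

// synchronously, order is ["sync handler", "publisher continues"];
// "async handler" is only appended once the current stack has unwound
```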

Stateful Events

Perhaps “stateful” isn’t the best name for this concept, but it makes sense to me as the event is aware of its last publication, so it has a state.

The use case for this feature is where a subscriber relies on a piece of information being published, but it cannot guarantee that it will subscribe before that publication.

The implementation is simple enough: take a copy of the event payload on each publish…

getEvent(eventName).lastPayload = data;

…and then serve it up as if it were a new publication whenever a subscriber requests to be ‘stateful’ in the subscribe method…

if (subscription.stateful && getEvent(event).hasOwnProperty("lastPayload")) {
	//only replay if the event has been published at least once
	subscription.callback.call(null, getEvent(event).lastPayload);
}

As with the async implementation, we are already allowing users to specify an options object when they subscribe so there’s no need for any further changes.

//publish some important information
$.publish("eventname", "some data");

//...wait a while...

//then later, when something else subscribes
$.subscribe("eventname", {
    stateful: true,
    callback: function(data) {
        //this will be called immediately with "some data"
    }
});

Source & Download

I’ve packaged this up alongside my various KnockoutJS utilities (available here) – which have also had a bit of cleaning up in the last week – but as this doesn’t rely on the Knockout library you can grab a separate copy here.

Command Pattern with jQuery.Deferred & Knockout

Update: this feature is now available as part of the ko.plus library available on GitHub and NuGet!


The command pattern is a design pattern that encapsulates all the information required to perform an operation in a new object, allowing that operation to be performed later.  Working in WPF using the MVVM pattern it is almost impossible to get away from commands and the ICommand interface, so when I started writing view models in Knockout that had to perform actions I started to miss commands quite quickly.

Whenever I wanted to do something simple, like make an AJAX call…

var ViewModel = function () {
	this.doSomethingOnTheServer = function () {
		$.ajax(/*...*/);
	}
};

…I would decide to notify the user that the operation was processing…

var ViewModel = function () {
	var _self = this;
	this.isRunning = ko.observable(false);
	this.doSomethingOnTheServer = function () {
		_self.isRunning(true);
		$.ajax(/*...*/)
		.always(function() { _self.isRunning(false); });
	}
};

…and then to notify them if there was an error…

var ViewModel = function () {
	var _self = this;
	this.isRunning = ko.observable(false);
	this.errorMessage = ko.observable();
	this.doSomethingOnTheServer = function () {
		_self.isRunning(true);
		_self.errorMessage("");
		$.ajax(/*...*/)
		    .always(function () { _self.isRunning(false); })
		    .fail(function (_, message) { _self.errorMessage(message); });
	}
};

…and before long my view model was becoming unmanageably large.

Enter the Command

Instead of writing a view model a thousand lines long I decided to encapsulate all of that boilerplate code in a nice new object: Command

var ViewModel = function () {
	this.doSomethingOnTheServer = new Command({
		action: function () {
			return $.ajax(/*...*/);
		},
		done: function (data) {
			//...
		}
	});
};

var vm = new ViewModel();
vm.doSomethingOnTheServer.execute();

Note: because my commands in Knockout are invariably AJAX calls, I have made it a requirement that the ‘action’ of the command always returns a jQuery.Deferred object.
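An action that does no AJAX can still satisfy this contract by returning something Deferred-shaped. The sketch below uses a hand-rolled stub in place of jQuery.Deferred purely so it is self-contained; with jQuery available you would simply return `$.Deferred().resolve(...)`:

```javascript
// minimal Deferred-like stub (a stand-in for jQuery.Deferred, for illustration)
function deferredStub() {
	var doneCallbacks = [], alwaysCallbacks = [], failCallbacks = [],
		resolved = false, resolvedValue;
	var d = {
		done: function (cb) {
			// like jQuery, fire immediately if already resolved
			if (resolved) { cb(resolvedValue); } else { doneCallbacks.push(cb); }
			return d;
		},
		always: function (cb) {
			if (resolved) { cb(); } else { alwaysCallbacks.push(cb); }
			return d;
		},
		fail: function (cb) { failCallbacks.push(cb); return d; },
		resolve: function (value) {
			resolved = true;
			resolvedValue = value;
			doneCallbacks.forEach(function (cb) { cb(value); });
			alwaysCallbacks.forEach(function (cb) { cb(); });
			return d;
		}
	};
	return d;
}

// a synchronous, non-AJAX action can still return a promise-shaped object
function syncAction() {
	return deferredStub().resolve("finished");
}

var result;
syncAction().done(function (value) { result = value; });
// result is "finished"
```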

So what are we doing here?

Notification Properties

Our view model needs two properties to store the status of the operation: isRunning and errorMessage. I could add a hasError flag for completeness, but the absence of an error message can be used to infer the absence of an error.

We can create these using normal knockout observable properties:

var Command = function () {
	var _self = this,

	//flag to indicate that the operation is running
	_isRunning = ko.observable(false),

	//property to save the error message
	_errorMessage = ko.observable();

	//public properties
	this.isRunning = _isRunning;
	this.errorMessage = _errorMessage;
};

The Action

When we create a command we will need to specify the action that will be performed. Let’s pass this in as a constructor parameter, and throw an error nice and early if no action has been set:

var Command = function (options) {
	//check an action was specified
	if (!options.action) throw "No action was specified in the options";

	//... rest unchanged ...

};

Now that we have an action we can start to implement the execute method that will do the work. This method needs to:

  1. Set isRunning to true and clear any old error message
  2. Invoke the action from the constructor options
  3. Check that the action function has returned a Deferred object, and attach appropriate event handlers:
    • Always set isRunning back to false
    • If the operation failed, set the errorMessage property

var Command = function (options) {
	//...
	var _execute = function () {
		//notify that we are running and clear any existing error message
		_isRunning(true);
		_errorMessage("");

		//invoke the action and get a reference to the deferred object
		var promise = options.action.apply(this, arguments);

		//check that the returned object *is* a deferred object
		if (!promise || !promise.done || !promise.always || !promise.fail)
			throw "Specified action did not return a promise";

		//set up our callbacks:
		promise
		//always notify that the operation is complete
			.always(function () { _isRunning(false); })
		//save the error message if there is one
			.fail(function (_, message) { _errorMessage(message); });
	};

	//...
	this.execute = _execute;
};

Note: I am using apply to call the action method (instead of calling it directly) as it allows us to pass parameters if needed.
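A small sketch of why apply matters here: whatever arguments are passed to execute flow straight through to the action (self-contained, with a hypothetical parameterised action):

```javascript
// forwarding execute's arguments to the action via apply
function makeExecute(action) {
	return function () {
		return action.apply(this, arguments);
	};
}

// hypothetical action that takes parameters
var execute = makeExecute(function (a, b) {
	return a + b;
});

var sum = execute(2, 3); // sum is 5
```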

Completed Handlers

So far so good, but it’s rare that you don’t want to do something more than just notify the user when an operation completes. Let’s add the ability to pass in success and failure event handlers on the constructor options:

var Command = function (options) {
	var _execute = function () {
		//...as before...

		//attach any success or failure handlers
		if (options.done) promise.done(options.done);
		if (options.fail) promise.fail(options.fail);
	};
};

Note: as we are using the jQuery Deferred object to attach the event handlers they will automatically be passed any relevant arguments (e.g. AJAX data, error messages etc) so we don’t have to do any extra work here.

Fin

And that’s it. The full source for the Command is:

var Command = function (options) {
	//check an action was specified
	if (!options) throw "No options were specified";
	if (!options.action) throw "No action was specified in the options";

	var _self = this,

	//flag to indicate that the operation is running
	_isRunning = ko.observable(false),

	//property to save the error message
	_errorMessage = ko.observable(),

	//execute function
	_execute = function () {
		//notify that we are running and clear any existing error message
		_isRunning(true);
		_errorMessage("");

		//invoke the action and get a reference to the deferred object
		var promise = options.action.apply(this, arguments);

		//check that the returned object *is* a deferred object
		if (!promise || !promise.done || !promise.always || !promise.fail)
			throw "Specified action did not return a promise";

		//set up our callbacks
		promise
		//always notify that the operation is complete
			.always(function () { _isRunning(false); })
		//save the error message if there is one
			.fail(function (_, message) { _errorMessage(message); });

		//attach any success or failure handlers
		if (options.done) promise.done(options.done);
		if (options.fail) promise.fail(options.fail);
	};

	//public properties
	this.isRunning = _isRunning;
	this.errorMessage = _errorMessage;
	this.execute = _execute;
};

The source is also available on GitHub along with unit tests.