Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points at which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

…and we’re done!

Excluding Current RouteData from UrlHelper

By default, the MVC UrlHelper will include all of the route values for the current route in its calculations.

This means that unless you explicitly override them you can get situations like this:

<!-- on page /Person/View/1 -->
<a href="@Url.Action("View", "Pet")">View Animal</a>
<!-- URL resolves to /Pet/View/1 -->

Disaster – the ID from the current request has been included in the new URL!

In some cases this can be very useful – this is the reason that you don’t need to specify a controller if you are already within a view on the same controller – but can be very annoying when you want to create a URL in isolation (see here and here).

Using the Isolate Extension

To get around this problem I have written an Isolate extension method that can be used as below:

<!-- on page /Person/View/1 -->
<a href="@Url.Isolate(u => u.Action("View", "Pet"))">View Animal</a>
<!-- URL resolves to /Pet/View -->

The extension works by temporarily removing all of the existing route values from the specified instance of UrlHelper, executing the action, and then re-inserting the original route values before returning the result.

public static TResult Isolate<TResult>(this UrlHelper urlHelper, Func<UrlHelper, TResult> action)
{
	var routeValues = urlHelper.RequestContext.RouteData.Values;
	var currentData = routeValues.ToDictionary(kvp => kvp.Key, kvp => kvp.Value);
	routeValues.Clear();
	try
	{
		return action(urlHelper);
	}
	finally
	{
		foreach (var kvp in currentData)
			routeValues.Add(kvp.Key, kvp.Value);
	}
}

It’s a basic solution and there are some (predictable) scenarios where it will fall down, but it solved my immediate problem without adding too much bloat to the code.

Handling ‘this’ in ko.command

Update: this feature is now available as part of the ko.plus library available on GitHub and NuGet!


The problem of context – the value of this – in JavaScript is one that never seems to go away.  Languages with similar syntax (C#, Java) do not allow the developer to alter the value of this, and so people don’t always expect that it can change.

JavaScript likes to be different though.

I don’t want to get into too much detail on how this behaves – there are already more than enough articles out there that cover the topic to a greater depth than I could (e.g. Scope in Javascript); instead, this is a post about how I have worked around the issue in my ko.command library.

Problematic Command Context

Take the following simple example usage of the command library.

function ViewModel() {
    this.value = ko.observable(123);
    this.increment = ko.command(function() {
        this.value(this.value()+1);
    });
}

var viewModel = new ViewModel();
viewModel.increment();
//viewModel.value() => 124

The increment command adds 1 to the value of an observable property on the same view model; everything is working so far, but what if we move the implementation of the command action onto the prototype?

function ViewModel() {
    this.value = ko.observable(123);
    this.increment = ko.command(this._increment);
}

ViewModel.prototype._increment = function() {
    this.value(this.value()+1);
}

We might expect this to behave in the same way as before, but now when we call increment we get an error; a little investigation shows that this is because the value of this within the _increment function is not set to the view model.
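
The root cause is plain JavaScript behaviour rather than anything Knockout-specific.  A stripped-down illustration (hypothetical, not library code):

// A function reference is passed around as a plain value, so nothing ties it
// back to the view model by the time it is eventually invoked.
var viewModel = new ViewModel();
var detached = viewModel._increment;

detached(); // TypeError: 'this' is not the view model, so this.value is not a function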

Now It (mostly) Just Works

This behaviour was actually down to a design decision in the earlier version of the library to set the context of the various callbacks to the command itself; only recently has this started to cause problems that have prompted me to fix it.

The updated behaviour (available for download from Github) now endeavours to “just work” wherever possible.  This means that in the scenario above there are no code changes required to use the prototype implementation.

To be specific, the command action will always be invoked in the context from which the command was called which should (in most cases) behave in a way that seems to make sense.
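
As a rough sketch of the idea (illustrative only, not the actual ko.command source, and makeCommand is a hypothetical name), a command can forward whatever context it was invoked with straight on to the action:

function makeCommand(action) {
    return function () {
        // 'this' here is whatever the command was called on...
        return action.apply(this, arguments); // ...so pass it through to the action
    };
}

function ViewModel() {
    this.value = ko.observable(123);
    this.increment = makeCommand(ViewModel.prototype._increment);
}

ViewModel.prototype._increment = function () {
    this.value(this.value() + 1);
};

var viewModel = new ViewModel();
viewModel.increment(); // invoked on viewModel, so 'this' inside _increment is viewModel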

There are some specific scenarios where a little more work is needed though.

CanExecute Function

The canExecute property on any ko.command instance is currently implemented as a computed observable, and as explained in the Knockout documentation, computed observables can be a little tricky when dealing with the context.

The behaviour does make sense when you consider the cause: computed observables will be re-evaluated whenever a dependent observable changes, so there is no way for them to (automatically) guarantee the context in which they will be invoked.

It is, however, possible to explicitly specify a context for a computed observable, so ko.command has been extended to mimic this implementation:

function ViewModel() {
    this.value = ko.observable(123);
    this.increment = ko.command({
        context: this,
        action: this._increment,
        canExecute: this.somethingThatReliesOnScope
    });
}

In this example the value of this when running the canExecute function will always be set to the current instance of ViewModel.

Note: explicitly setting the context in this way will also override the default behaviour when invoking commands, so use it carefully!

Asynchronously-Invoked Callbacks

The callbacks to an asynchronous command can be attached using the done/fail/always functions and are invoked once the promise returned by the action has completed.

function ViewModel() {
    this.value = ko.observable(123);
    this.incrementAsync = ko.command(function() {
        var promise = $.Deferred();
        promise.resolve();
        return promise.promise();
    }).done(this._increment);
}

But in what context will they be executed?

This scenario is a little more complicated because the promise implementation itself is able to specify the context in which callbacks should be invoked (in jQuery this is achieved using resolveWith).  In this case we have 3 choices:

  1. Do nothing.  Leave the context for the callback as whatever is set by the promise
  2. Replace the promise context with the context from the command
  3. Replace the promise context, but only if it has been explicitly specified.

For the time being I decided to leave the behaviour unchanged as it feels like changing it – forcing the context back to the command context – would be breaking the expected behaviour of whoever has explicitly (or implicitly) set the context of the callback.
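
For reference, this is how a jQuery promise can dictate the context of its own callbacks (a standalone example, unrelated to ko.command internals):

var deferred = $.Deferred();

deferred.done(function () {
    console.log(this.name); // -> "explicit context"
});

// resolveWith sets 'this' for the attached callbacks
deferred.resolveWith({ name: "explicit context" });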

ProxyApi & Anti-Forgery Tokens

Anti-Forgery Tokens?

Good question.  Anti-forgery tokens are a recommended way of preventing one of the OWASP Top Ten security vulnerabilities: Cross Site Request Forgery, or CSRF.

CSRF works on the basis that once you have logged into YourSite using your browser, any request to that domain will share the authentication information.  Normally, requests to YourSite would come from YourSite, but other developers are perfectly capable of writing some code on their site that tries to make a request to YourSite to do something evil.

Though there are a few ways of preventing or reducing the risk of CSRF attacks, anti-forgery tokens are the currently recommended approach.

So how do they work?  Whenever the server serves up a page that may result in a submission (e.g. a page that contains a form) it sets a randomly-generated cookie value.  The client must then include the random value in both a hidden form field and the request cookie; otherwise, the server will reject the request as invalid.  Attackers will not be able to read the cookie value; therefore they cannot include it as a form field and so their attack fails.

ASP.NET MVC Implementation

MVC makes it very easy to implement anti-forgery tokens.  Very easy.

Step 1: add an attribute to your action or controller

[ValidateAntiForgeryToken]
public ActionResult DoSomething()
{
    //…
}

Step 2: include the following within the form on the page

@Html.AntiForgeryToken()

Unfortunately WebAPI does not have a similar implementation, but thankfully there are a lot of examples out there (e.g. Kamranicus’ example and the MVC SPA template) of how to achieve similar functionality that works with WebAPI.

So how can we adapt these ideas to work with ProxyApi?

ProxyApi Implementation

The intention of this library is to allow you to quickly create proxy classes for WebAPI methods; because it is expected to be running in the browser (it generates JavaScript, after all) it will be using cookie authentication and should therefore consider CSRF.

Ideally, the developer using the library shouldn’t have to do anything more than they already do for their MVC implementation, so that seems like a good convention to follow.

Setting The Token

As with MVC, setting the cookie token and inserting the hidden form value onto the page is done by calling the Html.AntiForgeryToken() method in your view.  This is deliberately identical to the MVC method to keep things as consistent as possible.
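
For reference, the MVC helper renders a hidden input roughly like the one below alongside setting the cookie value; the ProxyApi behaviour is deliberately consistent with this:

<form action="/Something/DoSomething" method="post">
    @Html.AntiForgeryToken()
    <!-- renders something like: -->
    <!-- <input name="__RequestVerificationToken" type="hidden" value="..." /> -->
</form>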

Decorating the Controller

Following the same pattern as MVC and the examples listed above, the ProxyApi implementation uses an attribute that can be specified against a controller or an action:

[ValidateHttpAntiForgeryToken]
public void PostSomething(Something data)
{
    //...
}

This attribute is an extension of AuthorizationFilterAttribute that uses the cookie token and the hidden-field token to validate the request.  The second value – the one that would normally be included as a hidden form field – is instead expected as a custom header value: X-RequestVerificationToken.  This approach avoids complications in combining the ProxyApi automatically-generated POST data and a custom form field.

Because WebAPI is often used for non-browser-based access, the attribute also allows you to optionally specify any types of authentication (e.g. Basic) that should be excluded from the verification process.
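
As a rough sketch of what this kind of header-based validation can look like (a simplified, hypothetical version; the real ProxyApi attribute differs in detail and also handles the authentication-type exclusions mentioned above):

public class ValidateHttpAntiForgeryTokenSketch : AuthorizationFilterAttribute
{
	public override void OnAuthorization(HttpActionContext actionContext)
	{
		var request = actionContext.Request;

		//the cookie half of the token pair, set by Html.AntiForgeryToken()
		var cookie = request.Headers
			.GetCookies(AntiForgeryConfig.CookieName)
			.Select(c => c[AntiForgeryConfig.CookieName])
			.FirstOrDefault();

		//the "form field" half, sent as a custom header instead of POST data
		IEnumerable<string> headerValues;
		request.Headers.TryGetValues("X-RequestVerificationToken", out headerValues);

		try
		{
			AntiForgery.Validate(
				cookie == null ? null : cookie.Value,
				headerValues == null ? null : headerValues.FirstOrDefault());
		}
		catch (HttpAntiForgeryException)
		{
			actionContext.Response = request.CreateErrorResponse(
				HttpStatusCode.Forbidden, "Anti-forgery token validation failed");
		}
	}
}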

Passing the Hidden Token to the Server

The JavaScript implementation of the proxy objects allows you to specify either a concrete value or an accessor function to get the form field value:

$.proxies.myController.antiForgeryToken = "1234abc";

// or

$.proxies.myController.antiForgeryToken = function() { 
    return $("#someField").val();
};

By default, this function will use jQuery to locate the hidden input generated by the Html.AntiForgeryToken() method and use its value.
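
Under standard MVC conventions that hidden input is named __RequestVerificationToken, so the default accessor behaves roughly like this (illustrative only):

$.proxies.myController.antiForgeryToken = function() {
    //find the hidden field rendered by Html.AntiForgeryToken() and use its value
    return $("input[name='__RequestVerificationToken']").val();
};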

Summary

Overall, this implementation is nothing groundbreaking.  It borrows heavily from the SPA MVC template and from other examples online, but it does allow ProxyApi to prevent CSRF attacks with minimal change to the code for developers.

The source code for this is available on GitHub, and the updated package is available for download via nuget.

Exception Handling for Web API Controller Constructors

The generally-recommended best practice for exception handling within Web API is to use exception filters.  Once registered, these classes sit in the processing pipeline for a message and can react to exceptions that are thrown by actions.

A Problem

The issue with the statement above is the qualifier “by actions”.  While an exception filter will correctly handle any errors thrown from within an action method, it will be bypassed by exceptions thrown during the creation of the controller.

These exceptions include two categories of error: exceptions thrown from within the controller constructor, and a failure to locate or invoke an appropriate constructor.  The latter problem is, for me, the more common – I use the Autofac MVC & WebAPI integrations (highly recommended, by the way) to handle dependency injection in controllers, and there are quite often scenarios where one of the dependencies is not available.  In these cases I really need a way to catch and to nicely handle those exceptions.

One way in which we can achieve this lofty aim is by creating a custom implementation of IHttpControllerActivator.

The Controller Activator

The IHttpControllerActivator interface only contains one method:

IHttpController Create(
	HttpRequestMessage request,
	HttpControllerDescriptor controllerDescriptor,
	Type controllerType
)

This method is responsible for creating and returning an instance of a specified controller before the API action is invoked.  This is perfect for our scenario because it is a very specific responsibility; we need a custom implementation, but we will not have to worry about how the controller type is selected, how the action is selected or how it is invoked.

Implementing a Decorator

To be honest, we don’t really want to get into how the controller is actually created – we just want to wrap it in a try { … } catch { … } – so instead of creating our own activator from scratch we can write a decorator that wraps the existing implementation.

public class ExceptionHandlingControllerActivator : IHttpControllerActivator
{
	private IHttpControllerActivator _concreteActivator;

	public ExceptionHandlingControllerActivator(IHttpControllerActivator concreteActivator)
	{
		_concreteActivator = concreteActivator;
	}
		
	public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
	{
		try
		{
			return _concreteActivator.Create(request, controllerDescriptor, controllerType);
		}
		catch (Exception ex)
		{
			//custom handler logic here; for example, log the error and
			//surface a friendly message instead of the raw exception
			throw new HttpResponseException(
				request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));
		}
	}
}

This simple class is constructed with a concrete instance of IHttpControllerActivator, then calls down to that concrete instance within a try/catch block.  We can then implement our custom exception handling in the catch.

Now all we need to do is replace the default activator with our one.

Hooking It Up

We need to tell Web API to use our new controller activator instead of the default, and (as with so much else in Web API) we do this through the HttpConfiguration object; specifically, the Services property.

This comes with a convenient Replace method that allows us to insert our implementation in place of the default version.  We also want to pass that default into the constructor of our class, so we end up with something like this:

GlobalConfiguration.Configuration.Services.Replace(typeof(IHttpControllerActivator), 
	new ExceptionHandlingControllerActivator(
		GlobalConfiguration.Configuration.Services.GetHttpControllerActivator()
	)
);

It looks a little messy, but it’s not complicated: grab a reference to the current activator, pass it into our decorator, then pass that into the Replace method.

Simple!

Translating Date Formats in JavaScript

Urgh, dates.  It’s always bloody dates.  Whether it’s different time zones, localised formats (both long and short) or any one of a hundred other annoying little problems…somehow, whenever you start working with dates, everything goes to hell.

We can all agree on how we should be dealing with dates, right?  ISO 8601 for everything in the backend (transport & database), then a localised format to present to the user in the UI layer. As Randall Munroe recently said/drew:

[Image: xkcd comic on date formats]

That’s all well and good, but when your UI layer makes use of JavaScript you have the additional headache of dealing with the JavaScript Date object (though you can avoid real agony by using momentjs).

The Most Important Piece of (Off-Topic) Advice in this Post

If you are not already using momentjs for every date-related scenario then you are almost certainly wasting a lot of time.  It’s under 6kb, it will do more-or-less everything you want with dates…go get it now.

The problem with dates in JavaScript gets even worse if you are pulling your date format from somewhere other than the client.  Suppose that you have a “preferred date format” setting for each user on the server – you need to somehow pass that into your JavaScript.

Admittedly momentjs makes this easier: you can parse the dates from the server using ISO format and then display them in whatever crazy format you want.

var dateFromServerInIsoFormat = "2013-01-14",
    preferredFormatFromServer = "D * MMM * YYYY",
    parsed                    = moment(dateFromServerInIsoFormat, "YYYY-MM-DD"),
    formatted                 = parsed.format(preferredFormatFromServer);

console.log(formatted);
// -> 14 * Jan * 2013

This works – it will allow us to format dates within our JavaScript using a date format specified by the server – but it relies on an incorrect assumption: that the server and momentjs share a date format language.

One Format, Many Format Definitions

Perhaps the best way to demonstrate this problem is with an example.  Let’s assume that we are running ASP.NET on our server, using momentjs to handle dates within our JavaScript, and (just to spice things up) we have a couple of third-party JavaScript components: jQuery UI Datepicker and jqPlot.

We have a user setting stored on the server that defines our “preferred date format” as…

January 15, 2013

Let’s take a look at how we would define that for our four components.

.NET                   MMMM d, yyyy    Nice and simple so far
momentjs               MMMM D, YYYY    Slightly different, but still pretty close
jQuery UI Datepicker   MM d, yy        Ok, getting a little bit trickier…
jqPlot                 %B %#d, %Y      Wait, what?!

What’s going on here?!  You can just about see how you could convert from the .NET format to momentjs, or even to the jQuery UI one without too much pain, but jqPlot is just a mess!

Translating Between Formats

After being plagued by numerous variations on this problem on a recent project, I decided to deal with it properly by creating something to translate between (theoretically) any 2 formats.

The dateFormat object contains definitions for the different format languages, and the convert function translates between them:

var formatFromServer = "MMMM d, yyyy",
    momentFormat = dateFormat.convert(formatFromServer,
                                      dateFormat.dotnet,
                                      dateFormat.moment),
    jqPlotFormat = dateFormat.convert(formatFromServer,
                                      dateFormat.dotnet,
                                      dateFormat.jqplot);

console.log("moment: " + momentFormat);
// -> moment: MMMM D, YYYY

console.log("jqPlot: " + jqPlotFormat);
// -> jqPlot: %B %#d, %Y

Each language definition contains a list of mappings from named tokens (e.g. day-of-week) to their representation in the format:

dateFormat.dotnet = {
    "day-of-month-1": "d",
    "day-of-month-2": "dd",
    "day-of-week-abbr": "ddd",
    "day-of-week": "dddd"
    //...
};

This allows the convert function to create a mapping from one language to another, then use regular expressions to locate and replace matching tokens in the source string.
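
A minimal sketch of that approach (illustrative only; the real dateFormat.convert handles more edge cases) might look something like this:

function convert(format, fromLanguage, toLanguage) {
    //tokens that exist in both languages, longest source token first so that
    //e.g. "MMMM" is matched before "MM"
    var names = Object.keys(fromLanguage)
        .filter(function (name) { return toLanguage[name]; })
        .sort(function (a, b) { return fromLanguage[b].length - fromLanguage[a].length; });

    var escape = function (text) {
        return text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    };

    var pattern = new RegExp(names.map(function (name) {
        return escape(fromLanguage[name]);
    }).join("|"), "g");

    //replace each matched source token with the equivalent target token
    return format.replace(pattern, function (match) {
        var name = names.filter(function (n) { return fromLanguage[n] === match; })[0];
        return toLanguage[name];
    });
}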

The source code is available on GitHub along with the tests and some distributables.  At the moment the library only supports 3 formats (.NET, momentjs and jqplot) as these were the 3 I needed most recently, but that list will expand over time – hopefully with some help from the community.

This jsFiddle has a running example of the conversion so feel free to play around, or grab the source yourself from one of the links below.

That’s one fewer headache when you next find yourself stuck working with dates!

Explicitly Controlling Cache Dependencies in MVC

In the past, I have always used the OutputCacheAttribute  when I wanted to cache the result of an Action in my MVC application; it’s simple, and it gets basic caching up and running with very little effort.

Unfortunately “simple and quick” are not quite as useful when you need a little more control…

Scenario

Let’s say you have a resource that you automatically generate in your controller, but that creating that resource takes a long time…

public async Task<ActionResult> Slow()
{
	var resource = await _service.GenerateResource();
	// ...5 minutes later...
	return View(resource);
}

Understandably, you want to cache the result of this action so that subsequent visitors don’t all have to wait around for the resource to be created.  [OutputCache] to the rescue, right?

Not So Fast!

Unfortunately, the resource is not entirely constant – it can be altered by certain user actions in other areas of the application, and when that happens it will need to be regenerated.

[OutputCache] will allow us to vary the cached result by a number of variables (host, action arguments, custom strings…) and supports timeouts etc, but it does not allow another piece of code somewhere in the application to say “that resource is no longer valid”.

An Alternative to [OutputCache]

One alternative means of caching the results of an action is to call the AddCacheDependency method on the Response object:

public async Task<ActionResult> Index()
{
	Response.AddCacheDependency(new CacheDependency(...));
			
	//...
}

The CacheDependency instance in this example is an ASP.NET class that is used by the caching framework to determine when the resource we created has been changed.

The base CacheDependency implementation allows you to specify one or more file system paths that will be monitored, and will invalidate itself when any of those files is updated.  There is also a SqlCacheDependency class that observes the results of a SQL query and invalidates when they change.

Neither of these implementations will give us quite what we are looking for –  the ability to invoke a method from anywhere within the codebase that explicitly marks the cached content as changed – but they are leading us in the right direction.

If we can create a CacheDependency instance that is uniquely identifiable for the slow resource then we can add it to the response and force it to invalidate itself at a later date.

Extending CacheDependency

Before we can get into how we add our CacheDependency instance, we need to create an implementation that will allow us to explicitly invalidate it through code.  Thankfully, CacheDependency exposes a number of methods to its inheriting classes that mean we can achieve our explicit invalidation very easily.

The minimum that we need to do to make the CacheDependency work is to pass a list of paths into the base constructor and to provide a valid value from the GetUniqueID method.  We know that we do not want to depend on any file system resources so we can just pass an empty list into the constructor, and as we need a unique ID anyway (to identify the cached resource later) we can just pass that ID into the constructor.

class ExplicitCacheDependency : CacheDependency
{
	private string _uniqueId;

	public ExplicitCacheDependency(string uniqueId)
		: base(new string[0]) //no file system dependencies
	{
		_uniqueId = uniqueId;
	}
		
	public override string GetUniqueID()
	{
		return _uniqueId;
	}
}

CacheDependency has a protected NotifyDependencyChanged method that will notify the caching framework that the cached item is no longer valid. In most implementations this would be invoked in some callback, but for our purposes we can just add a new Invalidate method and invoke it directly:

public void Invalidate()
{
	base.NotifyDependencyChanged(this, EventArgs.Empty);
}

Voila – a cache dependency that we can explicitly invalidate.  But how can we get a reference to this in the various places that we need it?

CacheDependencyManager

Creating a new cache dependency doesn’t help us much if we can’t get a reference to it later – otherwise, how can we call Invalidate?  Let’s create a new class that handles the creation and invalidation of the new cache dependencies: CacheDependencyManager.

class CacheDependencyManager
{
	private Dictionary<string, ExplicitCacheDependency> _dependencies
		= new Dictionary<string, ExplicitCacheDependency>();

	public CacheDependency GetCacheDependency(string key)
	{
		if (!_dependencies.ContainsKey(key))
			_dependencies.Add(key, new ExplicitCacheDependency(key));

		return _dependencies[key];
	}

	public void InvalidateDependency(string key)
	{
		if (_dependencies.ContainsKey(key))
		{
			var dependency = _dependencies[key];
			dependency.Invalidate();
			dependency.Dispose();
			_dependencies.Remove(key);
		}
	}
}

Note: in the interests of brevity I have not included the thread-safe version here; this is a terrible idea in the real world, so make sure you include some appropriate locking!

This CacheDependencyManager is intended to hide the detail of how the dependency instances are created and invalidated from calling classes, which it achieves through 2 public methods:

  • GetCacheDependency that creates a new ExplicitCacheDependency if none exists for the specified key, or returns the cached one if it has been previously created
  • InvalidateDependency that attempts to locate an existing cache dependency for the specified key, and invalidates and removes it if one is found.  If one doesn’t exist then it does nothing, so callers don’t need to know whether or not the resource has already been cached

One quick singleton pattern later and we can call these methods from throughout our codebase. When we invoke our slow action for the first time we need to add the dependency to the response using a unique key to identify the slow resource:

Response.AddCacheDependency(
		CacheDependencyManager.Instance.GetCacheDependency("ResourceKey"));
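
The singleton itself can be as simple as a static instance on the manager (a minimal sketch; as noted below, dependency injection is the better option in real code):

class CacheDependencyManager
{
	public static readonly CacheDependencyManager Instance = new CacheDependencyManager();

	//...other members as shown above
}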

And how do we invalidate the resource after some arbitrary user action? We can just pass that same key into InvalidateDependency:

public class SomeOtherController : Controller
{
	public ActionResult SomeRandomUserAction()
	{
		CacheDependencyManager.Instance.InvalidateDependency("ResourceKey");
		//...
	}
}

Obviously you could (should!) use dependency injection to pass the CacheDependencyManager to your controllers instead of accessing it directly, but the singleton will suffice for this example.

That’s All Folks

That’s the lot – now we can manually invalidate our slow resource whenever and from wherever we want, and our users can enjoy speedy cached resources the rest of the time!