ASP.NET Core Feature Flag TagHelper

Releasing stuff is dangerous: you might break things, you might annoy your users or you could screw up in any number of entertaining ways.

Feature flags are a great way to get functionality into production without quite so much risk. You can release your new feature to only a small subset of your users and then roll it out once you’re happy that things aren’t on fire.

The way you define flags will depend on the requirements for roll out – sometimes a configuration setting is sufficient; sometimes you’ll need per-user settings or something more complex. That’s outside the scope of this article though – I’ll leave that part up to you.

Once you have your flags defined you want to start modifying content based on those flags. You could do that with a bunch of if statements, but in Razor that gets messy fast.

Instead, wouldn’t it be nice to wrap your new stuff in a special tag?

<feature flag="MyCoolNewThing">
  <!-- cool content here -->
</feature>

Or if you want to display something only for users without the new feature?

<feature flag="MyCoolNewThing" disabled>
  Click here to enable my new cool thing!
  <button>Enable Now!</button>
</feature>

Tag Helpers make this easy!

The TagHelper Class

Tag helpers allow you to write server-side code that manipulates the DOM during the render of a Razor view. They can accept dependency-injected services (in the scope of the current request) and can access attributes and child content of their Razor element.

They are implemented by extending the TagHelper class and are used in Razor views by converting the class name (minus the TagHelper suffix) to a kebab-cased equivalent, e.g. MyGreatNewTagHelper would be available via the my-great-new tag.
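
As a quick illustration of the convention (MyGreatNewTagHelper is a made-up example):

//usable as <my-great-new> in Razor: the "TagHelper" suffix is dropped
//and the remaining class name is kebab-cased
public class MyGreatNewTagHelper : TagHelper
{
  public override void Process(TagHelperContext context, TagHelperOutput output)
  {
    //manipulate the output here
  }
}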

The class then modifies the generated DOM by overriding either the Process or ProcessAsync methods. These methods are passed context objects to allow both interrogation of the current content and modification of the output.

public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
{
  //...
}

Hiding Child Content

In our case, the “modification” we want is really simple: we want to optionally hide all child content. This is directly supported via the TagHelperOutput.SuppressContent() method, so we can hide everything with a one-line method:

public override async Task ProcessAsync(
  TagHelperContext context, 
  TagHelperOutput output)
{
  output.SuppressContent();
}

Note: we don’t need the ProcessAsync variant yet as we have no async code, but we’ll be adding some shortly

Optionally Hiding Child Content

We only want to hide the child content if a feature flag is disabled, so we need to know the state of the flag. There are a lot of ways that feature flag settings could be implemented (configuration settings, per-user flags, etc.) so we are going to abstract all of that away behind an interface:

public interface IFeatureFlagProvider {
  Task<bool> IsEnabled(FeatureFlag featureFlag);
}

public enum FeatureFlag {
  Unknown,
  MyCoolNewThing,
  AnotherAwesomeFeature,
  Etc
}

The IFeatureFlagProvider accepts an enum value identifying a feature and asynchronously returns a boolean indicating whether or not the feature is enabled. Any complexity around how you determine the availability of the feature can happily hide behind this facade.

Note: I’m using an enum to define my features but strings are a valid alternative. I prefer enums because you can find all references easily and if you’re adding new features then you’re going to be changing code anyway!
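
To make that concrete, here is a minimal sketch of a provider that reads flags from IConfiguration – the “FeatureFlags” section name and the registration line are illustrative rather than prescriptive:

//sketch only: reads flags from configuration, e.g. an appsettings.json
//section such as "FeatureFlags": { "MyCoolNewThing": true }
public class ConfigurationFeatureFlagProvider : IFeatureFlagProvider
{
  private readonly IConfiguration _configuration;

  public ConfigurationFeatureFlagProvider(IConfiguration configuration)
  {
    _configuration = configuration;
  }

  public Task<bool> IsEnabled(FeatureFlag featureFlag)
  {
    //missing values resolve to false, so unknown flags stay off
    var isEnabled = _configuration.GetValue<bool>($"FeatureFlags:{featureFlag}");
    return Task.FromResult(isEnabled);
  }
}

//registered in Startup.ConfigureServices:
//services.AddSingleton<IFeatureFlagProvider, ConfigurationFeatureFlagProvider>();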

As I said above, tag helpers can accept injected dependencies, so with an implementation of IFeatureFlagProvider registered we can use it in our helper. We’re also going to add a Flag property which will be set on each instance of the tag.

public class FeatureTagHelper : TagHelper
{
  private readonly IFeatureFlagProvider _featureFlagProvider;

  public FeatureTagHelper(IFeatureFlagProvider featureFlagProvider)
  {
    _featureFlagProvider = featureFlagProvider;
  }

  public FeatureFlag Flag { get; set; }

  public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
  {
    var isFeatureEnabled = await _featureFlagProvider.IsEnabled(this.Flag);
    if (!isFeatureEnabled)
      output.SuppressContent();
  }
}

Support Disabled State

We may want to show content only when a feature is not enabled (e.g. an “enable now” message) so we want to support the inverse. One option would be to create a second tag helper, but I felt that a disabled flag made more sense in Razor:

<feature flag="FeatureName" disabled>...</feature>

To achieve this we can use the supplied TagHelperContext to look for a disabled attribute and then combine this with our isFeatureEnabled condition from earlier:

public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
{
  var isFeatureEnabled = await _featureFlagProvider.IsEnabled(this.Flag);
  var isTagDisabled = context.AllAttributes.Any(a => a.Name?.ToLowerInvariant() == "disabled");

  //show when exactly one of "feature enabled" / "tag disabled" applies (XOR)
  var showContent = isFeatureEnabled != isTagDisabled;
  if (!showContent)
    output.SuppressOutput(); //unlike SuppressContent, also removes the <feature> tag itself
}

Using the TagHelper

Now that we’ve created the tag helper we can start using it in our Razor views.

Before it becomes available we need to register it in the Views/_ViewImports.cshtml file that is created as part of the ASP.NET Core templates.

You can either add all tag helpers in your project or add each one individually.

@* add all tag helpers in assembly MyProject *@
@addTagHelper *, MyProject
@* or add them individually *@
@addTagHelper MyProject.TagHelpers.FeatureTagHelper, MyProject

Once that’s done you can use the new tags in any of your Razor pages or views as below:

<feature flag="MyCoolNewThing">
  <!--
    content will only be displayed if FeatureFlag.MyCoolNewThing is enabled
    for the current user
  -->
  <h1>Cool New Thing</h1>
</feature>

<feature flag="MyCoolNewThing" disabled>
  <!--
    content will only be displayed if FeatureFlag.MyCoolNewThing is disabled
    for the current user
  -->
  Click here to enable my new cool thing!
  <button>Enable Now!</button>
</feature>

Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are obliged to have already defined any of your handlers on the $.connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to the incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.myHub1.connection.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.myHub2.connection.start();
}

//...some time later...
new MyFirstHandler()
//...and even later still...
new MySecondHandler()

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the $.connection.yourHubName.client object (normally just a POJO) with something that is aware of the available server methods.  The new client then exposes stubs to which SignalR can connect before our MySecondHandler can provide the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke that when the handler is invoked (with all args etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler.bind(this));
}

SignalRClient.prototype.invokeHandler = function(name) {
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs = Array.prototype.slice.call(arguments, 1);
		handler.apply(this, handlerArgs);
	}
}

SignalRClient.prototype.registerHandler = function(name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		get: function() { return getter },
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
}

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!

Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points in which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

…and we’re done!

Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the users home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build so seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how best to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the generic test fixture feature in NUnit to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture attribute for each browser’s driver type. This instructs NUnit to instantiate and run the class once per specified generic type, which in turn means any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
[TestFixture(typeof(SafariDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}
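
The DriverFactory above is just a helper of our own; a minimal sketch might look like this (a real version would also configure timeouts, window size, locale and so on):

//sketch only: creates the requested driver type via its default constructor
public class DriverFactory
{
	public static readonly DriverFactory Instance = new DriverFactory();

	public IWebDriver CreateWebDriver<TWebDriver>()
		where TWebDriver : IWebDriver
	{
		//all the standard drivers expose a parameterless constructor
		return (IWebDriver)Activator.CreateInstance(typeof(TWebDriver));
	}
}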

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

A part of the cause was that the “record” part of the feature does not always select the most sensible selector to identify the element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but in the case where they did change we still did not want to have to update a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password] then click Submit) we are able to encapsulate them on the Page class instead of in private methods within the test fixture – see the sketch below.
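
For example, a hypothetical LogInAs helper on the LoginPage class (the submit-button selector is illustrative):

//hypothetical helper encapsulating the repeated login steps
public void LogInAs(string username, string password)
{
	UsernameInput().SendKeys(username);
	PasswordInput().SendKeys(password);
	_driver.FindElement(By.CssSelector("button[type='submit']")).Click();
}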

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environmental variables in code to improve reuse and readability. This is only a minor point but we did not want to have any URLs, usernames or configuration options hard coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
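
Behind that expression sits nothing more complicated than a few plain classes; a rough sketch (everything beyond the property chain above is illustrative):

//sketch only: the shape implied by TestEnvironment.Users.AdminUsers[0].Username
public static class TestEnvironment
{
	public static readonly UserSettings Users = new UserSettings();
}

public class UserSettings
{
	public IList<TestUser> AdminUsers { get; set; }
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}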

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files chosen via an #if DEBUG conditional compilation block.
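
Something along these lines, with illustrative file names:

//illustrative: pick the environment config per build flavour
#if DEBUG
	private const string EnvironmentConfigFile = "environment.debug.config";
#else
	private const string EnvironmentConfigFile = "environment.release.config";
#endif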

Other Gotchas

  • The 64-bit IE driver for Selenium WebDriver is incredibly slow! Uninstall it and install the 32-bit one
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these observations will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).

Excluding Current RouteData from UrlHelper

By default, the MVC UrlHelper will include all of the route values for the current route in its calculations.

This means that unless you explicitly override them you can get situations like this:

<!-- on page /Person/View/1 -->
<a href="@Url.Action("View", "Pet")">View Animal</a>
<!-- URL resolves to /Pet/View/1 -->

Disaster – the ID from the current request has been included in the new URL!

In some cases this can be very useful – it is the reason that you don’t need to specify a controller if you are already within a view on the same controller – but it can be very annoying when you want to create a URL in isolation (see here and here).

Using the Isolate Extension

To get around this problem I have written an Isolate extension method that can be used as below:

<!-- on page /Person/View/1 -->
<a href="@Url.Isolate(u => u.Action("View", "Pet"))">View Animal</a>
<!-- URL resolves to /Pet/View -->

The extension works by temporarily removing all of the existing route values from the specified instance of UrlHelper, executing the action, and then re-inserting the original route values before returning the result.

public static TResult Isolate<TResult>(this UrlHelper urlHelper, Func<UrlHelper, TResult> action)
{
	var currentData = urlHelper.RequestContext.RouteData.Values.ToDictionary(kvp => kvp.Key);
	urlHelper.RequestContext.RouteData.Values.Clear();
	try
	{
		return action(urlHelper);
	}
	finally
	{
		foreach (var kvp in currentData)
			urlHelper.RequestContext.RouteData.Values.Add(kvp.Key, kvp.Value.Value);
	}
}

It’s a basic solution and there are some (predictable) scenarios where it will fall down, but it solved my immediate problem without adding too much bloat to the code.

ProxyApi & Anti-Forgery Tokens

Anti-Forgery Tokens?

Good question.  Anti-forgery tokens are a recommended way of preventing one of the OWASP Top Ten security vulnerabilities: Cross Site Request Forgery, or CSRF.

CSRF works on the basis that once you have logged into YourSite using your browser, any request to that domain will share the authentication information.  Normally, requests to YourSite would come from YourSite, but other developers are perfectly capable of writing some code on their site that tries to make a request to YourSite to do something evil.

Though there are a few ways of preventing or reducing the risk of CSRF attacks, anti-forgery tokens are the currently recommended approach.

So how do they work?  Whenever the server serves up a page that may result in a submission (e.g. a page that contains a form) it sets a randomly-generated cookie value.  The client must then include the random value in both a hidden form field and the request cookie; otherwise, the server will reject the request as invalid.  Attackers will not be able to read the cookie value; therefore they cannot include it as a form field and so their attack fails.

ASP.NET MVC Implementation

MVC makes it very easy to implement anti-forgery tokens.  Very easy.

Step 1: add an attribute to your action or controller

[ValidateAntiForgeryToken]
public ActionResult DoSomething()
{
    //…
}

Step 2: include the following within the form on the page

@Html.AntiForgeryToken()

Unfortunately WebAPI does not have a similar implementation, but there are thankfully a lot of examples out there (e.g. Kamranicus’ example & the MVC SPA template) of how to achieve similar functionality that works with WebAPI.

So how can we adapt these ideas to work with ProxyApi?

ProxyApi Implementation

The intention of this library is to allow you to quickly create proxy classes for WebAPI methods; because it is expected to be running in the browser (it generates JavaScript, after all) it will be using cookie authentication and should therefore consider CSRF.

Ideally, the developer using the library doesn’t want to do anything more than they do for their MVC implementation, so it would seem like that is a good convention to follow.

Setting The Token

As with MVC, setting the cookie token and inserting the hidden form value onto the page is done by calling the Html.AntiForgeryToken() method in your view.  This is deliberately identical to the MVC method to keep things as consistent as possible.

Decorating the Controller

Following the same pattern as MVC and the examples listed above, the ProxyApi implementation uses an attribute that can be specified against a controller or an action:

[ValidateHttpAntiForgeryToken]
public void PostSomething(Something data)
{
    //...
}

This attribute is an extension of AuthorizationFilterAttribute that uses the cookie and hidden-field tokens to validate the request.  The second value – the one that would normally be included as a hidden form field – is instead expected as a custom header value: X-RequestVerificationToken.  This approach avoids complications in combining the ProxyApi automatically-generated POST data with a custom form field.
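
The real source is on GitHub, but a filter along these lines – based on the public examples linked above rather than ProxyApi’s exact code – shows the shape of it:

//sketch only (needs System.Web.Helpers for AntiForgery and
//System.Web.Http.Filters for AuthorizationFilterAttribute)
public class ValidateHttpAntiForgeryTokenAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        var request = actionContext.Request;

        //the cookie half of the token pair
        var cookieToken = request.Headers.GetCookies()
            .SelectMany(c => c.Cookies)
            .Where(c => c.Name == AntiForgeryConfig.CookieName)
            .Select(c => c.Value)
            .FirstOrDefault();

        //the form half, sent as a custom header instead of a form field
        IEnumerable<string> headerValues;
        var formToken = request.Headers.TryGetValues("X-RequestVerificationToken", out headerValues)
            ? headerValues.FirstOrDefault()
            : null;

        try
        {
            AntiForgery.Validate(cookieToken, formToken);
        }
        catch (HttpAntiForgeryException)
        {
            actionContext.Response = request.CreateErrorResponse(
                HttpStatusCode.Forbidden, "Anti-forgery token validation failed");
        }
    }
}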

Because WebAPI is often used for non-browser-based access, the attribute also allows you to optionally specify any types of authentication (e.g. Basic) that should be excluded from the verification process.

Passing the Hidden Token to the Server

The JavaScript implementation of the proxy objects allows you to specify either a concrete value or an accessor function to get the form field value:

$.proxies.myController.antiForgeryToken = "1234abc";

// or

$.proxies.myController.antiForgeryToken = function() { 
    return $("#someField").val();
};

By default, this function will use jQuery to locate the hidden input generated by the Html.AntiForgeryToken() method and use its value.

Summary

Overall, this implementation is nothing groundbreaking.  It borrows heavily from the SPA MVC template and from other examples online, but it does allow ProxyApi to prevent CSRF attacks with minimal change to the code for developers.

The source code for this is available on GitHub, and the updated package is available for download via nuget.

Exception Handling for Web API Controller Constructors

The generally-recommended best practice for exception handling within Web API is to use exception filters.  Once registered, these classes sit in the processing pipeline for a message and can react to exceptions that are thrown by actions.

A Problem

The issue with the statement above is the qualifier “by actions”.  While an exception filter will correctly handle any errors thrown from within an action method, it will be bypassed by exceptions thrown during the creation of the controller.

These exceptions include two categories of error: exceptions thrown from within the controller constructor, and a failure to locate or invoke an appropriate constructor.  The latter problem is, for me, the more common – I use the Autofac MVC & WebAPI integrations (highly recommended, by the way) to handle dependency injection in controllers, and there are quite often scenarios where one of the dependencies is not available.  In these cases I really need a way to catch and to nicely handle those exceptions.

One way in which we can achieve this lofty aim is by creating a custom implementation of IHttpControllerActivator.

The Controller Activator

The IHttpControllerActivator interface only contains one method:

IHttpController Create(
	HttpRequestMessage request,
	HttpControllerDescriptor controllerDescriptor,
	Type controllerType
)

This method is responsible for creating and returning an instance of a specified controller before the API action is invoked.  This is perfect for our scenario because it is a very specific responsibility; we need a custom implementation, but we will not have to worry about how the controller type is selected, how the action is selected or how it is invoked.

Implementing a Decorator

To be honest, we don’t really want to get into how the controller is actually created – we just want to wrap the creation in a try { … } catch { … } – so instead of writing our own activator from scratch we can create a decorator around the existing implementation.

public class ExceptionHandlingControllerActivator : IHttpControllerActivator
{
	private IHttpControllerActivator _concreteActivator;

	public ExceptionHandlingControllerActivator(IHttpControllerActivator concreteActivator)
	{
		_concreteActivator = concreteActivator;
	}
		
	public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
	{
		try
		{
			return _concreteActivator.Create(request, controllerDescriptor, controllerType);
		}
		catch
		{
			//custom handler logic here, then rethrow
			//(or return a substitute controller)
			throw;
		}
	}
}

This simple class is constructed with a concrete instance of IHttpControllerActivator, then calls down to that concrete instance within a try/catch block.  We can then implement our custom exception handling in the catch.
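
For example, the catch could convert the failure into a clean 500 response instead of letting the raw activation exception escape – something along these lines:

//illustrative catch body for the decorator above
catch (Exception ex)
{
	throw new HttpResponseException(request.CreateErrorResponse(
		HttpStatusCode.InternalServerError,
		"Failed to create controller '" + controllerType.Name + "'",
		ex));
}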

Now all we need to do is replace the default activator with our one.

Hooking It Up

We need to tell Web API to use our new controller activator instead of the default, and (as with so much else in Web API) we do this through the HttpConfiguration object; specifically, the Services property.

This comes with a convenient Replace method that allows us to insert our implementation in place of the default version.  We also want to pass that default into the constructor of our class, so we end up with something like this:

GlobalConfiguration.Configuration.Services.Replace(typeof(IHttpControllerActivator), 
	new ExceptionHandlingControllerActivator(
		GlobalConfiguration.Configuration.Services.GetHttpControllerActivator()
	)
);

It looks a little messy, but it’s not complicated: grab a reference to the current activator, pass it into our decorator, then pass that into the Replace method.

Simple!