Post-Logout Redirect with ASP.NET Core and ADFS 2016

Redirect after logout

When a user logs out from your app you have the option to log them out of the provider as well by redirecting the browser to the logout endpoint. By default this means that the user will end up sat on your provider’s “You have signed out” page – not brilliant.

You can, however, tell your provider to redirect back to your app once they’re done with logout by specifying a post_logout_redirect_uri.

For ASP.NET Core Identity you can specify this redirection as a parameter on the SignOutResult.

[Route("auth")]
public class AuthController : Controller
{
  [HttpPost("logout")]
  public IActionResult Logout() => new SignOutResult(
    OpenIdConnectDefaults.AuthenticationScheme,
    new AuthenticationProperties { RedirectUri = Url.Action(nameof(LogoutSuccess))});

  [HttpGet("logoutSuccess")]
  public IActionResult LogoutSuccess() => View();
}

Useless ADFS error messages

For ADFS 2016 you need to do a little more than just set the redirect URL. At first glance it looks like the above is working – the parameter shows up in the ADFS URL – but ADFS will silently ignore it and your user will sit forever on the ADFS sign-out page.

Digging into the event logs you will find the following error message:

The specified redirect URL did not match any of the OAuth client’s redirect URIs. The logout was successful but the client will not be redirected

If you’re unlucky, this wonderfully-misleading error message can take you down a rabbit hole of further configuration. It’s a shame, then, that no-one thought to expose something a little more accurate:

That redirect looks good but you need to specify id_token_hint or we’ll ignore you

Thanks ADFS!

Sending ID Token Hint

To be fair to ADFS, sending an id_token_hint is recommended by the spec. This parameter needs to be set to the id_token that was sent to your app when the user first logged in; provide this value and ADFS will happily redirect back to your app.

The only problem here is that you probably don’t still have that id_token. ASP.NET Core uses the incoming identity information to create an auth cookie and then (by default) discards the tokens.

Happily, the OpenIdConnectOptions exposes a SaveTokens property to persist the received token to the auth cookie. Even better: the OIDC logout mechanism will automatically pick this up once enabled so you should be good to go as soon as you set the flag:

public class Startup {
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddAuthentication()
      .AddOpenIdConnect(options => {
        options.SaveTokens = true;
        //...set other options
      });

    //...other service config
  }
}
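
You shouldn’t need to touch the token yourself for logout – the sign-out flow grabs it automatically – but to illustrate what SaveTokens is doing, a hypothetical controller could pull the stored token back out of the authentication session using GetTokenAsync (from the Microsoft.AspNetCore.Authentication namespace):

public class TokenController : Controller
{
  [HttpGet("idtoken")]
  public async Task<IActionResult> ShowIdToken()
  {
    //"id_token" is the name under which the OIDC handler saves the token
    var idToken = await HttpContext.GetTokenAsync("id_token");
    return Content(idToken ?? "no id_token stored");
  }
}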

This does have one important downside though: you’re now storing much more information in your auth cookie and that adds extra data to every client request, maybe even doubling the cookie size.

Whether or not this is a problem for your app is another decision – at least your logout redirect works now!


Generic behaviour in ASP.NET Core with Action Filters

Everyone hates copy/pasting code, and Action Filters in ASP.NET Core MVC offer a great way to avoid filling your controllers with boilerplate.

Filters offer you entry points into the execution pipeline for an action where you can examine the incoming parameters or generated results and modify them to suit your needs.

Here are a couple of examples of how this can help.

Treat a null result as a 404 Not Found

By default an ASP.NET Core controller will return a 204 No Content response if you return null from an action:

[Route("api/example")]
public class ExampleApiController : Controller
{
  [HttpGet("")]
  public string GetExample()
  {
    return null;
  }
}

In some cases, however, you might not want to treat null as No Content. If your API is looking up a resource by ID, for example, then a 404 Not Found response would be more useful:

[HttpGet("{id}")]
public MyDtoObject GetById(int id)
{
  if (!_store.ContainsId(id))
    return null; //should return 404

  //...
}

We can use an action filter to automatically replace the null result with a NotFoundResult:

//filter
public class NullAsNotFoundAttribute : ActionFilterAttribute
{
  public override void OnActionExecuted(ActionExecutedContext context)
  {
    var objectResult = context.Result as ObjectResult;

    if (objectResult?.Value == null)
      context.Result = new NotFoundResult();
  }
}

//controller
[HttpGet("{id}")]
[NullAsNotFound]
public MyDtoObject GetById(int id)
{
  //...
}

Here we override OnActionExecuted to invoke our code after the action method has generated a result but before that result is processed.

If the generated result is an ObjectResult with a null value then we replace it with an empty NotFoundResult and our controller will now return a 404 response.

Treat invalid models as a 400 Bad Request

It is very common to see the following pattern in MVC controllers:

[HttpPost("")]
public IActionResult Create(MyModel model)
{
  if (!ModelState.IsValid)
    return new BadRequestObjectResult(ModelState); //or a View, or other validation behaviour

  //...eventually return created model
  return Ok(model);
}

This has 2 downsides:
* Boilerplate code in every action that needs to validate
* Return type must be IActionResult to accommodate 2 result types

The second point is fairly minor but worth noting. By exposing IActionResult instead of the concrete type we lose metadata about the action.
That metadata is useful for things like generating Swagger docs, and losing it can mean you need to decorate the method with response types (though this is improved in ASP.NET Core 2.1 with ActionResult<T>).

In any case, we can make this behaviour generic by moving the validation check into another action filter:

public class InvalidModelAsBadRequestAttribute : ActionFilterAttribute
{
  public override void OnActionExecuting(ActionExecutingContext context)
  {
    if (!context.ModelState.IsValid)
      context.Result = new BadRequestObjectResult(context.ModelState);
  }
}

This time we are overriding OnActionExecuting instead of OnActionExecuted, so our code runs before the controller action. Because we can tell that the model is invalid before hitting the controller, we can skip the action entirely and replace the result with a 400.

Other Possibilities

Wherever you find yourself writing duplicate code in many actions it is worth considering whether it can be pulled out into a filter (or middleware) to keep your controllers clean and focussed on their intent.
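
And if a filter should apply to every action you don’t even need the attributes – the same filter can be registered globally (shown here with the InvalidModelAsBadRequestAttribute from above):

public class Startup {
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddMvc(options => {
      //runs for every action in the app; no per-action attributes needed
      options.Filters.Add(new InvalidModelAsBadRequestAttribute());
    });
  }
}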

Finding Freedom in “JavaScript Fatigue”

A lot of people have spoken about “JavaScript fatigue”: the idea that there are so many new frameworks, tools and ideas available to the average JavaScript developer that it’s impossible to keep up. I thought I’d add my opinion.

When I started learning JavaScript it used to be that I would try to keep up with everything. I suspect now that I just didn’t know how much was out there, but it really felt like that was an achievable target. I would make a real effort to not only read up on new frameworks & libraries but to try them out: maybe a quick tutorial, maybe a few introductory posts, maybe even a small project.

Now, things have changed and it is obvious to most of us that there is no way you can invest that much time in every new thing that comes out.

For me, this is not a bad thing. In fact, I find it pretty liberating.

The whole situation reminds me a little bit of when I first joined Twitter. I was following maybe 20 people and I would make a real effort to read every single tweet. Ridiculous, right? But still I tried. Then I started following more people and then more people and with every extra piece of content it became less and less realistic to get through everything.

So I let go. I had to.

I couldn’t keep up with everything so I stopped trying to do the impossible and learned to let the mass of information wash over me. If something particularly catches my eye then I can read up on it but if I miss something? Who really cares?

Nowadays it feels the same with JavaScript frameworks. I may never have a chance to get my hands dirty with everything that comes out. In fact, I may never even hear of some of them. But I don’t worry any more about trying to keep up and if something really is the next big thing… well, I’m pretty sure I’ll hear about it soon enough.

Cleaning up Resources using MutationObserver

Cleaning up resources?

Let’s say you’ve written a shiny new component in your favorite framework and somewhere along the way you’ve allocated a resource that cannot be automatically cleaned up by the browser.

Maybe you attached an event handler to the resize event on the window.  Maybe you passed a callback to a global object.  Whatever the case, you need to tidy up those resources at some point.

Easy enough, right?  Put a dispose method on our object to clean up its leftovers and make sure it’s called before the object is discarded.

Problem solved?

Problem not quite solved

What if, for whatever reason, your component doesn’t have control over the parent?  You could trust that the user will do the right thing and call dispose for you but you can’t guarantee it.

As an alternative, can we automatically clean up our resources as soon as our containing DOM element is removed?

Yes.  Yes we can.

Using MutationObserver

The MutationObserver API (which has pretty great browser support) lets you listen to changes made to a DOM node and react to them.  We can use it here to perform our cleanup.

When we create an instance of MutationObserver we specify a callback that gets details of changes made to the parent.  If those changes include the removal of our target element then we can call dispose.
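
A sketch of that setup might look like this (targetElement and dispose stand in for our component’s root node and the cleanup method described above):

var observer = new MutationObserver(function (mutations) {
  var wasRemoved = mutations.some(function (mutation) {
    return Array.prototype.slice.call(mutation.removedNodes)
      .indexOf(targetElement) !== -1;
  });

  if (wasRemoved) {
    dispose();
  }
});

//watch the parent for additions/removals of its children
observer.observe(targetElement.parentElement, { childList: true });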

Here we are observing the parent of our target node, not the node itself (which would not be notified if removed).  We need to specify { childList: true } as the second parameter to be notified of additions and removals of child items.

Disposing the Observer

Finally, we need to make sure that the observer itself doesn’t cause a memory leak!  The observer is connected to the parentElement which (we assume) will still be hanging around, so we need to make sure that we disconnect it as part of disposal.

With everything pulled together, a sketch of the final version (wrapped in a hypothetical watchForRemoval helper) looks like this…
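
function watchForRemoval(targetElement, dispose) {
  var observer = new MutationObserver(function (mutations) {
    var wasRemoved = mutations.some(function (mutation) {
      return Array.prototype.slice.call(mutation.removedNodes)
        .indexOf(targetElement) !== -1;
    });

    if (wasRemoved) {
      //disconnect first so the observer itself can be collected
      observer.disconnect();
      dispose();
    }
  });

  observer.observe(targetElement.parentElement, { childList: true });
}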

Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are obliged to have already defined any of your handlers on the connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to the incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.myHub1.connection.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.myHub2.connection.start();
}

//...some time later...
new MyFirstHandler()
//...and even later still...
new MySecondHandler()

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the connection.yourHubName.client object (normally just a POJO) with something that already knows the names of the available handlers.  The new client exposes stubs to which SignalR can bind, even before MySecondHandler provides the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke that when the handler is invoked (with all args etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler.bind(this));
}

SignalRClient.prototype.invokeHandler = function(name) {
	//invoke the real handler - if one has been assigned yet - with the original arguments
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs = Array.prototype.slice.call(arguments, 1);
		handler.apply(this, handlerArgs);
	}
};

SignalRClient.prototype.registerHandler = function(name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		//always hand SignalR the same stub, whether or not the real handler exists yet
		get: function() { return getter; },
		//stash the real handler for the stub to invoke later
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
};

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!
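
For example, the following (hypothetical) sequence now works regardless of ordering:

//register the stub clients before anything starts the connection
$.connection.myHub1.client = new SignalRClient(['method1', 'otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

//the first handler attaches and starts the connection...
$.connection.myHub1.client.method1 = function () { /*...*/ };
$.connection.hub.start();

//...and the second can still attach its handler afterwards
$.connection.myHub2.client.method2 = function () { /*...*/ };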

Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points in which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

…and we’re done!

Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the users home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build so seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the Generic Test Fixture nUnit feature to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture(typeof(TDriver)) attribute for each browser. This instructs NUnit to instantiate and run the class for each of the specified generic types, which in turn means any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
[TestFixture(typeof(SafariDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

A part of the cause was that the “record” part of the feature does not always select the most sensible selector to identify the element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but in the case where they did change we still did not want to have to update a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password] then click Submit) we are able to encapsulate them on the Page class instead of private methods within the test fixture – see the sketch below.
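
For example, a login helper on the LoginPage class might look like this (the submit-button selector is illustrative):

public void LogInAs(string username, string password)
{
	UsernameInput().SendKeys(username);
	PasswordInput().SendKeys(password);
	_driver.FindElement(By.CssSelector("button[type=submit]")).Click();
}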

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environment settings in code to improve reuse and readability. This is only a minor point but we did not want to have any URLs, usernames or configuration options hard coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
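
The backing class is nothing clever – a minimal sketch of the sort of structure we mean (all names and values are illustrative):

public static class TestEnvironment
{
	public static readonly Uri BaseUrl = new Uri("https://test.example.com");

	public static class Users
	{
		public static readonly TestUser[] AdminUsers =
		{
			new TestUser { Username = "admin1", Password = "********" }
		};
	}
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}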

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files, chosen at compile time based on the DEBUG symbol.
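
Roughly speaking, something like this (the file names are made up):

public static class EnvironmentConfig
{
#if DEBUG
	private const string SettingsFile = "environment.local.config";
#else
	private const string SettingsFile = "environment.test.config";
#endif

	//load the URLs, users and other changeable values from SettingsFile
	//...
}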

Other Gotchas

  • The 64-bit IE driver is incredibly slow! Uninstall it and install the 32-bit one
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these notes will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).