Stop Agonising Over Tiny Details

I recently started a new side project using React and – having not used React in anger before – wanted to get off on the right foot.

And by “on the right foot” I apparently decided that I meant “with the perfect folder structure for my code”.

Obviously we can all agree that folder structure is the most important predictor of code quality (<sarcasm/>) so I spent hours agonising over whether to separate components from screens, whether to group all actions together in an actions folder, and a hundred other basically-irrelevant details. I read blog posts. I tried template project generators. I pored over a huge number of “sample” GitHub repos (every one different, of course).

To reiterate: this was for a new, greenfield side project. I spent hours of my limited time worrying about where to put all the code instead of…writing any of that code!


Hence this post. This post sits here as a reminder to myself to STOP AGONISING OVER UNIMPORTANT THINGS.

And you know what? I changed the entire structure after a week anyway.


Finding Freedom in “JavaScript Fatigue”

A lot of people have spoken about “JavaScript fatigue”: the idea that there are so many new frameworks, tools and ideas available to the average JavaScript developer that it’s impossible to keep up. I thought I’d add my opinion.

When I started learning JavaScript I would try to keep up with everything. I suspect now that I just didn’t know how much was out there, but it really felt like that was an achievable target. I would make a real effort to not only read up on new frameworks & libraries but to try them out: maybe a quick tutorial, maybe a few introductory posts, maybe even a small project.

Now, things have changed and it is obvious to most of us that there is no way you can invest that much time in every new thing that comes out.

For me, this is not a bad thing. In fact, I find it pretty liberating.

The whole situation reminds me a little bit of when I first joined Twitter. I was following maybe 20 people and I would make a real effort to read every single tweet. Ridiculous, right? But still I tried. Then I started following more people and then more people and with every extra piece of content it became less and less realistic to get through everything.

So I let go. I had to.

I couldn’t keep up with everything so I stopped trying to do the impossible and learned to let the mass of information wash over me. If something particularly catches my eye then I can read up on it but if I miss something? Who really cares?

Nowadays it feels the same with JavaScript frameworks. I may never have a chance to get my hands dirty with everything that comes out. In fact, I may never even hear of some of them. But I don’t worry any more about trying to keep up and if something really is the next big thing… well, I’m pretty sure I’ll hear about it soon enough.


Cleaning up Resources using MutationObserver

Cleaning up resources?

Let’s say you’ve written a shiny new component in your favorite framework and somewhere along the way you’ve allocated a resource that cannot be automatically cleaned up by the browser.

Maybe you attached an event handler to the resize event on the window.  Maybe you passed a callback to a global object.  Whatever the case, you need to tidy up those resources at some point.

Easy enough, right?  Put a dispose method on our object to clean up its leftovers and make sure it’s called before the object is discarded.
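
For example, a component like the one described above might look like this (a sketch; the `Widget` name and its resize handler are illustrative, not from the original post):

```javascript
// A component that allocates a resource the browser can't clean up for us:
// a resize handler attached to the window.
function Widget() {
  this._onResize = function () { /* redraw, recalculate, etc. */ };
  window.addEventListener('resize', this._onResize);
}

// dispose releases everything the constructor allocated.
Widget.prototype.dispose = function () {
  window.removeEventListener('resize', this._onResize);
};
```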

Problem solved?

Problem not quite solved

What if, for whatever reason, your component doesn’t have control over the parent?  You could trust that the user will do the right thing and call dispose for you but you can’t guarantee it.

As an alternative, can we automatically clean up our resources as soon as our containing DOM element is removed?

Yes.  Yes we can.

Using MutationObserver

The MutationObserver API (which has pretty great browser support) lets you listen to changes made to a DOM node and react to them.  We can use it here to perform our cleanup.

When we create an instance of MutationObserver we specify a callback that gets details of changes made to the parent.  If those changes include the removal of our target element then we can call dispose.

Here we are observing the parent of our target node, not the node itself (which would not be notified if removed).  We need to specify { childList: true } as the second parameter to be notified of additions and removals of child items.

Disposing the Observer

Finally, we need to make sure that the observer itself doesn’t cause a memory leak!  The observer is connected to the parentElement which (we assume) will still be hanging around, so we need to make sure that we disconnect it as part of disposal.

With everything pulled together the final version looks like this…

Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are obliged to have already defined any of your handlers on the connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to the incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.hub.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.hub.start();
}

//...some time later...
new MyFirstHandler();
//...and even later still...
new MySecondHandler();

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the connection.yourHubName.client object (normally just a POJO) with something that is aware of the available server methods.  The new client then exposes stubs to which SignalR can connect before our MySecondHandler can provide the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke that when the handler is invoked (with all args etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler, this);
}

SignalRClient.prototype.invokeHandler = function(name) {
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs =, 1);
		handler.apply(this, handlerArgs);
	}
};

SignalRClient.prototype.registerHandler = function(name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		get: function() { return getter; },
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
};

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!

Moq-ing Dynamics

This post serves as a reminder to myself…largely because I have wasted time tracking this down twice now!

When you are mocking an interface that returns a dynamic object, Moq is (as ever) your friend:

public interface ISomething {
    dynamic GetSomething();
}

Using the standard Moq syntax, you can very easily mock this call to return a real object…

    var theThing = new Mock<ISomething>();
    var mockInstance = new SomeMockClass();
    theThing.Setup(t => t.GetSomething()).Returns(mockInstance);

This is a pretty common pattern, but there’s an important gotcha to note: if the C# runtime binder can’t see the type SomeMockClass then when your target code tries to evaluate the return value you’re going to get an error…

    'object' does not something something about GetSomething()

But you aren’t returning an instance of object, are you?  So why can’t it work out what you’re aiming for?

Turns out that it’s pretty simple. For the dynamic binder to pick up your mock type, it has to be able to see the type. Is your mock type publicly visible? Thought not.

Make your private mock class publicly visible and suddenly the runtime binder knows what you’re talking about!

Autofac and Async Resources

I came across a problem on a recent WebAPI project where I wanted to use Autofac to inject some tenant information (i.e. derived per request) into the constructor of each controller:

public class MyController : ApiController
{
  public MyController(TenantInformation tenantInfo)
  {
    //...
  }
}

The problem was that the TenantInformation had to be sourced from an async API call

var tenantInfo = await tenantApi.GetTenantInfoAsync();

This means that you cannot implement something like the below to register the component

static void Main(string[] args)
{
  var builder = new ContainerBuilder();

  builder.Register(context => context.Resolve<TenantApi>().GetTenantInfoAsync());

  var container = builder.Build();
  var example = container.Resolve<ExampleController>();
  // --> throws 'Autofac.Core.Registration.ComponentNotRegisteredException'
}

On closer examination of container we can see that TenantInfo has not been registered; instead we have registered an instance of Task<TenantInfo>.  We can await this but not from a constructor.

One option that I briefly considered was importing the service directly into each controller and then getting the value within each async action method that required it.  This works but it feels messy and against the point of DI.  I want to be able to depend on my dependencies; not on the providers of my dependencies.

Using a Mediator

My solution was to create a mediator object representing an asynchronously-resolved component:

interface IAsyncRegistration
{
  Task Resolve(IComponentContext context);
}

class AsyncRegistration<T> : IAsyncRegistration
{
  private Func<IComponentContext, Task<T>> _resolve;

  public AsyncRegistration(Func<IComponentContext, Task<T>> resolve)
  {
    _resolve = resolve;
  }

  public bool Resolved { get; private set; }

  public T Value { get; private set; }

  public async Task Resolve(IComponentContext context)
  {
    this.Value = await _resolve(context);
    this.Resolved = true;
  }
}
This class wraps a resolution function for the type, the resolved value and a flag to indicate whether or not it has been resolved. It also implements a non-generic interface so we can find all instances of AsyncRegistration<T> regardless of T.

public static IRegistrationBuilder<T, SimpleActivatorData, SingleRegistrationStyle> RegisterAsync<T>(this ContainerBuilder builder, Func<IComponentContext, Task<T>> resolve)
{
  builder.RegisterInstance(new AsyncRegistration<T>(resolve))
    .AsSelf()
    .As<IAsyncRegistration>();

  return builder.Register<T>(context =>
  {
    var asyncRegistration = context.Resolve<AsyncRegistration<T>>();
    if (!asyncRegistration.Resolved)
      throw new DependencyResolutionException($"Async component {typeof(T).Name} has not been resolved");

    return asyncRegistration.Value;
  });
}
Next I created an extension method for ContainerBuilder that adds 2 registrations:

  1. A registration of AsyncRegistration<T>
  2. A registration of <T> that resolves the AsyncRegistration<T>, checks that it has been resolved and then returns the result

Finally I created an extension method that can be called on the container from anywhere within an async block that will resolve all of the values:

public static Task ResolveAsyncRegistrations(this IComponentContext context)
{
  var registrations = context.Resolve<IEnumerable<IAsyncRegistration>>();
  return Task.WhenAll(registrations.Select(r => r.Resolve(context)));
}

All together this means that the following will work and we can now inject asynchronously-resolved services into controller constructors:

var builder = new ContainerBuilder();
builder.RegisterAsync(context => context.Resolve<TenantApi>().GetTenantInfoAsync());

var container = builder.Build();

// an async block...
await container.ResolveAsyncRegistrations();

//...then some time later...
var tenantInfo = container.Resolve<TenantInfo>();

Plugging in to WebAPI

The easiest way to plug this in to the WebAPI pipeline is to create a message handler that

  1. Gets an IComponentContext for the current request
  2. awaits a call to the ResolveAsyncRegistrations extension method

public class AsyncRegistrationHandler : DelegatingHandler
{
  protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
  {
    var scope = request.GetDependencyScope().GetRequestLifetimeScope();
    await scope.ResolveAsyncRegistrations();

    return await base.SendAsync(request, cancellationToken);
  }
}

This system works for my particular scenario but there are a lot of possible situations where this would not work or would need extending.  The lifetime management of the dependencies, for example, is very rigid in this implementation and would need some work to be exposed properly.


Microsoft Band 2: Micro Review

I, like most of the human race, started 2016 with an absolute conviction to improve my fitness and I, like a decent percentage of people, decided that the best way to fool myself into following through was to invest in a fitness tracker.

I had looked into various options in the past but never really felt that there was a product out there that ticked all the boxes.  When Microsoft released the second iteration of their fitness band – promising sleep monitoring, GPS run tracking and more sensors than I know what to do with – I thought it was time to take a punt.

I have now been using the Microsoft Band 2 for two weeks and felt it was about time to share my thoughts.


My biggest concern with any fitness tracker was always that it would not be comfortable enough that I would actually wear it.  The first couple of days after switching from a traditional watch certainly felt a bit strange: the band is bulkier than anything I had worn before and would quite often get caught on cuffs, but it didn’t take long before it felt pretty comfortable.

The Microsoft Band has been designed so that you wear the “watch face” on the inside of your wrist, and once you adjust to this it feels very natural.  The text (being wider than it is tall) is almost impossible to read comfortably with the face on the outside of the wrist, and it takes very little time to adjust to the new position.

You can have the watch display on constantly but I have gone with the “rotate on” mode where you flick your wrist to light up the display.  This works well but has a little more delay than I’d like when quickly checking the time.


Sleep tracking was one of the key features for me as I have always been interested in the quality of my sleep.  The Microsoft Band promised to deliver in-depth monitoring as well as an “optimum wake up” alarm and so far I have been very impressed.  The app (running on Android) gives genuinely interesting feedback on how I have slept every morning, along with recommendations on how to improve the quality of my rest (e.g. “you are taking a long time to fall asleep; try avoiding mental stress late at night”).

The alarm appears to work very well; I have not been using it for too long but so far it seems to wake me up feeling more refreshed than a normal alarm would.  It also has the significant benefit of being silent – you are woken up by the band vibrating on your wrist – which has proven very popular with my wife when I have an early start!


Many years ago I treated myself to a Garmin GPS watch for running.  It was about the size of a small matchbox strapped to your arm and came with a chest strap to track your heart rate whilst running.  At the time it was very impressive and I probably shouldn’t be surprised that the Microsoft Band has improved upon 5-year-old technology, but the step up seems very marked.

The band tracks your heart rate, pace, distance (with or without GPS) and gives you up to 7 customisable data points on your wrist while you run.  It seems pretty accurate as these things go, and the feedback – both live and through the app after your run – is useful.  It integrates with various other apps like RunKeeper and MyFitnessPal as well, so your pace, distance and calorie burn records are still all replicated where they always were before.

A couple of tips for the first time you go out though: firstly, wait for the band to get a GPS lock before you hit the road.  It will tell you that it can pick up GPS as you run, but it never managed to do so over a quick 5k when I tried.  Secondly, I would recommend avoiding long sleeves when running.  The inside-of-the-wrist setup works very well if you’re in short sleeves, but trying to pull your sleeve up to view the numbers on the inside felt very uncomfortable when I was out running.

Smart Watch Features

Compared to things like the Fitbit or Jawbone offerings, the Microsoft Band has a number of smart-watch-esque features that seemed pretty tempting to me when I bought it.  You can have SMS, email, call, calendar and other notifications delivered to your wrist over Bluetooth and generally this works really well.  If you turn on “other notifications” it can get a little bit silly – on one occasion I received by-the-minute updates on the charging status of my phone – but you have the option to filter which apps are able to push notifications to the band so you can make it useful.  It’s a nice feature to have when there is no native support for things like WhatsApp or Slack: you can still get the notifications on your wrist; you just lose the ability to reply.

For things like calls, SMS and email the ability to send canned responses is surprisingly useful when sat in meetings.  You can customise the available replies and – if you really want – you can even type out custom responses with an on-band keyboard (though I wouldn’t recommend it for anything more than a word or three).

The only issue I have with the smart watch functionality is that it seems to make a real difference to the battery life.  It’s nice to have, but I bought this as a fitness tracker and find myself turning off the extra features to get a few extra hours of power.  That leads me on to…

Battery Life

Microsoft advertise the Band 2 as having 48h of battery life and whilst I wouldn’t say this is completely off the mark it does seem a little generous.  If I have the smart watch features turned on then I am lucky to get a day and a half of wear out of it.

With my phone I have fallen into the pattern of leaving it on charge overnight but the complication with the band is that I want to be wearing it overnight for the sleep tracking.  This removes the natural time that you would charge the device and makes the planning of charging a bit of a challenge.

What makes life a lot easier is that the band charges incredibly quickly.  It only takes around half an hour to get up to full charge from close to zero so I find myself falling into a pattern of plugging in the band whilst I get dressed in the morning.  Couple that with the odd ad-hoc charge at my desk and I’ve not had any real down time.  As a system it’s just about working, but it does feel like I may be missing out on some of the features in the interest of keeping the thing running.


Overall I’m very happy with the band and would gladly recommend it.  There are a couple of rough edges to be smoothed out but they don’t take away from the core functionality of a fitness band and for that specific job it is doing everything I can ask of it.

The integration with other apps is nicely done and works very well.  The API for the cloud data store looks promising as well, though that is an investigation for another day…