Finding Freedom in “JavaScript Fatigue”

A lot of people have spoken about “JavaScript fatigue”: the idea that there are so many new frameworks, tools and ideas available to the average JavaScript developer that it’s impossible to keep up. I thought I’d add my opinion.

When I started learning JavaScript, I would try to keep up with everything. I suspect now that I just didn’t know how much was out there, but it really felt like an achievable target. I would make a real effort not only to read up on new frameworks & libraries but to try them out: maybe a quick tutorial, maybe a few introductory posts, maybe even a small project.

Now, things have changed and it is obvious to most of us that there is no way you can invest that much time in every new thing that comes out.

For me, this is not a bad thing. In fact, I find it pretty liberating.

The whole situation reminds me a little bit of when I first joined Twitter. I was following maybe 20 people and I would make a real effort to read every single tweet. Ridiculous, right? But still I tried. Then I started following more people and then more people and with every extra piece of content it became less and less realistic to get through everything.

So I let go. I had to.

I couldn’t keep up with everything so I stopped trying to do the impossible and learned to let the mass of information wash over me. If something particularly catches my eye then I can read up on it but if I miss something? Who really cares?

Nowadays it feels the same with JavaScript frameworks. I may never have a chance to get my hands dirty with everything that comes out. In fact, I may never even hear of some of them. But I don’t worry any more about trying to keep up and if something really is the next big thing… well, I’m pretty sure I’ll hear about it soon enough.

Cleaning up Resources using MutationObserver

Cleaning up resources?

Let’s say you’ve written a shiny new component in your favorite framework and somewhere along the way you’ve allocated a resource that cannot be automatically cleaned up by the browser.

Maybe you attached an event handler to the resize event on the window.  Maybe you passed a callback to a global object.  Whatever the case, you need to tidy up those resources at some point.

Easy enough, right?  Put a dispose method on our object to clean up its leftovers and make sure it’s called before the object is discarded.

Problem solved?

Problem not quite solved

What if, for whatever reason, your component doesn’t have control over the parent?  You could trust that the user will do the right thing and call dispose for you, but you can’t guarantee it.

As an alternative, can we automatically clean up our resources as soon as our containing DOM element is removed?

Yes.  Yes we can.

Using MutationObserver

The MutationObserver API (which has pretty great browser support) lets you listen to changes made to a DOM node and react to them.  We can use it here to perform our cleanup.

When we create an instance of MutationObserver we specify a callback that gets details of changes made to the parent.  If those changes include the removal of our target element then we can call dispose.
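
Something along these lines should work (a sketch, where element is the component’s root node and dispose is its cleanup method):

var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (mutation) {
    //was our element one of the removed children?
    for (var i = 0; i < mutation.removedNodes.length; i++) {
      if (mutation.removedNodes[i] === element) {
        dispose();
      }
    }
  });
});

observer.observe(element.parentElement, { childList: true });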

Here we are observing the parent of our target node, not the node itself (which would not be notified if removed).  We need to specify { childList: true } as the second parameter to be notified of additions and removals of child items.

Disposing the Observer

Finally, we need to make sure that the observer itself doesn’t cause a memory leak!  The observer is connected to the parentElement which (we assume) will still be hanging around, so we need to make sure that we disconnect it as part of disposal.

With everything pulled together, a sketch of the final version might look like this (watchForRemoval is a made-up name; element and dispose again stand in for your component’s own members)…
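
function watchForRemoval(element, dispose) {
  var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
      for (var i = 0; i < mutation.removedNodes.length; i++) {
        if (mutation.removedNodes[i] === element) {
          //clean up the component's resources...
          dispose();
          //...and disconnect the observer so it doesn't leak either
          observer.disconnect();
        }
      }
    });
  });

  observer.observe(element.parentElement, { childList: true });
}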

Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are obliged to have already defined any of your handlers on the connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to the incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection, but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.myHub1.connection.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.myHub2.connection.start();
}

//...some time later...
new MyFirstHandler()
//...and even later still...
new MySecondHandler()

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the connection.yourHubName.client object (normally just a POJO) with something that is aware of the available server methods.  The new client then exposes stubs to which SignalR can connect before our MySecondHandler can provide the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke that when the handler is invoked (with all args etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler.bind(this));
}

SignalRClient.prototype.invokeHandler = function(name) {
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs = Array.prototype.slice.call(arguments, 1);
		handler.apply(this, handlerArgs);
	}
}

SignalRClient.prototype.registerHandler = function(name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		get: function() { return getter },
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
}

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!
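
For example, a quick sketch using the first hub from earlier:

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1', 'otherHandler']);

//start the connection straight away; SignalR binds to the stubs
$.connection.hub.start();

//...some time later, assign the real implementation...
$.connection.myHub1.client.method1 = function (arg) {
  //invoked via the stub with the original arguments
  console.log('method1 called with', arg);
};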

Maintaining Context in TypeScript classes

TypeScript is generally pretty good at persisting this in functions but there are certain circumstances where you can (either accidentally or deliberately) get a class function to run in the wrong context.

class Example {
  private name = 'class context';

  public printName() {
    console.log(this.name);
  }
}

var example = new Example();
example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'wrong context'

The most common scenario where I have accidentally caused this behaviour is where a function is bound to a click handler in Knockout and is executed in the context of the DOM element instead of the containing class.

In JavaScript you can always use myFunction.bind(this) to force the context but having to do that in the TypeScript constructor feels messy…

class Example {
  private name = 'class context';

  //declare the bound copy so TypeScript knows it exists
  public printName: () => void;

  constructor() {
    this.printName = this._printName.bind(this);
  }

  private _printName() {
    console.log(this.name);
  }
}

var example = new Example();

example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'class context'

Thankfully there’s an easy way to get TypeScript to correctly play ball.  Instead of defining the function inline, assign a lambda expression to a public class property:

class Example {
  private name = 'class context';

  public printName = () => {
    console.log(this.name);
  }
}

var example = new Example();

example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'class context'

Much neater!

Faking Mouse Events in D3

D3 is a great library but one of the challenges I have found is with unit testing anything based on event handlers.

In my specific example I was trying to show a tooltip when the user hovered over an element.

hoverTargets
 .on('mouseover', showTooltip(true))
 .on('mousemove', positionTooltip)
 .on('mouseout', closeTooltip);

D3 doesn’t currently have the ability to trigger a mouse event so in order to test the behaviour I have had to roll my own very simple helper to invoke these events.

$.fn.triggerSVGEvent = function(eventName) {
  //create and initialise an event that looks like a native one
  var event = document.createEvent('SVGEvents');
  event.initEvent(eventName, true, true);

  //dispatch it directly on the underlying element
  this[0].dispatchEvent(event);
  return $(this);
};

This is implemented as a jQuery plugin that directly invokes the event as if it had come from the browser.

You can use it as below:

$point
 .triggerSVGEvent('mouseover')
 .triggerSVGEvent('mousemove');

It will probably change over time as I need to do more with it but for now this works as a way to test my tooltip behaviour.
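
For what it’s worth, createEvent/initEvent are deprecated, so a future version would probably switch to the MouseEvent constructor (where supported).  A sketch:

$.fn.triggerSVGEvent = function(eventName) {
  //construct the event directly rather than via createEvent/initEvent
  var event = new MouseEvent(eventName, { bubbles: true, cancelable: true });
  this[0].dispatchEvent(event);
  return $(this);
};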

Custom Operation Names with Swashbuckle 5.0

This is a post about Swashbuckle –  a .NET library that seamlessly adds Swagger support to WebAPI projects.  If you aren’t familiar with Swashbuckle then stop reading right now and go look into it – it’s awesome.

library-api

Swashbuckle has recently released version 5.0 which includes (among other things) a ridiculous array of ways to customise your generated swagger spec.

One such customisation point allows you to change the operationId (and other properties) manually against each operation once the auto-generator has done its thing.

Why Bother?

Good question.  For me, I decided to bother for one very specific reason: swagger-js.  This library can auto-generate a nice accessor object based on any valid swagger specification with almost no effort, whilst doing lots of useful things like handling authorization and parsing responses.

swagger-js uses the operationId property for method names and the default ones coming out of Swashbuckle weren’t really clear or consistent enough.
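
As a rough illustration only (the exact swagger-js client API varies by version, and the URL and names here are made up), the generated accessor ends up being used something like this:

//a sketch only: names and the exact client shape are illustrative
var client = new SwaggerClient({
  url: '/swagger/docs/v1',
  success: function () {
    //each operationId becomes a method name on the generated client,
    //which is why clear, consistent names matter
    client.apis.Users.myCustomName({ id: 42 }, function (response) {
      console.log(response);
    });
  }
});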

Injecting an Operation Filter

The means for customising operations lies with the IOperationFilter interface exposed by Swashbuckle.

public interface IOperationFilter
{
  void Apply(Operation operation, 
    SchemaRegistry schemaRegistry, 
    ApiDescription apiDescription);
}

When implemented and plugged-in (see below), the Apply method will be called for each operation located by Swashbuckle and allows you to mess around with its properties.  We have a very specific task in mind so we can create a SwaggerOperationNameFilter class for our purpose:

public class SwaggerOperationNameFilter : IOperationFilter
{
  public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
  {
    operation.operationId = "???";
  }
}

When you installed the Swashbuckle NuGet package it will have created a SwaggerConfig file in your App_Start folder.  In this file you will likely find a long and well-commented explanation of all of the available configuration points, but to keep things simple we can insert the reference to our filter at the end:

GlobalConfiguration.Configuration
  .EnableSwagger(c =>
  {
    //...
    c.OperationFilter<SwaggerOperationNameFilter>();
  });

Getting the Name

At this point you have a lot of flexibility in how you generate the name for the operation.  The parameters passed in to the Apply method give you access to a lot of contextual information but in my case I wanted to manually specify the name of each operation using a custom attribute.

The custom attribute itself contains a single OperationId property…

[AttributeUsage(AttributeTargets.Method)]
public sealed class SwaggerOperationAttribute : Attribute
{
  public SwaggerOperationAttribute(string operationId)
  {
    this.OperationId = operationId;
  }

  public string OperationId { get; private set; }
}

…and can be dropped onto any action method as required…

[SwaggerOperation("myCustomName")]
public async Task<HttpResponseMessage> MyAction()
{
  //…
}

Once the attributes are in place we can pull the name from our filter using the ActionDescriptor…

operation.operationId = apiDescription.ActionDescriptor
  .GetCustomAttributes<SwaggerOperationAttribute>()
  .Select(a => a.OperationId)
  .FirstOrDefault();

Voila!

RESTful Reporting with Visual Studio Online

My team uses Visual Studio Online for work item tracking and generally speaking it has pretty good baked-in reporting.  I can see an overview of the current sprint, I can see capacity and I can see the burndown.  One area that I’ve always felt it was missing, however, is a way to analyse the accuracy of our estimations.

We actually make pretty good estimations, in general terms: we rarely over-commit and it’s unusual for us to add anything significant to a sprint because we’ve flown through our original stories.  This is based on a completely subjective guess at each person’s capacity and productivity which – over time – has given us a good overall figure that we know works for us.

But is that because our estimates are good, or because our bad estimates are fortuitously averaging out?  Does our subjective capacity figure still work when we take some people out of the team and replace them with others?

This is an area where the reporting within VSO falls down and the limitation boils down to one issue: there is no way to (easily) get the original estimate for a task once you start changing the remaining work.  So how can we get at this information?

Enter the API

I had seen a few articles on the integration options available for VSO but hadn’t really had a chance to look into it in detail until recently.  The API is pretty extensive and you can run pretty much any query through the API that you can access through the UI, along with a bunch of useful team-related info.  Unfortunately the API suffers the same limitation as the VSO portal, but we can work around it using a combination of a little effort and the Work Item History API.

Getting the Data

There is nothing particularly complicated about pulling the relevant data from VSO:

  1. Get a list of sprints using the ClassificationNode API to access iterations
  2. Use Work Item Query Language to build a dynamic query and get the results through the Query API.  This gives us the IDs of each Task in the sprint
  3. For each Task, use the Work Item History API to get a list of all updates (see the sketch after this list)
  4. Use the update history to build up a picture of the initial state of each task
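
Step 3, for example, boils down to one REST call per task.  A sketch (the account name, token and api-version are placeholders):

//fetch the revision history for a single task
var request = require('request');
var token = process.env.VSO_TOKEN; //personal access token (placeholder)

function getTaskUpdates(taskId, done) {
  var url = 'https://myaccount.visualstudio.com/DefaultCollection' +
            '/_apis/wit/workitems/' + taskId + '/updates?api-version=1.0';

  request({ url: url, json: true, auth: { user: '', pass: token } },
    function (err, response, body) {
      //the REST API wraps the array of updates in { count, value }
      done(err, body && body.value);
    });
}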

Point 4 has a few caveats, however.  The history API only records the fields that have changed in each revision so we don’t always get a complete picture of the Task from a single update.  There are a few scenarios that need to be handled:

  1. Task is created in the target sprint and has a time estimate assigned at the same time.  This is then reduced during the sprint as the Task moves towards completion
  2. Task is created in the target sprint but a time estimate is assigned at a later date before having time reduced as the sprint progresses
  3. Task is created in another sprint or iteration with a time assigned, then moved to the target sprint at a later date
  4. Task is created and worked on in another sprint, then is moved to the target sprint having been partially completed

The simplest scenario (#1 above) would theoretically mean that we could take the earliest update record with the correct sprint.  However, scenario 2 means that the first record in the correct sprint would have a time estimate of zero.  Worse, because we only get changes from the API we wouldn’t have the correct sprint ID on the same revision as the new estimate: it wouldn’t have changed!

The issue with scenario 3 is similar to #2: when the Task is moved to the target sprint the time estimate isn’t changed so isn’t included in the revision.

A simplistic solution that I initially tried was to take the maximum historical time estimate for the task (with the assumption that time goes down as the sprint progresses, not up).  Scenario 4 puts an end to this plan as the maximum time estimate could potentially be outside of the current sprint.  If I move a task into a sprint with only half its work remaining, I don’t really want to see the other half as being completed in this sprint.

Calculating the Original Estimate: Solution

The solution that I eventually went with here was to iterate through every historical change to the work item and store the “current” sprint and remaining work as each change was made.  That allows us to get the amount of remaining work at each update alongside the sprint in which it occurred; from this point, taking a maximum of the remaining work values gives us a good number for the original amount of work that we estimated.

It does rely on the assumption that Task estimations aren’t increased after work has started (e.g. start at 2 hours, get 1 hour done, then realise there’s more work and increase back to 2), but in that situation we tend to create new tasks instead of adjusting existing ones (we did find more work, after all), which works for us.
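
In code, that walk through the history might look something like this sketch (it assumes each update’s fields object contains only the values that changed, which is how the API returns them):

//work out the original estimate for one task from its update history
function getOriginalEstimate(updates, targetSprintPath) {
  var currentSprint = null;
  var remainingWork = 0;
  var originalEstimate = 0;

  updates.forEach(function (update) {
    var fields = update.fields || {};

    //carry the "current" value of each field forward through the history
    if (fields['System.IterationPath']) {
      currentSprint = fields['System.IterationPath'].newValue;
    }
    if (fields['Microsoft.VSTS.Scheduling.RemainingWork']) {
      remainingWork = fields['Microsoft.VSTS.Scheduling.RemainingWork'].newValue;
    }

    //only count the estimate while the task sat in the target sprint
    if (currentSprint === targetSprintPath) {
      originalEstimate = Math.max(originalEstimate, remainingWork);
    }
  });

  return originalEstimate;
}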

Tying it all Together

Once I was able to get at the data it was relatively simple to wrap a reporting service around the implementation.  I went with node & express for the server-side implementation with a sprinkling of angular on top for the client, but visualising the data wasn’t the challenge here!

With this data available I can see a clear breakdown of how different developers affect the overall productivity of the team and can make decisions off the back of this.  I have also seen that having a live dashboard displaying some of the key metrics acts as a bit of a motivator for the people who aren’t getting through the work they expect to, which can’t be a bad thing.

I currently have the following information displayed:

  • Total remaining, completed and in-progress work based on our initial estimates
  • Percentage completion of the work
  • Absolute leaderboard; i.e. who is getting through the most work based on the estimates
  • Adjusted leaderboard; i.e. who is getting through the most work compared to our existing subjective estimates
  • Current tasks

I hope that the VSO API eventually reaches a point that I can pull this information out without needing to write code, but it’s good to know I can get the data if I need it!