Finding Freedom in “JavaScript Fatigue”

A lot of people have spoken about “JavaScript fatigue”: the idea that there are so many new frameworks, tools and ideas available to the average JavaScript developer that it’s impossible to keep up. I thought I’d add my opinion.

When I started learning JavaScript I used to try to keep up with everything. I suspect now that I just didn’t know how much was out there, but it really felt like an achievable target. I would make a real effort not only to read up on new frameworks & libraries but to try them out: maybe a quick tutorial, maybe a few introductory posts, maybe even a small project.

Now, things have changed and it is obvious to most of us that there is no way you can invest that much time in every new thing that comes out.

For me, this is not a bad thing. In fact, I find it pretty liberating.

The whole situation reminds me a little bit of when I first joined Twitter. I was following maybe 20 people and I would make a real effort to read every single tweet. Ridiculous, right? But still I tried. Then I started following more people and then more people and with every extra piece of content it became less and less realistic to get through everything.

So I let go. I had to.

I couldn’t keep up with everything so I stopped trying to do the impossible and learned to let the mass of information wash over me. If something particularly catches my eye then I can read up on it but if I miss something? Who really cares?

Nowadays it feels the same with JavaScript frameworks. I may never have a chance to get my hands dirty with everything that comes out. In fact, I may never even hear of some of them. But I don’t worry any more about trying to keep up and if something really is the next big thing… well, I’m pretty sure I’ll hear about it soon enough.

Cleaning up Resources using MutationObserver

Cleaning up resources?

Let’s say you’ve written a shiny new component in your favorite framework and somewhere along the way you’ve allocated a resource that cannot be automatically cleaned up by the browser.

Maybe you attached an event handler to the resize event on the window.  Maybe you passed a callback to a global object.  Whatever the case, you need to tidy up those resources at some point.

Easy enough, right?  Put a dispose method on the object to clean up its leftovers and make sure it’s called before the object is discarded.

Problem solved?

Problem not quite solved

What if, for whatever reason, your component doesn’t have control over the parent?  You could trust that the user will do the right thing and call dispose for you but you can’t guarantee it.

As an alternative, can we automatically clean up our resources as soon as our containing DOM element is removed?

Yes.  Yes we can.

Using MutationObserver

The MutationObserver API (which has pretty great browser support) lets you listen for changes made to a DOM node and react to them.  We can use it here to perform our cleanup.

When we create an instance of MutationObserver we specify a callback that gets details of changes made to the parent.  If those changes include the removal of our target element then we can call dispose.

Here we are observing the parent of our target node, not the node itself: an observer attached to the node would not be notified when that node is removed from its parent.  We need to specify { childList: true } as the second parameter to be notified of additions and removals of child items.
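
As a minimal sketch (targetNode is our component’s root element and dispose is the cleanup method from earlier – both hypothetical names):

var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (mutation) {
    // removedNodes is a NodeList of nodes removed from the parent
    if (Array.prototype.indexOf.call(mutation.removedNodes, targetNode) !== -1) {
      dispose();
    }
  });
});

// observe the parent, asking only for child addition/removal records
observer.observe(targetNode.parentElement, { childList: true });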

Disposing the Observer

Finally, we need to make sure that the observer itself doesn’t cause a memory leak!  The observer is connected to the parentElement which (we assume) will still be hanging around, so we need to make sure that we disconnect it as part of disposal.

With everything pulled together the final version looks like this…
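
Something along these lines – a sketch that wraps the pieces above into a hypothetical disposeOnRemove helper:

function disposeOnRemove(targetNode, dispose) {
  var observer = new MutationObserver(function (mutations) {
    var removed = mutations.some(function (mutation) {
      return Array.prototype.indexOf.call(mutation.removedNodes, targetNode) !== -1;
    });
    if (removed) {
      // clean up the component's resources...
      dispose();
      // ...and disconnect the observer itself so it cannot leak
      observer.disconnect();
    }
  });
  observer.observe(targetNode.parentElement, { childList: true });
}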

Supporting SignalR Client Handlers after Connection Start

(Yes, that is a pretty specific post title but then this is a pretty specific problem…)

In general, when you create a new SignalR connection you are obliged to have already defined any of your handlers on the connection.yourHubName.client object. This allows SignalR to discover those handlers and hook them up to the incoming messages.

Problem: Multiple connection sources

This approach is fine as long as you have a single place from which you are starting your connection but what if you have 2 hubs, 2 separate client handlers…2 of everything?

They will both automatically share a SignalR connection so you can end up with a bit of a race condition where the first handler to start the connection will be the only handler registered.  Imagine the following handlers…

function MyFirstHandler() {
  //assign the handler
  $.connection.myHub1.client.method1 = function() { ... };

  //start the connection
  $.connection.myHub1.connection.start();
}

function MySecondHandler() {
  //assign the handler
  $.connection.myHub2.client.method2 = function() { ... };

  //start the connection
  $.connection.myHub2.connection.start();
}

//...some time later...
new MyFirstHandler()
//...and even later still...
new MySecondHandler()

By the time we create MySecondHandler we have already created the connection and so method2 is not attached and will never be invoked.

Solution: Proxy implementation

We can work around this by replacing the connection.yourHubName.client object (normally just a POJO) with something that is aware of the available server methods.  The new client then exposes stubs to which SignalR can connect before our MySecondHandler can provide the “real” handler implementations.

//before creating any handlers
$.connection.myHub1.client = new SignalRClient(['method1','otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

The SignalRClient implementation has 3 requirements for each named handler:

  1. Always return a valid handler function for SignalR to bind, even if the real handler hasn’t been assigned yet
  2. If the real handler has been assigned, invoke that when the handler is invoked (with all args etc.)
  3. Allow client.myHandler = function(){} assignments for compatibility with existing code

The last requirement means that we need to use Object.defineProperty with custom getter and setter implementations.  The getter should always return a stub method; the setter should store the real handler; and the stub method should invoke the real handler (if assigned).

function SignalRClient(methods) {
	this._handlers = {};
	methods.forEach(this.registerHandler.bind(this));
}

//look up the real handler (if one has been assigned) and forward
//all arguments on to it
SignalRClient.prototype.invokeHandler = function (name) {
	var handler = this._handlers[name];
	if (handler) {
		var handlerArgs = Array.prototype.slice.call(arguments, 1);
		handler.apply(this, handlerArgs);
	}
};

//define an enumerable property whose getter returns a stub bound to the
//handler name and whose setter stores the real implementation
SignalRClient.prototype.registerHandler = function (name) {
	var getter = this.invokeHandler.bind(this, name);
	Object.defineProperty(this, name, {
		enumerable: true,
		get: function () { return getter; },
		set: function (value) { this._handlers[name] = value; }.bind(this)
	});
};

Note that our defined properties must also be marked as enumerable so that the SignalR code picks up on them when it attempts to enumerate the client handler methods.

Now – provided we know the available methods up front – we can start the connection whenever we like and assign our handlers later!
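
For example, pulling everything together (a sketch reusing the hubs from above):

//register the proxy clients before anything starts the connection
$.connection.myHub1.client = new SignalRClient(['method1', 'otherHandler']);
$.connection.myHub2.client = new SignalRClient(['method2']);

//start the connection straight away...
$.connection.myHub1.connection.start();

//...and assign the real handler later; the enumerable stub has already
//been bound by SignalR and will forward calls to this implementation
$.connection.myHub2.client.method2 = function (message) {
  console.log('method2 invoked with', message);
};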

Faking Mouse Events in D3

D3 is a great library but one of the challenges I have found is unit testing anything based on event handlers.

In my specific example I was trying to show a tooltip when the user hovered over an element.

hoverTargets
 .on('mouseover', showTooltip(true))
 .on('mousemove', positionTooltip)
 .on('mouseout', closeTooltip);

D3 doesn’t currently have the ability to trigger a mouse event, so in order to test the behaviour I have had to roll my own very simple helper to invoke these events.

$.fn.triggerSVGEvent = function(eventName) {
 //create and initialise an event that can be dispatched to SVG elements
 var event = document.createEvent('SVGEvents');
 event.initEvent(eventName, true, true); //type, bubbles, cancelable
 //dispatch directly against the underlying DOM node
 this[0].dispatchEvent(event);
 return $(this);
};

This is implemented as a jQuery plugin that directly invokes the event as if it had come from the browser.

You can use it as below:

$point
 .triggerSVGEvent('mouseover')
 .triggerSVGEvent('mousemove');

It will probably change over time as I need to do more with it but for now this works as a way to test my tooltip behaviour.

Custom Operation Names with Swashbuckle 5.0

This is a post about Swashbuckle –  a .NET library that seamlessly adds Swagger support to WebAPI projects.  If you aren’t familiar with Swashbuckle then stop reading right now and go look into it – it’s awesome.

Swashbuckle has recently released version 5.0 which includes (among other things) a ridiculous array of ways to customise your generated swagger spec.

One such customisation point allows you to change the operationId (and other properties) manually against each operation once the auto-generator has done its thing.

Why Bother?

Good question.  For me, I decided to bother for one very specific reason: swagger-js.  This library can auto-generate a nice accessor object based on any valid swagger specification with almost no effort, whilst doing lots of useful things like handling authorization and parsing responses.

swagger-js uses the operationId property for method names and the default ones coming out of Swashbuckle weren’t really clear or consistent enough.
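
To make the payoff concrete, here is a rough sketch of swagger-js usage (assuming the 2.x API; the spec URL, Library tag and getBooks operationId are all hypothetical):

//SwaggerClient is provided by the swagger-client script/package
var client = new SwaggerClient({
  url: 'http://localhost/swagger/docs/v1',
  success: function () {
    //each operationId becomes a method on the generated accessor object
    client.apis.Library.getBooks({ genre: 'scifi' }, function (response) {
      console.log(response.obj);
    });
  }
});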

Injecting an Operation Filter

The means for customising operations lies with the IOperationFilter interface exposed by Swashbuckle.

public interface IOperationFilter
{
  void Apply(Operation operation, 
    SchemaRegistry schemaRegistry, 
    ApiDescription apiDescription);
}

When implemented and plugged-in (see below), the Apply method will be called for each operation located by Swashbuckle and allows you to mess around with its properties.  We have a very specific task in mind so we can create a SwaggerOperationNameFilter class for our purpose:

public class SwaggerOperationNameFilter : IOperationFilter
{
  public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
  {
    operation.operationId = "???";
  }
}

When you installed the Swashbuckle NuGet package it will have created a SwaggerConfig file in your App_Start folder.  In this file you will likely have a long and well-commented explanation of all the available configuration points, but to keep things simple we can insert the reference to our filter at the end:

GlobalConfiguration.Configuration
  .EnableSwagger(c =>
  {
    //...
    c.OperationFilter<SwaggerOperationNameFilter>();
  });

Getting the Name

At this point you have a lot of flexibility in how you generate the name for the operation.  The parameters passed in to the Apply method give you access to a lot of contextual information but in my case I wanted to manually specify the name of each operation using a custom attribute.

The custom attribute itself contains a single OperationId property…

[AttributeUsage(AttributeTargets.Method)]
public sealed class SwaggerOperationAttribute : Attribute
{
  public SwaggerOperationAttribute(string operationId)
  {
    this.OperationId = operationId;
  }

  public string OperationId { get; private set; }
}

…and can be dropped onto any action method as required…

[SwaggerOperation("myCustomName")]
public async Task<HttpResponseMessage> MyAction()
{
  //…
}

Once the attributes are in place we can pull the name out in our filter using the ActionDescriptor…

operation.operationId = apiDescription.ActionDescriptor
  .GetCustomAttributes<SwaggerOperationAttribute>()
  .Select(a => a.OperationId)
  .FirstOrDefault();

Voila!

3 Ways to Deal with SFOUC in KnockoutJS

What is SFOUC?

A Sudden Flash Of Unstyled Content, or SFOUC, refers to that irritating few milliseconds between when your web page loads and when all of your dynamic content pops into place.

The reason for this annoying phenomenon is that your HTML is being rendered a split second before your CSS & JavaScript files have finished downloading and running.  The un-augmented HTML sits there just long enough to catch the eye of your user, and then is whisked away as soon as all the lovely dynamic content is ready to replace it.

I’ve found that this is particularly evident when working with KnockoutJS because generally the last line of your startup code is ko.applyBindings(viewModel), and until you bind the view model you always see the unstyled content.

So how can we stop this menace?

Option 1: The “Classic” Approach

KnockoutJS is just a library on top of JavaScript so there is no reason why we can’t use the same approach as is suggested for non-Knockout apps.

There are a few flavours to this, but as a general solution:

  1. Add a class to the body element (e.g. no-js)
  2. Add a class to the dynamic content (e.g. dynamic-content)
  3. Add a style that hides dynamic-content within no-js

    .no-js .dynamic-content { display: none; }
    
  4. Add some JavaScript to remove the no-js class from the body once everything is loaded

    document.body.className = 
        document.body.className.replace("no-js", "");
    

This will do the job quite nicely, but still relies on the CSS loading quickly enough to hide the dynamic content before anyone notices.

Option 2: The “Visible” Approach

If the idea is to reduce the amount of time the basic HTML is visible then the fastest place to put the “don’t show this” instruction is in the HTML itself – using an inline style="display:none" attribute.

The problem with this approach is that you then have to go through and remove all of those styles once your JavaScript is ready to run; this is where the visible binding can help us out.

Under the covers, the visible binding simply sets the display style to none or to "" (i.e. not set) so we can use this to automatically remove the style tags created above:

<div style="display:none" data-bind="visible: true"> </div>
<!-- or -->
<div style="display:none" data-bind="visible: $data"> </div>

The first of these examples will display the div as soon as Knockout is bound; the second will display as soon as the current context has a truthy value.

Option 3: The “Make-Everything-A-Template” Approach

Instead of trying to hide rendered HTML content until we’re ready, what if we just stopped rendering it altogether?  If we put all of our dynamic content inside script tags (which obviously won’t be rendered by the browser) then we can use the template binding to render the content from inside the script tags in the context of a view model.

<div data-bind="template: 'dynamic-content'">
    <!-- will appear empty on page load -->
</div>

<script id="dynamic-content" type="text/html">
    <h1>This will be rendered by Knockout</h1>
</script>

This makes for slightly more verbose markup than the other approaches but we can improve on this by switching to the containerless syntax:

<!-- ko template: "dynamic-content" --><!-- /ko -->
<script id="dynamic-content" type="text/html">
    <h1>This will be rendered by Knockout</h1>
</script>

A Note about Progressive Enhancement

Obviously if you want to employ progressive enhancement then these approaches will not work for you – they work by hiding the content until it is ready to be rendered.  If you are using progressive enhancement then I’m going to assume that the original HTML renders nicely anyway and that you don’t care about the SFOUC problem!

Protecting your CouchDB Views

If you work with a SQL or other RDBMS database you most likely have your schema backed up somewhere under source control.  Maybe it’s a bunch of SQL scripts, maybe it’s the classes from which you generated your Entity Framework schema, but you almost certainly have some way of restoring your DB schema into a new database (at least I hope that you do!).

But what about CouchDB?

CouchDB, as anyone who has read the first sentence of a beginner’s guide will know, is a Non-Relational Database and so it does not have a schema.  All of the data is stored as arbitrary JSON documents which can (and do) contain data in a wide range of formats.

The problem is that whilst there is no schema to “restore” into a new database, there is another very important construct: views.

CouchDB Views

Views within CouchDB define how you query the data.  Sure, you can always fall back to basic ID-lookup to retrieve documents, but as soon as you want to do any form of complicated (i.e. useful) querying then you will most likely need to create a view.

Each view comprises 2 JavaScript functions: a map function and an optional reduce function.  I don’t want to go into a lot of detail on the map-reduce algorithm or how CouchDB views work under the covers (there are plenty of other resources out there) but the important thing here is that you have to write some code that will play a very significant role in how your application behaves and that should be in source control somewhere!

Storing Views in Source Control

In order to put our view code under source control we first need to get it into a format that can be saved to disk.  In CouchDB, views are stored in design documents and the design documents are stored as JSON, so we can get a serialized copy of the view definitions by just GETting the design document from couch:

curl http://localhost:5984/databaseName/_design/designDocumentName

Pass the output through a pretty-printer and you will see the contents of the design document in a JSON structure:

{
   "_id": "_design/designDocumentName",
   "_rev": "1-47b20721ccd032b984d3d46f61fa94a8",
   "views": {
       "viewName": {
           "map": "function (doc) {\r\n\t\t\t\tif (doc.value1 === 1) {\r\n\t\t\t\t\temit(\"one\", null);\r\n\t\t\t\t} else {\r\n\t\t\t\t\temit(\"other\", {\r\n\t\t\t\t\t\tother: doc.value1\r\n\t\t\t\t\t});\r\n\t\t\t\t}\r\n\t\t\t}"
        }
   },
   "filters": {
       "filterName": "function () {}"
   }
}

This is, at least, a serialized representation of the source for our view, and there are definitely some advantages to using this approach.  On the other hand, there are quite a few things wrong with using this structure in source control:

Unnecessary Data
The purpose of this exercise is to make sure that the view code is safely recoverable; whilst there is debatably some use in storing the ID, the revision (_rev) field refers to the state of a particular database, may vary between installations, and shouldn’t be needed.

Functions as Strings
The biggest problem with this approach is that the map, reduce and filter functions are stored as strings.  You may be able to put up with this in simple examples, but as soon as they contain any complexity (or indentation, as seen above) they become completely unreadable.  Tabs and newlines are all concatenated into one huge several-hundred-character string, all stored on one line.  Whilst this is not a technical issue (you could still use these to restore the views) it makes any kind of change tracking impossible to understand – every change is on the same line!

As well as the readability issues we also lose the ability to perform any kind of analysis on the view code.  Whether that is static analysis (such as jsLint), unit testing or some-other-thing, we cannot run any of them against a string.

An Alternative Format

Instead of taking a dump of the design documents directly from CouchDB, I would recommend using an alternative format geared towards readability and testability.  You could be pretty creative in exactly how you wanted to lay this out (one file per design document, one file per view…) but I have found that the structure below seems to work quite well:

exports.designDocumentName = {
	views: {
		viewName: {
			map: function (doc) {
				//some obviously-fake logic for demo purposes
				if (doc.value1 === 1) {
					emit("one", null);
				} else {
					emit("other", {
						other: doc.value1
					});
				}
			}
		}
	},
	filters: {
		filterName: function () { }
	}
};

exports.secondDesignDocument = {
	//...
};

This has several advantages over the original format:

  • It is much easier to read!  You get syntax highlighting, proper indentation and the other wonderful features of your favourite code editor
  • There is no redundant information stored
  • jsLint/jsHint can easily be configured to validate the functions
  • By using the CommonJS exports object, the code is available to unit tests and other utilities (see the sketch below; more on that later)
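
A quick sketch of what such a unit test might look like, assuming the definitions above are saved as view-definitions.js (emit is stubbed as a global so the map function can run outside CouchDB):

var assert = require('assert');
var definitions = require('./view-definitions');

//collect everything the map function emits
var emitted = [];
global.emit = function (key, value) {
	emitted.push({ key: key, value: value });
};

definitions.designDocumentName.views.viewName.map({ value1: 1 });

assert.equal(emitted.length, 1);
assert.equal(emitted[0].key, 'one');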

There is one significant disadvantage though: because I have pulled this structure out of thin air, CouchDB has no way of understanding it.  This means that whilst my view code is safe and sound under source control I have no way of restoring it.  At least with the original document-dump approach I could manually copy/paste the contents of each design document into the database!

So how can we deal with that?

Restoring Views

As I mentioned above, one of the advantages of attaching design documents as objects to the CommonJS exports object is that they can be exposed to node utilities very easily.  To demonstrate this I have created a simple tool that is able to create or update design documents from a file such as the one above in a single command: view-builder.

You can see the source for the command on GitHub or you can install it using NPM.

npm install -g view-builder

After installation you can run the tool like this:

view-builder --url http://localhost:5984/databasename --defs ./view-definitions.js

This will go through the definitions and for each of the design documents…

  1. Download the latest version of the design document from the server
  2. Create a new design document if none already exists
  3. Compare each view and filter to identify any changes
  4. If changes are present, update the version on the server

The comparison is an important step in this workflow – updating a design document will cause CouchDB to rebuild all of the views within it; if you have a lot of data then this can be a very slow process!

Now we have a human-readable design document definition that can be source-controlled, unit tested and then automatically applied to any database to which we have access.  Not bad…

Other Approaches

Whilst this system works for me, I can’t imagine that I am the first person to have considered this problem.  How is everyone else protecting their views?  Any suggestions or improvements in the comments are always welcome!