When writing code which needs to be built before it can be used, whether that’s a transpile step like ES7 JavaScript to ES5 or a compile step like with Go, you’ll likely want to re-run that build whenever a file in your application tree is modified. A large number of project- and language-specific tools have been developed to tackle this problem, but did you know that there are system-level packages available that you can use across all your projects?

Enter inotifywait, an efficient and easy-to-use CLI tool which uses Linux’s inotify interface to watch for changes to the file system, and fswatch for those on OS X. Many of the language- and project-specific tools are simply wrappers around these two.

If, like me, you don’t like adding more build tools and layers of abstraction than necessary, then you’re probably already using Make to build and develop your application, and you’ll be happy to know that using these tools with it is trivial. Make has no way of knowing that a file has changed until the next time you run make, so many people resort to something like the following, which runs the build target every two seconds:

watch -n 2 make build

Or they build the loop into the Makefile itself:

.PHONY: watch
watch:
  while true; do \
    make build --silent; \
    sleep 1; \
  done

This works, but running in a loop performs a lot of unnecessary work, especially since the file system can tell us, via inotify, when a file or directory has been modified. Instead of looping blindly, we wait for a file system event we’re interested in and then run our build target. With the following code we can create a make target in our Makefile which watches for file changes, recursively, under a specified directory.

.PHONY: watch
watch:
  while true; do \
    inotifywait -qr -e modify -e create -e delete -e move app/src; \
    make build; \
  done

This works by creating an infinite loop which starts when you run the watch target. Before the first iteration can finish it hits inotifywait, which sets up listeners on all of the files and directories under app/src. Those listeners wait for any files to be modified, created, deleted, or moved in or out of app/src/…. When a file or directory changes, inotifywait exits and lets the loop continue, triggering the call to your build target. That target executes in its entirety, building only the file(s) which changed (assuming you’ve set up your Makefile properly), and then the loop starts again with inotifywait waiting for the next change.
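
If you’re on OS X, fswatch fills the same role as inotifywait. A rough equivalent is sketched below; flags differ slightly between fswatch versions, but -1 exits after the first batch of events and -r watches the directory recursively:

.PHONY: watch
watch:
  while true; do \
    fswatch -1 -r app/src > /dev/null; \
    make build; \
  done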

Using this technique will allow you to create an easy and efficient file change watcher for your Makefile without too many additional tools. If you have any questions or comments, leave them in the comments below or mention me on Twitter @fromanegg. Thanks for reading!

Juju works great for software development involving simple environments and is amazing for complex ones. A recent question on Ask Ubuntu, “Is Juju a suitable tool for development as well as deployment?”, made me realize that we use Juju for development every day but there really isn’t much documentation on the subject.

For the rest of this post I’m going to assume that you are already familiar with the concept of Juju and what problems it solves on the deployment side of things. If you aren’t, I recommend reading an earlier post of mine, “Juju - Explain it to me like I’m 5”.

One of the biggest problems when developing any kind of software is getting the dependencies up and running in a way which matches the production environment closely enough that you aren’t going to run into “this environment only” bugs. Sure, you can install MySQL on your local machine and load the database dump into that install, but you also have to make sure you apply all of the same configuration, indexes, build flags, etc. as the production environment.

Even once you have it up and running, you then need the ability to update it after someone else on the project makes changes, all while maintaining high production parity and avoiding unnecessary downtime.

To illustrate the benefits of using Juju for development I’m going to use a fictitious photo and video sharing website. A website like this requires multiple services: a load balancer, web server, database, blob store, user authentication, photo processor, and video processor.

Keep in mind that a Juju charm can be written using any programming language or DSL that can be executed on the host machine; it can use Puppet, Chef, Python, JavaScript, Docker, and pretty much anything else you would like. Juju provides distinct advantages for project development depending on the lifecycle of the project. For our photo and video site, let’s first assume that we’re just starting the project; later on we’ll assume that the project is mature, released, and still under active development.
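
To make that concrete, a charm’s hooks are just executables in its hooks/ directory, so an install hook can be a shell script, a Python script, or anything else that runs on the host. The following is a deliberately tiny, hypothetical example (a real charm also needs a metadata.yaml and usually several more hooks):

#!/bin/bash
# hooks/install: run once when the unit is first deployed.
set -e
apt-get update
apt-get install -y nodejs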

Just starting out

Typically when a project starts you’re only going to need a couple of services: the database and your web server. Let’s bootstrap our environment and deploy the database and web server on our local machine.

juju bootstrap local
juju deploy apache2
juju deploy mongodb

Great! Five minutes in and we now have apache2 and mongodb running on our machine in separate LXC containers, and we can start developing our website and pointing it at these services.
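
To point your in-development site at those services you just need the addresses Juju assigned to the containers. Running juju status lists every unit along with its public-address, so it’s a matter of grabbing the mongodb and apache2 addresses from that output and dropping them into your application’s configuration:

# Look for the public-address field under each service's units.
juju status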

In parallel, a teammate is working on the user authentication service; it’s going well and they want someone to help them test it in the application environment. So let’s grab the service they have been working on.

mkdir -p ~/charms/trusty && cd ~/charms/trusty
git clone --depth 1 git@github.com:photovideo/authenticator
juju deploy --repository=. local:trusty/authenticator

For more information on deploying local charms see This Post on Ask Ubuntu.

A few minutes later you have an identical copy of their user authentication service, and you can point your website at it and give it a try. A little later the authenticator service has been updated and you’d like to run the new version.

cd ~/charms/trusty/authenticator
git pull
juju upgrade-charm --repository=.

This process repeats itself for each service and across each member of your team, allowing everyone to update their dependencies within minutes to identical representations of how they’ll run in production.

Released project

Now that your project has been released, deployed using Juju, and is running in production, you’ve had a chance to take advantage of Juju’s deployment and scaling features. But how does Juju help you develop now?

In some ways it’s even easier to deploy. In this case I’m going to assume that your services are private and not stored in the Juju Charm Store; if they were, you wouldn’t have to clone the repositories first.

mkdir -p ~/charms/trusty && cd ~/charms/trusty
git clone --depth 1 git@github.com:photovideo/mongodb
git clone --depth 1 git@github.com:photovideo/authenticator
…

juju-quickstart -e local photovideo.yaml

Taking advantage of Juju Quickstart and Juju’s bundle functionality, you can deploy your entire environment with identical services, configuration, and machine placements. Quickstart will open up the GUI, which allows you to modify the machine placement of any of those services and change configuration values before deploying to your machine. Once you hit commit, sit back and wait for it to deploy an identical copy of your production environment on your local machine.
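
For reference, a bundle like photovideo.yaml is just a YAML description of the services, their configuration, and their relations. A heavily trimmed, hypothetical sketch might look something like the following; the service names, charms, and relations are made up for the photo/video example, and the exact format depends on your Juju and quickstart versions:

photovideo:
  series: trusty
  services:
    haproxy:
      charm: cs:trusty/haproxy
      num_units: 1
    apache2:
      charm: cs:trusty/apache2
      num_units: 1
    mongodb:
      charm: cs:trusty/mongodb
      num_units: 1
    authenticator:
      charm: local:trusty/authenticator
      num_units: 1
  relations:
    - [haproxy, apache2]
    - [authenticator, mongodb]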

Now you can work on the specific service you’re interested in within an environment identical to everyone else’s on your team. And when a service gets updated by another member of your team it’s trivial to update.

cd ~/charms/trusty/authenticator
git pull
juju upgrade-charm --repository=.

I hope this gets you excited to use Juju for development as well as deployment. My team has been using Juju for development this way for over a year. It allows us to be more productive because we don’t have to waste time installing and updating services the hard way.

I’ll be creating a follow-up to this post with real code examples and workflows for doing the actual development of these services, so stay tuned! Thanks for reading; if you have any questions or comments, leave them below or hit me up on Twitter @fromanegg.

Over my career I’ve written in a number of different programming languages, most of them dynamic. It’s been about 10 years since I last wrote a project from start to finish in a typed language, C++, but recently I’ve been working with Go. It’s safe to say I had become blissfully ignorant of the benefits and challenges of a typed language, and in working with Go I found myself really enjoying the explicit declaration of types for the stability and code legibility it brings.

In many JavaScript projects you’ll see something like this:

function dataFiller() {
  var myObject = {};
  // 5 lines later
  myObject.Foo = 'bar';
  // 10 lines later
  myObject.Baz = 'bax';
  // 5 lines later
  return myObject;
}

This essentially means that you must read through an entire function’s execution, and sometimes an entire module’s, to see what the structure of that object will become.

Now let’s compare that to Go:

type myObject struct {
  Foo string
  Baz string
}

func dataFiller() *myObject {
  var data = &myObject{}
  // 5 lines later
  data.Foo = "bar"
  // 10 lines later
  data.Baz = "bax"
  // 5 lines later
  return data
}

Here you don’t even have to read further than the function declaration to know what the function will return, and then you simply have to reference that type in the file to know ahead of time what its structure will be.

Throughout my time as a developer I’ve noticed that it’s quite rare that you cannot predict with 100% certainty what the data structure of your variables will be, but in dynamic languages we don’t ever seem to outline that structure for people reading and writing the code. This got me thinking about how this workflow could be adopted in JavaScript to give us the benefits of types using native language constructs, without having to use a compile target like TypeScript. In practice it turns out to be quite simple:

// Method to convert defined object literal 'type'
// into a 'locked down' data store.
function createObject(obj) {
  var stub = Object.create(null);
  Object.keys(obj).forEach(function(key) {
    Object.defineProperty(stub, key, {
      configurable: true,
      enumerable: true,
      writable: true,
      value: obj[key]
    });
  });
  Object.seal(stub);
  return stub;
}

// Your 'type' which will be used to create
// usable data store instances.
var myObject = {
  Foo: '',
  Baz: ''
};

// Fills the object with data.
// @method dataFiller
// @return {myObject}
function dataFiller() {
  var data = createObject(myObject);
  // Set values like normal.
  data.Foo = 'bar';
  data.Baz = 'bax';
  return data;
}

var fullData = dataFiller();
fullData.Foo = "can Update"; // Updated
fullData.Qux = "Can't add new properties"; // Not added
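
Because each property is created with Object.defineProperty, the same helper can be extended to enforce property types as well. The following is a rough sketch of that idea, a hypothetical variation on createObject() rather than part of the pattern above: it infers each property’s type from its initial value and swaps the plain data property for a getter/setter pair which throws on a mismatched assignment.

// Sketch of a variation on createObject() which also enforces the
// type of each property, inferred from the value in the 'type' literal.
function createTypedObject(obj) {
  var stub = Object.create(null);
  Object.keys(obj).forEach(function(key) {
    var value = obj[key];
    var type = typeof value;
    Object.defineProperty(stub, key, {
      enumerable: true,
      get: function() { return value; },
      set: function(newValue) {
        if (typeof newValue !== type) {
          throw new TypeError(key + ' must be a ' + type);
        }
        value = newValue;
      }
    });
  });
  Object.seal(stub);
  return stub;
}

var typedData = createTypedObject(myObject);
typedData.Foo = 'bar'; // Updated
typedData.Foo = 42;    // Throws TypeError: Foo must be a string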

Following this pattern will allow you to write JavaScript with predefined data types, which helps tremendously with readability for a minimal amount of additional work. This is just a basic example of how a typed structure could be applied to JavaScript; the createObject() method could be expanded to add getters and setters which enforce the property types, as sketched above, and you could even extend this idea to Go-like interfaces following a similar structure. I feel the trivial trade-off in additional lines of code is well worth the structure which is now being enforced. What do you think? Have you written a large JavaScript application before where predefined data structures helped? Let me know in the comments below or on Twitter @fromanegg. Thanks for reading!

When using ‘use strict’; in your scripts you’ll find that you are no longer allowed to overwrite native methods like FileReader(), so how do you test that these methods are being called with the appropriate parameters? Let’s start with a typical function call involving FileReader() and then modify it to make it easier to test.

function importFile(file) {
  var reader = new FileReader();
  reader.onload = function(e) {
    processData(e.target.result);
  };
  reader.readAsText(file);
}

In the days before ‘use strict’; you could simply stub out the global FileReader(), but since that’s no longer an option we need to get a little creative with our code structure. The first thing we’re going to do is create a FileReader instance generator function.

function importFile(file) {
  var reader = generateFileReader();
  reader.onload = function(e) {
    processData(e.target.result);
  };
  reader.readAsText(file);
}

function generateFileReader() {
  return new FileReader();
}

Then we’ll move the onload callback to a named function.

function importFile(file) {
  var reader = generateFileReader();
  reader.onload = _readerOnloadHandler;
  reader.readAsText(file);
}

function generateFileReader() {
  return new FileReader();
}

function _readerOnloadHandler(e) {
  processData(e.target.result);
}

Now you can test the importFile function and its parts by stubbing out the generateFileReader function to return a basic reader stub, without having to worry about the native method. In the following example I’m using two simple stubbing helpers, stubMethod() and stubFunction(), to generate stub methods and functions; a rough sketch of those helpers follows the test.

it('parses files', function() {
  // Set up the stubs.
  var processStub = stubMethod('processData');
  // The second parameter of stubMethod is what generateFileReader
  // will return when it's called.
  var reader = stubMethod('generateFileReader', {
    onload: null,
    readAsText: stubFunction()
  });
  // Call the public method.
  importFile('/path/to/file');
  // Make assertions.
  assert.equal(reader.calledOnce(), true);
  assert.equal(reader.readAsText.calledOnce(), true);
  assert.equal(reader.readAsText.lastArguments()[0], '/path/to/file');
  // Call the onload callback as the FileReader would.
  reader.onload({ target: { result: 'file data' }});
  // Make assertions.
  assert.equal(processStub.calledOnce(), true);
  assert.equal(processStub.lastArguments()[0], 'file data');
});
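
The test above relies on two helpers, stubMethod() and stubFunction(), which aren’t shown. Here’s a rough sketch of what they might look like; these are hypothetical implementations which assume the functions under test live on the global window object and which don’t bother restoring the originals afterwards. In practice a library such as Sinon covers this ground.

// Returns a function which records its calls and returns retVal.
function stubFunction(retVal) {
  var stub = function() {
    stub._calls.push(Array.prototype.slice.call(arguments));
    return retVal;
  };
  stub._calls = [];
  stub.calledOnce = function() {
    return stub._calls.length === 1;
  };
  stub.lastArguments = function() {
    return stub._calls[stub._calls.length - 1];
  };
  return stub;
}

// Replaces the named global function with a recording stub which
// returns retVal, and copies the recorder helpers onto retVal so the
// test can make assertions on the object handed to the code under test.
function stubMethod(name, retVal) {
  retVal = retVal || {};
  var stub = stubFunction(retVal);
  window[name] = stub;
  retVal.calledOnce = stub.calledOnce;
  retVal.lastArguments = stub.lastArguments;
  return retVal;
}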

Splitting up the code in this way makes unit testing possible because you are essentially wrapping the native function call in a function which you are able to stub out. Happy testing!

It’s common when moving from one version of your application to another that you will want to maintain all of the SEO cred you have built up while simultaneously moving to a new URL syntax. To do this people usually reach for mod_rewrite with Apache, or the equivalent rules in nginx, for which there is quite a bit of documentation. Unfortunately the same can’t be said for rewriting and 301 redirecting when using HAProxy.

I have a rather common use case: I plan on moving this blog from Tumblr to Ghost using the Ghost Juju charm, with the HAProxy charm handling load balancing, reverse proxying, and rewriting and redirecting the old Tumblr-style URLs to the Ghost URL format.

Using mod_rewrite you would likely write something similar to the following to handle rewriting the URL to the new syntax and redirecting with a 301 response code:

RewriteEngine On  
RewriteRule ^/post/\d+/(.+)/? http://example.com/$1  [R=301,L]

HAProxy, however, doesn’t have a single rule for rewrite-and-redirect; instead we have to combine reqrep, to rewrite the URL, with redirect, to handle the actual redirection.

Assume the following frontend and backend configurations:

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend haproxy_service

backend haproxy_service
    balance leastconn
    cookie SRVNAME insert
    server ghost-0-2368 10.0.3.220:2368 maxconn 100 cookie S0 check

In order to rewrite the URL we first need to add the reqrep rule to the frontend:

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend haproxy_service
    reqrep ^([^\ :]*)\ /post/\d+/(.+)/?     \1\ /\2

This will rewrite the old Tumblr-style URL format to the new Ghost-style format and pass that URL off to the Ghost web server. If you’re OK with users still seeing and using the old URL style then you can stop here; both the real Ghost URL format and the old Tumblr-style format will work. If, however, you want to tell users and any search engines that the old URL is no longer valid and to use the new one instead, we need to add the redirect rule:

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend haproxy_service
    reqrep ^([^\ :]*)\ /post/\d+/(.+)/?     \1\ /\2
    redirect prefix / code 301

The HAProxy redirect syntax requires us to specify what kind of redirect we want to occur. The options are ‘location’, ‘prefix’, and ‘scheme’, and none of these truly fits redirecting an old URL to a new one. Fortunately we can trick HAProxy into doing just what we want by telling it to redirect by changing the prefix of the URL, passing / as the prefix along with the response code we want to send, 301.

We aren’t quite done. If we leave this as is it will redirect every request, including those already using the Ghost URL format, which will put it into a redirect loop. In order to fix this we need to create an access control list so that only the old URLs are redirected:

frontend haproxy-0-80
    bind 0.0.0.0:80
    default_backend haproxy_service
    acl old_url path_beg /post
    reqrep ^([^\ :]*)\ /post/\d+/(.+)/?     \1\ /\2
    redirect prefix / code 301 if old_url

To the frontend we added a new ACL called “old_url” which matches when the path begins with /post. We then add the conditional “if old_url” to the redirect rule and we’re done. After restarting the HAProxy service you’ll be able to use the old URL structure and be 301 redirected to the new Ghost-style URLs, which also remain functional.
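
Once HAProxy has been restarted you can sanity-check the behaviour from the command line. The paths below are hypothetical and the exact response headers will vary, but an old-style URL should come back as a 301 pointing at the rewritten path, while the equivalent new-style URL is served by Ghost directly:

# Old Tumblr-style URL: expect a 301 with a Location header for the new path.
curl -sI http://example.com/post/12345/my-post-title

# New Ghost-style URL: expect a normal response from the Ghost backend.
curl -sI http://example.com/my-post-title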

In trying to resolve this issue I spent many hours reading the HAProxy documentation, reading blog posts, and testing. I even created a Server Fault question, which I have now updated with the solution, so I hope this post will save others a bunch of time. As always, if you have any questions or comments please comment below or mention me on Twitter @fromanegg.