Juju is brilliant. OK, I am a little biased, given that I work at Canonical on the Juju project, but every week I’m more and more impressed with how awesome Juju is and how easy it makes developing software and working in the cloud. I tweet and post a bunch about Juju, but today I was asked to explain what Juju is to someone as if they were five.

Juju is often described as apt-get for the cloud, but what does someone who isn’t familiar with the Ubuntu ecosystem know about apt-get? I think I’ll need to go even more abstract…

Let’s say that you had built the most awesome Lego race car body (kids still play with Lego, right?) but you didn’t know how to make the wheels or make it move with one of those Mindstorms engines. Now you have to go and play around for a long time to learn how to make a wheel and how to hook up engines. But this is going to take a long time, and your mom is going to call you for supper soon. There has to be someone who is an expert wheel maker and Mindstorms engine builder, right? Wouldn’t it be awesome if they could build wheels and engines you could use in your race car so you can finish it before supper?

Well, that is what Juju does. It allows people who have expertise in a specific field to build packages that you can connect to your own projects without needing to be an expert in that field yourself. So how does this help you write software faster in the cloud? I think that’s best explained with another, more grown-up, example.

Recently, I wrote a Juju charm for the Ghost blogging platform so that I could move this blog off of Tumblr and onto something a little more customizable. The problem? I needed a front-end server capable of load balancing the web servers when the load picks up, and I didn’t have the time to learn all about the various options and the best way to install and configure them. So I went to what’s known as the Juju Charm Browser, picked the haproxy charm, and added it to my environment. With multiple web servers I could no longer rely on Ghost’s built-in SQLite implementation, so I needed to hook up an external MySQL database. Back to the Charm Browser I went and grabbed the MySQL charm.

So now I have a load-balanced, horizontally scalable Ghost blog (coming soon). You can have one too, and it’s incredibly easy. To get your very own horizontally scalable, load-balanced Ghost blog, all you have to do is execute these commands:

juju deploy ghost
juju deploy haproxy
juju deploy mysql
juju add-relation ghost haproxy
juju add-relation ghost mysql

Let’s pretend for a moment that haproxy isn’t cutting it any longer and you instead want to use apache2:

juju destroy-service haproxy
juju deploy apache2
juju add-relation ghost apache2

Maybe your blog is super popular and you need another five web servers:

juju add-unit ghost -n 5

That’s it. You have now taken advantage of many people’s domain expertise to build a cloud environment for your own blog.

So what if you wanted to use MySQL or any of these other charms for a different application? That’s the best part: these charms are written using the best practices for the particular service, but they expose hooks that are easy to interface with. To enable the Ghost charm to communicate with the haproxy charm, all I had to write was:

#!/usr/bin/env node
// Relation hook: tell the related charm which port and address Ghost uses.
var exec = require('child_process').exec;
var port, address;

function storePort(err, returnedPort) {
  if (err) { throw err; }
  port = JSON.parse(returnedPort); // --format=json output needs parsing
  exec('unit-get --format=json private-address', storeAddress);
}

function storeAddress(err, returnedAddress) {
  if (err) { throw err; }
  address = JSON.parse(returnedAddress);
  exec('relation-set port=' + port + ' hostname=' + address);
}

exec('config-get --format=json port', storePort);

Juju charms can be written in anything that can be executed. The Ghost charm was written in JavaScript, the MySQL one in Bash. Others use Python, Puppet, Chef, or Ansible; even Docker containers can be orchestrated using a Juju charm.

Want to run your own wiki?

juju deploy mediawiki
juju deploy mysql
juju deploy haproxy
juju add-relation mediawiki mysql
juju add-relation mediawiki haproxy

How about a MongoDB cluster, a Hadoop cluster, a Django app, a video-transcoding cluster, or your own applications? All are easily deployable and scalable across public clouds like EC2, HP Cloud, and Joyent, your private OpenStack cloud, and even your very own local machine. That’s right: the above commands all work to deploy identical setups to all of these targets and more.

This just scratched the surface of the power of Juju, but I hope this glimpse has made you interested enough to go do some exploring of your own. You can find the documentation to get started with Juju here. And as always, if you have any questions or comments, you can comment below, find me on Twitter @fromanegg, on G+ +Jeff Pihach, or hop into #juju on irc.freenode.net and ask away.

Last week we released a new version of the Juju GUI which brings with it one major UI change plus a huge refactoring of the application state system.

If you have any questions about Juju or the Juju GUI, you can read the official documentation for Juju and join us on freenode.net in #juju and #juju-gui.

Since Facebook released Flux there has been a lot of chatter about the unidirectional data flow architecture and how it helps large-scale applications be easier to reason about and develop. With the Juju GUI nearing 70,000 lines of code in the core application, we were running into an issue: it was becoming difficult to maintain correct state across the many rendered UI components, which are constantly being changed both by user interactions and by changes coming in over the websocket from the user’s Juju environment.

In an effort to remedy this, we determined that the only way to solve our current issues, and prevent new ones going forward, was to move from the event-driven MVC-style system we had in place to a unidirectional data flow architecture. The execution flow we decided on was as follows:

  1. When a user visits a URL, or a delta comes in over the websocket, it is parsed and split into disparate sections of state for each component involved.
  2. That state is then saved into the state system.
  3. When the state system changes, it diffs against its previous state and passes the diff off to the dispatcher.
  4. The dispatcher scans through the diff and passes off the various state sections to their registered handlers.
  5. Those handlers then pass the updated data into the UI components.
  6. The UI components are then responsible for updating their DOM representation.
  7. If the user makes a change to the UI, that UI component requests a change from the state system and the cycle repeats.

You’ll notice that this is strikingly similar to the Flux architecture.
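The core of that loop (steps 2 through 4) can be sketched in a few lines of JavaScript. This is a minimal illustration, not the Juju GUI’s actual code; `createDispatcher`, `createState`, and the shallow diff are my own names and simplifications:

```javascript
// Dispatcher: routes changed state sections to their registered handlers.
function createDispatcher() {
  var handlers = {};
  return {
    register: function (section, handler) {
      handlers[section] = handler;
    },
    dispatch: function (diff) {
      Object.keys(diff).forEach(function (section) {
        if (handlers[section]) {
          handlers[section](diff[section]);
        }
      });
    }
  };
}

// State system: stores the latest state and dispatches only what changed.
function createState(dispatcher) {
  var current = {};
  return {
    set: function (next) {
      var diff = {};
      Object.keys(next).forEach(function (key) {
        if (current[key] !== next[key]) {
          diff[key] = next[key];
        }
      });
      current = next;
      dispatcher.dispatch(diff); // only changed sections reach handlers
    }
  };
}
```

With this shape, a handler registered for, say, a `sidebar` section only fires when that section actually changes; unchanged sections never reach their handlers, which is what keeps the rendered components easy to reason about.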

It’s great to see a case of the multiple-discovery hypothesis in action; it’s a sign that you’re on the right track toward solving the bigger-picture problems. It’s also nice to see someone formalizing this architecture for client-side applications in the hope that others will be able to skip these scalability problems. While our implementation differs from Flux, the architecture is nearly identical. I highly recommend this architecture to anyone writing a large, complex application of any kind, client- or server-side, as it dramatically reduces the complexity of the application’s execution.

Do you work on a large application? Do you think this architecture could help simplify your app? Let me know in the comments below, @fromanegg, or +Jeff Pihach. Thanks for reading!

Anyone who is familiar with package versioning has used, or at the very least heard of, Semantic Versioning. For the uninitiated, semver is a three-part version number in the format MAJOR.MINOR.PATCH (e.g. 1.13.2), and you can find very in-depth details on the semver website.

The semver website outlines the rules for incrementing version numbers as follows. Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Your API typically returns a key/value pair for a requested field:

{ "foo": "bar" }

But now you decide to change the value returned by capitalizing the first character in the returned strings:

{ "foo": "Bar" }

What portion of the version should be updated with this change?

Jeff’s semver incrementing version number rules:

I inverted the rule set so that it can be read top to bottom: any later rule that applies overrides the previous one, moving the version section to bump up a level.
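Whichever rule set you read the decision from, the mechanics of the bump itself are simple. Here is a minimal sketch in JavaScript of the standard semver increments; `bumpVersion` and the change-kind names are my own illustrative choices, not part of any semver tooling:

```javascript
// Bump a semver string given the kind of change, per semver.org:
// 'breaking' -> MAJOR, 'feature' -> MINOR, 'fix' -> PATCH.
// Lower parts reset to 0 when a higher part is incremented.
function bumpVersion(version, changeKind) {
  var parts = version.split('.').map(Number); // [MAJOR, MINOR, PATCH]
  if (changeKind === 'breaking') {
    parts = [parts[0] + 1, 0, 0];
  } else if (changeKind === 'feature') {
    parts = [parts[0], parts[1] + 1, 0];
  } else { // 'fix'
    parts[2] += 1;
  }
  return parts.join('.');
}
```

Under a strict reading, capitalizing the returned strings changes output that consumers may already depend on, so it would arguably be a `'breaking'` change, taking 1.13.2 to 2.0.0.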

What do you think of this approach? Do you find it any easier to understand? Do you have any other rules to add? Or are you already using semver and have a different approach to deciding when to increase which number? Comment below or mention me on Twitter @fromanegg and let me know. Thanks for reading!

The official release date of Ubuntu 14.04 LTS Trusty Tahr is just over a week away at the time of writing. With that, many of you are going to want to install the next version of the best Linux operating system on your computers, and if you want to install it on metal alongside OSX on an Apple Mac or MacBook, this is the guide for you!

You’re going to need a few things before we get started:

Obligatory Warning

All of this information is provided without warranty of any kind. Always make and keep proper backups of your data.

Step 1

BACKUP YOUR COMPUTER

While it’s unlikely that an issue will occur which wipes the data on your disk, it’s always highly advisable to have a couple of quality backups just in case.

Go make another backup…I’ll wait.

DID YOU BACK UP YOUR DATA YET?

Step 2

Now that you have a quality backup safely tucked away, you will need to install rEFInd. Open the terminal, navigate to the location where you extracted the zip file, and then follow the installation instructions for OSX. Once completed, restart your computer to confirm that rEFInd was installed correctly. On rebooting, the rEFInd boot loader should load up; select the Apple logo to get back into OSX. (Sorry for the subpar photos.)

Step 3

Now that you’re back in OSX we need to take that iso of Ubuntu 14.04 that you downloaded and make a bootable USB stick. Follow the 10 steps outlined here to create the bootable stick. You’ll know when it’s ready because OSX will pop open a dialogue saying that it cannot read the device.

Step 4

You need a place to put Ubuntu on your computer, so you’ll need to create a partition on your hard drive. The size of this partition will depend on the size of your disk and what you plan to do in Ubuntu, but it should be at least 10GB to give you some wiggle room (mine is 100GB). To do so you will use the OSX tool ‘Disk Utility’. There are some dated, but still accurate instructions on creating this partition here.

Step 5

With the partition made, it’s now time to stick the USB stick into your computer and reboot. After rebooting you should land on the rEFInd boot loader again, with a few more options than before. If you do not see these options, reboot again while holding the “option” key.

Your options may look a little different, but you want to pick one of the options provided from the USB stick (there are three provided in this image). For Haswell-equipped machines, pick the option which reads something along the lines of “Boot EFI\boot\grubx64.efi …” and hit enter. This will start another boot loader whose first option is “Install Ubuntu”; hit enter to select it. After a little while you should be in the Ubuntu installer; follow the steps until the installer asks where you would like to install Ubuntu.

Note: If, after progressing through the install process, you find that it boots to a black screen, start again from Step 5 but choose an installer option without “EFI” in the path name.

Step 6

Note: partitioning will be a little different for everyone so if you get confused hop onto IRC in #ubuntu on freenode.net or create a question on http://askubuntu.com for some help.

When you step through the installer you will get to a pane which asks where you want to install Ubuntu. The options should be pretty self-explanatory, but if there isn’t an option to install Ubuntu into your new partition, you will need to take the manual route. Select the manual partition option and you should be shown a screen which looks like this:

As you can see from this list, I created a 100GB partition originally and then created a 3GB swap partition. To take the partition you created in OSX and carve a swap out of it, select it, hit the “Change” button, and shrink it by 3GB. This will leave you with 3GB of “free space”. Click the “+” button and create a new partition of type “swap”. You’ll want to make sure your primary partition is of type “ext4” and that its mount point is “/”.

Continue on from here installing Ubuntu to your new partition.

Step 7

After the installation has completed remove the USB stick from your computer and reboot. Now when rEFInd shows up you should have an extra option with the Ubuntu logo. Click it and boot into Ubuntu.

Step 8

Once you have logged into Ubuntu, click the ‘System Settings’ icon in the launcher bar on the left. Select ‘Software and Updates’ and then select the ‘Additional Drivers’ tab. After this tab loads you should see a proprietary driver for the Broadcom wireless, select it and click ‘Apply’. Reboot the computer and you should now be able to click the network icon in the top right of the desktop to connect to a wireless connection.

Congratulations, you now have Ubuntu installed alongside OSX!

If you run into any problems along the way, or have any questions, your first stop should be your search engine of choice; there are thousands of great resources for Ubuntu scattered around on many topics. You will also find a ton of great people from the Ubuntu community hanging out in the #ubuntu room on IRC on freenode.net, or if you prefer to ask direct, detailed questions, check out http://askubuntu.com. I hope that this guide has helped you get up and running with Ubuntu. Please comment below with any questions or comments, or find me on Twitter @fromanegg. Thanks for reading!