A significant portion of my day-to-day work revolves around code reviews and helping other developers debug issues. When doing this I will usually open up all of the files that have changed between the current branch and master so that I can get an understanding of the problem as a whole. After a decade of doing this I finally got sick of doing it manually… there must be a better way!

I wanted to check out a branch and open all of the changed files in vscode automatically. After a bit of poking around I found the git diff flag --name-only, which returns the list of changed files. The command ended up like the following:

code -n . `git --no-pager diff --name-only master`

Command breakdown

code -n .

Opens up a new window of vscode with the current project folder open in the ‘explorer’ section.

git --no-pager

Disables the pager that git uses by default when printing diffs (you may not need this flag).

--name-only master

Displays the list of files that have changed from master.

Now the next logical step is to make this a bash command and add some customization. Below I’ve defined the command as a function with the ability to define an argument for the branch name and another to filter the list of files. I’ve found the filter to be especially useful when trying to avoid things like test metadata and lock file updates.

# vscodedevsync [branch] [filter regex]
function vscodedevsync() {
  code -n . `git --no-pager diff --name-only ${1:-master} | grep "$2"`
}
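To sanity-check what the function feeds to VS Code, here's a sketch that builds a throwaway repo and runs the same diff-and-filter pipeline; the repo, branch, and file names here are made up for illustration:

```shell
# Scratch repo with a feature branch that changes three files.
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m init
git checkout -q -b feature
touch app.js app.test.js notes.md
git add . && git commit -q -m "add files"

# Every file changed relative to master (what vscodedevsync opens by default).
git --no-pager diff --name-only master

# Filtered the way `vscodedevsync master '\.js$'` would filter it.
git --no-pager diff --name-only master | grep '\.js$'
```

The backticks hand whichever list you produce to `code -n .` as arguments, opening all of the files in one new window.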

Anyways, hope this helps add a bit of efficiency into your day.

Running and developing an Ubuntu based workload on Mac OS has never been easier. Canonical has released a new tool called Multipass which allows you to quickly spin up Ubuntu Server virtual machines on Ubuntu, Mac OS and Windows. The following instructions will get you an Ubuntu Server VM up and running, and the Ubuntu file system mounted to Mac OS so that you can work in the Mac OS UI using your regular development tools like VS Code.

Mac OS

You’ll first need to get Multipass installed by visiting the Multipass website and downloading and installing the package on your Mac OS host. Once installed, open the terminal on your Mac OS host and run the following to download and install the latest LTS release of Ubuntu Server.

multipass launch --name ubuntu
multipass shell ubuntu


Ubuntu

Now that we're connected to the VM we need to install the NFS server.

sudo apt install nfs-kernel-server -y

Now we need to create the folder that we’re going to work from in the home directory of our new Ubuntu VM and open up the permissions on it.

mkdir -p ~/code
sudo chmod -R 777 ~/code

This folder needs to be exported from the VM’s file system which is done by appending the following content to the /etc/exports file. If your VM has a different IP range than what is shown below you can simply update the command below to match your environment.

echo "/home/ubuntu/code 192.168.64.1(rw,fsid=0,insecure,no_subtree_check,all_squash,async,anonuid=1000,anongid=1000)" | sudo tee -a /etc/exports

Then we have to export the folder and restart the NFS service.

sudo exportfs -a
sudo service nfs-kernel-server restart

Create a temporary file so you can see if your mount worked successfully later.

touch ~/code/test

Mac OS

In another terminal window on your Mac OS host we need to mount our VM’s code folder. Replace <VM IP> with the VM’s IP address (multipass list will show it) and <UserName> with your Mac OS user name.

mkdir -p ~/code
sudo mount -t nfs -o resvport <VM IP>:/home/ubuntu/code /Users/<UserName>/code

To keep the folder mounted after the VM is restarted, append the following to /etc/fstab on your Mac OS host:

echo "<VM IP>:/home/ubuntu/code /Users/<UserName>/code nfs resvport,rw,rsize=8192,wsize=8192,timeo=14,intr" | sudo tee -a /etc/fstab

Now you should be able to see the test file that you created previously in Ubuntu from Mac OS.

ls -al ~/code

From the terminal on your Mac OS host you can now open these folders like they live in Mac OS with your code editor of choice.

code ~/code

Tips & Tricks

On a day-to-day basis the most efficient way to work with these files is to perform your heavy IO interactions, like git clones and builds, from within your new Ubuntu VM. This can be done by leaving a terminal open that is shelled into it.

The services that Multipass uses to create the VM on Mac OS allow you to oversubscribe the VM’s resources, so if you want the fastest VM possible you can give it all of your CPU cores and RAM as well as ample disk space. The following command will give the new Ubuntu VM 16 cores, 100GB of disk space and 16GB of RAM while allowing the host and other Multipass VMs to share the same resources.

multipass launch -c 16 -d 100G -m 16G --name ubuntu

At the time of writing you cannot resize the VM’s disk space so you’ll want to give it more than you think you’ll need.

Whenever you write any code that is to be consumed by another, whether it be a library or some UI element, that consumer expects it to work in a certain way every time they interact with it. All good developers would agree, and that’s why we also write tests: unit tests, which break our code up into chunks and check that each chunk works as expected, or end-to-end tests, which exercise the entire lifecycle.

Anyone who has written unit tests for long enough knows that they are tedious to keep in sync with refactors and often take a disproportionate amount of time compared to the time it took to write the functional code. I propose that we focus less on unit tests and replace them with what I’m calling the user contract of your code.

What is a user contract?

The consumer expects that when they perform action X, they receive outcome Y. Typically they are not concerned about how X became Y just that it does so reliably. This is what I’m calling the user contract. If we as the authors of the code take the same view from a testing perspective, it allows us to write simpler tests and gives us the ability to refactor how a library or UI component works without having to update our tests, dramatically speeding up refactoring.

While these examples are written in JavaScript, the same techniques apply to all languages.

Library example

Starting with a simple library that another developer may be using…

export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

function _queryDBForUserList() {
  // Fetch content from the database.
}

function _formatUserList(userList) {
  // Reformat the data as returned from the database.
}

A consumer of this API would have a couple of expectations:

- Calling it does not block; it returns a promise.
- The promise resolves with the user list in the documented format.

These expectations then outline what your tests are:

describe('fetchUserList', () => {
  it('does not block');
  it('returns in the correct format');
});

You should note that we don’t test any method that wasn’t exported, nor do we export methods simply for testing purposes.

To aid the user in understanding what this contract is you can outline it in the docblock for the exported function. This way it can be used to generate the documentation for your library and help outline what your test structure is.

/**
  Returns a formatted user list.
  @return {Object} The user list in the following format:
  { id: INT, name: STRING, favouriteColour: STRING }
*/
export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

We don’t explicitly test the _queryDBForUserList and _formatUserList functions as they are implementation details. If you were to change the type of database returning the user list, or the algorithm being used to format it, you should not have to also modify your tests, because the contract with your users has not changed. They still expect that if they call fetchUserList they will receive the list in the specified format.

UI Example

Let’s take a look at a UI component, this time using the React JavaScript library. In an effort to save space I’ve removed the functions that aren’t exported, which also helps to illustrate their irrelevance to our testing strategy.

export const LogIn = ({ children }) => {
  const userIsLoggedIn = useSelector(isLoggedIn);
  const userIsConnecting = useSelector(isConnecting);

  const button = _generateButton(userIsConnecting);

  if (!userIsLoggedIn) {
    return (
      <div className="login">
        <img className="login__logo" src={logo} alt="logo" />
        {button}
        {children}
      </div>
    );
  }
  return children;
};

This is a fairly simple component that renders a login button with a logo when the user is not logged in. Let’s go through the exercise and see what our user contract is:

- When the user is not logged in, render a logo and a button that logs the user in when clicked, along with any children passed to the component.
- When the user is logged in, render only the children passed to the component.

Our tests would be:

describe('LogIn', () => {
  describe('the user is not logged in', () => {
    it('renders a logo and button to log in');
    it('renders any children passed to it');
    it('clicking the button logs in');
  });
  describe('the user is logged in', () => {
    it('does not render a logo and button to log in');
    it('renders any children passed to it');
  });
});

Testing the returned value in a UI component is a little more nuanced than checking the return value of a library function. We don’t necessarily want to check every specific detail of each element returned unless it’s part of the contract. I’ll expand these tests with assertions but eschew the component setup and rendering in the interest of space.

it('renders a logo and button to log in', () => {
  expect(wrapper.find('.login button').length).toBe(1);
});
it('renders any children passed to it', () => {
  expect(wrapper.find('main .items').length).toBe(3);
});
it('clicking the button logs in', () => {
  wrapper.find('.login button').simulate('click', {});
  // Assert on the visible outcome of logging in, not on the internals.
});

It’s important to note here that we have tried to limit the specific details that aren’t relevant to the contract of the component. This allows the design to change and the contract to remain valid and we do not need to update the tests. This is especially beneficial when you have a shared component library within your company. You can update the designs and implementation details of your components without updating the tests.

What if I…


When writing the code and exporting methods, ask yourself if the user needs to have access to this method or if you’re only doing it for testing purposes. You can always export more methods, you can’t always take exported methods away.

When writing tests, ask yourself how the consumer can interact with your code and what outcome is expected for each of those interactions, and then make sure those are documented and asserted in your tests.

Don’t test implementation details of an exported method or UI component. Consider moving those to a different user contract if you feel they need direct testing.

More reading

I wrote some content many years ago which you may also find helpful:

An advantage to components, React or otherwise, is that they can be used multiple times in various contexts across your application. As the application grows and components get modified over time the component call signatures can drift from their original spec and the many call locations across the application can miss getting updated. Compound that with shallow rendering unit tests and you’ve got yourself a problem where parts of your application don’t work as expected because of invalid data being sent to the components, not because the components are broken themselves.

This was an issue that we ran into a few times with the Juju GUI. The Juju GUI is a web app with a large portion of it rendered using a number of parent React components, which then render the appropriate children based on the state of the application. When one of the children needs access to a method or data, it’s possible that it will need to be passed from the top of the application down through the parent and various children. This made it easy to miss updating a component’s call signature somewhere along that chain when it changed.

To remedy this, Francesco Banconi, one of my teammates, wrote a Python script to analyze all of the components included in the application and their call signatures, ensuring that everywhere a component is called, it is called with the appropriate props.

We simply added this to our lint target in the Makefile so that CI will fail if a branch attempts to modify a call signature without updating the rest of the application.

.PHONY: lint
lint: lint-python lint-components lint-js lint-css

.PHONY: lint-components
lint-components:
	@./scripts/inspect-components validate --path jujugui/static/gui/src/ --short

The GUI is over 150k lines of code without dependencies, and the script takes less than 300ms to run, outputting errors similar to the following so you can track them down quickly.

  instantiated at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/entity-details.js:98
  defined at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/content/content.js:21
  entityModel provided but not declared

component validation failed: 1 error found

We have found this hugely helpful in reducing errors in the shipped application with almost no overhead during development. Should we split this out into its own tool? Would you find this helpful in your own projects? Let me know @fromanegg. Thanks for reading.

After writing some great content you’ll want people to be able to find it on the various search engines and easily share on social networks. This is done by using the proper HTML, microdata, and meta tags to allow search engines and social networks to correctly parse the content on your page and display the correct summary content.

You’ll first want to start with the standard description and canonical tags as the basic fallback for anything parsing your page. The description tag is the summary description of the content on the page. The canonical tag tells search engines what the canonical URL for your post is in the event they arrived at it under a different URL.

<meta name="description" itemprop="description" content="Description of post">
<link rel="canonical" href="http://example.com/path/to/post"/>

HTML5 brought with it a number of new tags which we can use to better organize our markup and tell search engines a little more about the organization of our pages. The two most important for blog posts are the section and article tags. The section tag represents a generic section of the document, like a list of articles. The article tag represents the actual post content including the title, publication date, content, etc.

<section>
  <article>
    <h1><a href="permalink">Post Title</a></h1>
    <time datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
    <span>Your name</span>
    Post content.
  </article>
</section>

This doesn’t however tell the search engines what elements contain things like the link, published date, title, content, etc. To do this we need to rely on microdata and schema.org to fill in the blanks and describe the content in the markup.

Because this is a blog post we’ll start by labeling the actual blog post. By adding itemscope to the article tag you’re specifying that the content within the article tag is about a specific item, and the itemtype is the type of item you’re wrapping; in this case a BlogPosting, which has a list of properties that we can now define.

<article itemscope itemtype="http://schema.org/BlogPosting"></article>

The title is defined using the name property and the permalink is defined using the url property.

<h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>

To indicate the date that the content was first published we use the datePublished property, supplying the datetime value.

<time pubdate itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>

The author has a number of possible implementations, including one in HTML5 using the rel="author" attribute. In microdata it’s the author property.

 <span itemprop="author">Your name</span>
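Putting these microdata pieces together, the fully annotated post skeleton looks like the following. The articleBody property is one more BlogPosting property, added here to label the content itself:

```html
<section>
  <article itemscope itemtype="http://schema.org/BlogPosting">
    <h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>
    <time itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
    <span itemprop="author">Your name</span>
    <div itemprop="articleBody">Post content.</div>
  </article>
</section>
```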

Now moving beyond the microdata and schema.org definitions, to enable the best sharing experience on the social networks you’ll want to set up Twitter cards and Facebook Open Graph data.

The Twitter card metadata fields are as follows (taken from their documentation):

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@flickr" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:description" content="View the album on Flickr." />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />

And the Facebook Open Graph data meta fields are (taken from their documentation):

<meta property="og:url" content="http://www.example.com/post/1234" />
<meta property="og:type" content="article" />
<meta property="og:title" content="When Great Minds Don’t Think Alike" />
<meta property="og:description" content="How much does culture influence creative thinking?" />
<meta property="og:image" content="http://example.com/image.jpg" />
<meta property="article:published_time" content="1970-01-01T00:00:00+00:00" />

Thanks for reading, do you have any other tips for proper blog markup? Let me know below!