When dealing with data returned from an API, you don’t always have control over the exact structure, which can make it a challenge to add types to that data with TypeScript to improve the developer experience. I recently ran into this issue when hooking up the Juju Dashboard to a new API endpoint that sends deltas every time there is a change in the user’s environment.

The deltas are sent in the format [DeltaEntity, DeltaType, Delta], or more concretely [["application", "change", {...}], ["machine", "remove", {...}], ...]. The structure of the Delta portion of this tuple is determined by the combination of the DeltaEntity and DeltaType, so when dealing with this data you first need to run a conditional over those values to determine what to do with the Delta. To type these we can use a Type Predicate or a Discriminated Union. I ultimately landed on the discriminated union, but I’ll run through both approaches here.

Using a Type Predicate

Define the type for the whole delta tuple.

type AllWatcherDelta = [DeltaEntity, DeltaType, Delta];

Define the possible values for the entity and type using unions; yes, even the type predicate approach relies on unions as part of its typing. You also want to define an interface for each delta shape.

type DeltaEntity = "application" | "charm" | "unit";
type DeltaType = "change" | "remove";

interface UnitChangeDelta {
  name: string;
  ports: string[];
}
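The Delta type used by the tuple is simply the union of every per-entity delta shape. As a sketch, MachineRemoveDelta and its field are hypothetical placeholders, and UnitChangeDelta is repeated so the snippet stands alone:

```typescript
interface UnitChangeDelta {
  name: string;
  ports: string[];
}

// Hypothetical second shape so the union is meaningful; the real API has
// one interface per DeltaEntity/DeltaType combination.
interface MachineRemoveDelta {
  id: string;
}

// Delta is the union of every per-entity delta shape.
type Delta = UnitChangeDelta | MachineRemoveDelta;

// A Delta value can be either shape.
const example: Delta = { name: "etcd/0", ports: ["8080/tcp"] };
```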

Now we need to create a function that will be used as the predicate in a conditional to determine the data type while working with the delta values. These functions take, at a minimum, the data you want to narrow, plus any additional data needed to determine that type. The function must return a boolean: if it returns true, change will be treated as a UnitChangeDelta inside the conditional; otherwise it won’t be.

function isUnitChangeDelta(
  delta: AllWatcherDelta,
  change: Delta
): change is UnitChangeDelta {
  return delta[0] === "unit" && delta[1] === "change";
}

Using this approach you would then have a chain of conditionals checking for the appropriate type, and within each branch the value would be narrowed to the expected type. In this example delta[2], the actual delta in our tuple, will be properly typed as a UnitChangeDelta within the conditional.

if (isUnitChangeDelta(delta, delta[2])) {
  // delta[2] is typed as UnitChangeDelta here.
}

Using Discriminated Unions

Define all possible tuples of DeltaEntity, DeltaType, and Delta as a discriminated union.

type AllWatcherDelta =
  | ["unit", "change", UnitChangeDelta]
  | ["machine", "change", MachineChangeDelta]
  | ["application", "remove", ApplicationRemoveDelta];

Now when using the delta you need to check the values of each discriminant manually and TypeScript will correctly narrow the data. Here I’ve used switch statements but you could use any kind of conditional.

switch (delta[0]) {
  case "unit":
    switch (delta[1]) {
      case "change":
        // delta[2] is narrowed to UnitChangeDelta here.
        break;
    }
}


Using only a Discriminated Union is great when you have a known set of data to type. As the size or complexity of that data set grows, you might want to evaluate a Type Predicate instead, since the predicate function can execute whatever code you need to determine the type, including introspection of the data itself if you need to use a canary field in the object as the flag.
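That introspection approach can be sketched like this: the predicate ignores the tuple entirely and checks for a canary field on the object itself. The shapes and field names here are illustrative, not taken from the Juju API:

```typescript
interface UnitChangeDelta {
  name: string;
  ports: string[];
}

// Narrow by inspecting the data itself: the "ports" array acts as the
// canary that identifies a unit change delta.
function isUnitChangeDeltaByShape(change: unknown): change is UnitChangeDelta {
  return (
    typeof change === "object" &&
    change !== null &&
    "name" in change &&
    Array.isArray((change as { ports?: unknown }).ports)
  );
}

const payload: unknown = { name: "etcd/0", ports: ["8080/tcp"] };
if (isUnitChangeDeltaByShape(payload)) {
  // payload is typed as UnitChangeDelta inside this block.
  console.log(payload.ports.length);
}
```

Because the check reads the object rather than the tuple, it keeps working even if the entity and type strings arrive separately from the payload.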

If neither of these approaches quite gets you there, you can also look into adding Generics to the mix, but that’ll have to wait for another time.

No matter the approach you take, having properly typed dynamic data dramatically reduces a whole class of errors and increases the productivity of the developers consuming that data, so I feel it’s worth the investment.

Thanks for reading!

A significant portion of my day to day activities are based around code reviews and helping other developers debug issues. When doing this I will usually open all of the files that have changed between the current branch and master so that I can get an understanding of the problem as a whole. After a decade of doing this I finally got sick of doing it manually. There must be a better way! I wanted to check out a branch and have all of the changed files open in VS Code automatically. After a bit of poking around I found the git diff flag --name-only, which returns the list of changed files. The command ended up like the following:

code -n . `git --no-pager diff --name-only master`

Command breakdown

code -n .

Opens a new VS Code window with the current project folder open in the ‘explorer’ section.

git --no-pager

Disables the pager that git uses by default (you may not need this flag).

--name-only master

Displays the list of files that have changed relative to master.

Now the next logical step is to make this a bash command and add some customization. Below I’ve defined the command as a function with the ability to define an argument for the branch name and another to filter the list of files. I’ve found the filter to be especially useful when trying to avoid things like test metadata and lock file updates.

# vscodedevsync [branch] [filter regex to exclude files, e.g. "lock"]
function vscodedevsync() {
  code -n . `git --no-pager diff --name-only ${1:-master} | grep -v "${2:-^$}"`
}

Anyways, hope this helps add a bit of efficiency into your day.

Running and developing an Ubuntu-based workload on Mac OS has never been easier. Canonical has released a new tool called Multipass which allows you to quickly spin up Ubuntu Server virtual machines on Ubuntu, Mac OS and Windows. The following instructions will get an Ubuntu Server VM up and running, with the Ubuntu file system mounted on Mac OS so that you can work in the Mac OS UI using your regular development tools, like VS Code.

Mac OS

You’ll first need to get Multipass installed by visiting the Multipass website and downloading and installing the package on your Mac OS host. Once installed, open the terminal on your Mac OS host and run the following to download and install the latest LTS release of Ubuntu Server.

multipass launch --name ubuntu
multipass shell ubuntu


Ubuntu

After the VM has started, we can use the shell we opened in the previous step to install the NFS server inside it.

sudo apt install nfs-kernel-server -y

Now we need to create the folder that we’re going to work from in the home directory of our new Ubuntu VM and open up the permissions on it.

mkdir -p ~/code
sudo chmod -R 777 ~/code

This folder needs to be exported from the VM’s file system which is done by appending the following content to the /etc/exports file. If your VM has a different IP range than what is shown below you can simply update the command below to match your environment.

echo "/home/ubuntu/code 192.168.64.0/24(rw,fsid=0,insecure,no_subtree_check,all_squash,async,anonuid=1000,anongid=1000)" | sudo tee -a /etc/exports

Then we have to export the folder and restart the NFS service.

sudo exportfs -a
sudo service nfs-kernel-server restart

Create a temporary file so you can see if your mount worked successfully later.

touch ~/code/test

Mac OS

In another terminal window on your Mac OS host we need to mount our VM’s code folder. Replace <VM IP> with the VM’s IP address (multipass info ubuntu will show it) and <UserName> with your Mac OS username.

mkdir -p ~/code
sudo mount -t nfs <VM IP>:/home/ubuntu/code /Users/<UserName>/code

To keep the drive mounted after refreshing and restarting the VM, append an entry to /etc/fstab:

echo "<VM IP>:/home/ubuntu/code /Users/<UserName>/code nfs resvport,rw,rsize=8192,wsize=8192,timeo=14,intr" | sudo tee -a /etc/fstab

Now you should be able to see the test file that you created previously in Ubuntu from Mac OS.

ls -al ~/code

From the terminal on your Mac OS host you can now open these folders like they live in Mac OS with your code editor of choice.

code ~/code

Tips & Tricks

On a day to day basis the most efficient way to work with these files is to perform your heavy IO interactions, like git clones and builds, from within your new Ubuntu VM. This can be done by leaving a terminal open that is shelled into it.

The services that Multipass uses to create the VM on Mac OS allow you to oversubscribe the VM’s resources, so if you want the fastest VM possible you can give it all of your CPU cores and RAM as well as ample disk space. The following command gives the new Ubuntu VM 16 cores, 100GB of disk space and 16GB of RAM while allowing the host and other Multipass VMs to share the same resources.

multipass launch -c 16 -d 100G -m 16G --name ubuntu

At the time of writing you cannot resize the VM’s disk space so you’ll want to give it more than you think you’ll need.

Whenever you write code that is to be consumed by someone else, whether it’s a library or a UI element, that consumer expects it to work in a certain way every time they interact with it. All good developers would agree, and that’s why we also write tests: unit tests, which break our code into chunks and verify that each chunk works as expected, or end to end tests, which exercise the entire lifecycle.

Anyone who has written unit tests for long enough knows that they are tedious to keep in sync with refactors and often take a disproportionate amount of time compared to writing the functional code itself. I propose that we focus less on unit tests and replace them with tests of what I’m calling the user contract of your code.

What is a user contract?

The consumer expects that when they perform action X, they receive outcome Y. Typically they are not concerned about how X became Y just that it does so reliably. This is what I’m calling the user contract. If we as the authors of the code take the same view from a testing perspective, it allows us to write simpler tests and gives us the ability to refactor how a library or UI component works without having to update our tests, dramatically speeding up refactoring.

While these examples are written in JavaScript, the same techniques apply in any language.

Library example

Starting with a simple library that another developer may be using…

export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

function _queryDBForUserList() {
  // Fetch content from the database.
}

function _formatUserList(userList) {
  // Reformat the data as returned from the database.
}

A consumer of this API would have a couple of expectations:

- Calling it does not block; it returns a promise.
- The promise resolves with the user list in the documented format.

These expectations then outline what your tests are:

describe('fetchUserList', () => {
  it('does not block');
  it('returns in the correct format');
});

You should note that we don’t test any method that wasn’t exported, nor do we export methods simply for testing purposes.

To aid the user in understanding what this contract is you can outline it in the docblock for the exported function. This way it can be used to generate the documentation for your library and help outline what your test structure is.

/**
  Returns a formatted user list.
  @return {Object} The user list in the following format:
  { id: INT, name: STRING, favouriteColour: STRING }
*/
export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

We don’t explicitly test the _queryDBForUserList and _formatUserList functions as they are implementation details. If you were to change the type of database returning the user list, or the algorithm being used to format the user list, you should not have to also modify your tests as the contract with your users has not changed. They still expect that if they call fetchUserList they will receive the list in the specified format.
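To make that refactoring safety concrete, here’s a self-contained sketch; the User shape, the in-memory rows, and the helper bodies are illustrative assumptions, not the real implementation. The final assertions exercise only the exported function:

```typescript
type User = { id: number; name: string; favouriteColour: string };

// Implementation details: swap the "database" or the formatter and the
// contract assertions below still pass unchanged.
async function _queryDBForUserList(): Promise<Array<[number, string, string]>> {
  // Stand-in for a real database query.
  return [
    [1, "Ada", "teal"],
    [2, "Grace", "navy"],
  ];
}

function _formatUserList(rows: Array<[number, string, string]>): User[] {
  return rows.map(([id, name, favouriteColour]) => ({ id, name, favouriteColour }));
}

export async function fetchUserList(): Promise<User[]> {
  return _formatUserList(await _queryDBForUserList());
}

// The contract test: only the exported function is called, and only the
// promised shape is asserted.
fetchUserList().then((users) => {
  console.assert(users.length === 2);
  console.assert(users.every(
    (u) => typeof u.id === "number" && typeof u.favouriteColour === "string"
  ));
});
```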

UI Example

Let’s take a look at a UI component, this time using the React JavaScript library. In an effort to save space I’ve removed the functions that aren’t exported, which also helps to illustrate their irrelevance to our testing strategy.

export const LogIn = ({ children }) => {
  const userIsLoggedIn = useSelector(isLoggedIn);
  const userIsConnecting = useSelector(isConnecting);

  const button = _generateButton(userIsConnecting);

  if (!userIsLoggedIn) {
    return (
      <div className="login">
        <img className="login__logo" src={logo} alt="logo" />
        {button}
      </div>
    );
  }
  return children;
};

This is a fairly simple component that renders a login button with a logo. Let’s go through the exercise and see what our User Contract is:

- When the user is not logged in, it renders a logo and a button that logs the user in when clicked.
- When the user is logged in, it renders the children passed to it.

Our tests would be:

describe('LogIn', () => {
  describe('the user is not logged in', () => {
    it('renders a logo and button to log in');
    it('renders any children passed to it');
    it('clicking the button logs in');
  });
  describe('the user is logged in', () => {
    it('does not render a logo and button to log in');
    it('renders any children passed to it');
  });
});

Testing the returned value of a UI component is a little more nuanced than checking the return value of a library function. We don’t necessarily want to check every specific detail of each element returned unless it’s part of the contract. I’ll expand these tests with assertions but eschew the component setup and rendering in the interest of space.

it('renders a logo and button to log in', () => {
  expect(wrapper.find('.login button').length).toBe(1);
});
it('renders any children passed to it', () => {
  expect(wrapper.find('main .items').length).toBe(3);
});
it('logs the user in', () => {
  wrapper.find('.login button').simulate('click', {});
  // Assert on the outcome, e.g. that the login action was dispatched.
});

It’s important to note here that we have tried to limit assertions on specific details that aren’t relevant to the contract of the component. This allows the design to change while the contract remains valid, without us needing to update the tests. This is especially beneficial when you have a shared component library within your company: you can update the designs and implementation details of your components without updating the tests.

What if I…


When writing code and exporting methods, ask yourself whether the user needs access to a method or whether you’re only exporting it for testing purposes. You can always export more methods later; you can’t always take exported methods away.

When writing tests, ask yourself how the consumer can interact with your code and what outcome is expected for each interaction, then make sure those interactions are documented and asserted in your tests.

Don’t test implementation details of an exported method or UI component. Consider moving those to a different user contract if you feel they need direct testing.

More reading

I wrote some content many years ago which you may also find helpful:

An advantage of components, React or otherwise, is that they can be used multiple times in various contexts across your application. As the application grows and components get modified over time, component call signatures can drift from their original spec, and the many call sites across the application can miss getting updated. Compound that with shallow-rendering unit tests and you’ve got a problem where parts of your application don’t work as expected because of invalid data being passed to the components, not because the components themselves are broken.

This was an issue we ran into a few times with the Juju GUI. The Juju GUI is a web app largely rendered by a number of parent React components which then render the appropriate children based on the state of the application. When one of the children needs access to a method or data, it may need to be passed from the top of the application down through the parent and various children. This made it easy to miss updating a component’s call signature somewhere when it changed.

To remedy this, Francesco Banconi, one of my teammates, wrote a Python script to analyze all of the components included in the application and their call signatures, ensuring that every place a component is called, it is called with the appropriate props.

We simply added this to our lint target in the Makefile so that CI will fail if a branch attempts to modify a call signature without updating the rest of the application.

.PHONY: lint
lint: lint-python lint-components lint-js lint-css

.PHONY: lint-components
lint-components:
	@./scripts/inspect-components validate --path jujugui/static/gui/src/ --short

The GUI is over 150k lines of code without dependencies, and the script takes less than 300ms to run, outputting errors similar to the following so you can track them down quickly.

  instantiated at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/entity-details.js:98
  defined at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/content/content.js:21
  entityModel provided but not declared

component validation failed: 1 error found

We have found this hugely helpful in reducing errors in the shipped application with almost no overhead during development. Should we split this out into its own tool? Would you find it helpful in your own projects? Let me know @fromanegg. Thanks for reading!