Whenever you write code that will be consumed by someone else, whether it's a library or a UI element, that consumer expects it to work in a certain way every time they interact with it. All good developers would agree, and that's why we also write tests: either unit tests, which break our code into chunks and verify that each chunk works as expected, or end-to-end tests, which exercise the entire lifecycle.

Anyone who has written unit tests for long enough knows that they are tedious to keep in sync with refactors and often take a disproportionate amount of time compared to writing the functional code itself. I propose that we focus less on unit tests and replace them with what I'm calling the user contract of your code.

What is a user contract?

The consumer expects that when they perform action X, they receive outcome Y. Typically they are not concerned with how X became Y, just that it does so reliably. This is what I'm calling the user contract. If we as the authors of the code take the same view from a testing perspective, we can write simpler tests and gain the ability to refactor how a library or UI component works without having to update our tests, dramatically speeding up refactoring.

While these examples are written in JavaScript, the same techniques apply to all languages.

Library example

Starting with a simple library that another developer may be using…

export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

function _queryDBForUserList() {
  // Fetch content from the database.
}

function _formatUserList(userList) {
  // Reformat the data as returned from the database.
}

A consumer of this API would have a couple of expectations:

- Calling fetchUserList does not block; it returns a promise.
- The promise resolves with the user list in the correct format.

These expectations then outline what your tests are:

describe('fetchUserList', () => {
  it('does not block');
  it('returns in the correct format');
});

You should note that we don’t test any method that wasn’t exported, nor do we export methods simply for testing purposes.
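To make the outline concrete, here is one way it could be filled in with assertions. This is only a sketch: it assumes Jest as the test runner, a test database seeded with users (setup omitted), an illustrative import path, and the field format documented in the docblock below.

import { fetchUserList } from './user-list'; // Illustrative path.

describe('fetchUserList', () => {
  it('does not block', () => {
    // An async function hands back a promise immediately rather than the list itself.
    const result = fetchUserList();
    expect(result).toBeInstanceOf(Promise);
    return result; // Let the runner wait for it to settle.
  });

  it('returns in the correct format', async () => {
    const userList = await fetchUserList();
    expect(Array.isArray(userList)).toBe(true);
    userList.forEach((user) => {
      expect(typeof user.id).toBe('number');
      expect(typeof user.name).toBe('string');
      expect(typeof user.favouriteColour).toBe('string');
    });
  });
});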

To help the user understand what this contract is, you can outline it in the docblock for the exported function. This way it can be used to generate the documentation for your library and also serves as an outline of your test structure.

/**
  Returns a formatted user list.
  @return {Promise<Object[]>} The user list, with each entry in the following format:
  { id: INT, name: STRING, favouriteColour: STRING }
*/
export async function fetchUserList() {
  const userList = await _queryDBForUserList();
  return await _formatUserList(userList);
}

We don't explicitly test the _queryDBForUserList and _formatUserList functions as they are implementation details. If you were to change the type of database returning the user list, or the algorithm used to format it, you should not have to modify your tests, because the contract with your users has not changed. They still expect that if they call fetchUserList they will receive the list in the specified format.
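For example, the internals could be swapped out entirely, say by fetching users over HTTP instead of querying the database directly, and the tests above would stay exactly as they are. A sketch, with a made-up endpoint and helper names:

export async function fetchUserList() {
  const userList = await _fetchUserListFromAPI();
  return _formatUserList(userList);
}

async function _fetchUserListFromAPI() {
  // The endpoint is illustrative only.
  const response = await fetch('/api/users');
  return response.json();
}

function _formatUserList(userList) {
  // Map the API response into the documented shape.
  return userList.map(({ id, name, favouriteColour }) => ({
    id,
    name,
    favouriteColour
  }));
}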

UI example

Let's take a look at a UI component, this time using the React JavaScript library. In an effort to save space I've removed the functions that aren't exported, which also helps to illustrate their irrelevance to our testing strategy.

import React from 'react';
import { useSelector } from 'react-redux';

// The selectors, the logo asset, and _generateButton are defined elsewhere and
// elided here.
export const LogIn = ({ children }) => {
  const userIsLoggedIn = useSelector(isLoggedIn);
  const userIsConnecting = useSelector(isConnecting);

  const button = _generateButton(userIsConnecting);

  if (!userIsLoggedIn) {
    return (
      <>
        <div className="login">
          <img className="login__logo" src={logo} alt="logo" />
          {button}
        </div>
        <main>{children}</main>
      </>
    );
  }
  return children;
};

This is a fairly simple component that renders a login button with a logo. Let's go through the exercise and see what our user contract is:

- When the user is not logged in, it renders a logo and a button to log in, it renders any children passed to it, and clicking the button logs the user in.
- When the user is logged in, it renders only the children passed to it.

Our tests would be:

describe('LogIn', () => {
  describe('the user is not logged in', () => {
    it('renders a logo and button to log in');
    it('renders any children passed to it');
    it('clicking the button logs in');
  });
  describe('the user is logged in', () => {
    it('does not render a logo and button to log in');
    it('renders any children passed to it');
  });
});

Testing the returned value of a UI component is a little more nuanced than checking the return value of a library function. We don't necessarily want to check every specific detail of each element returned unless it's part of the contract. I'll expand these tests with assertions but eschew the component setup and rendering in the interest of space; a sketch of that setup follows the assertions below.

it('renders a logo and button to log in', () => {
  expect(wrapper.find('.login__logo').length).toBe(1);
  expect(wrapper.find('.login button').length).toBe(1);
});
it('renders any children passed to it', () => {
  expect(wrapper.find('main .items').length).toBe(3);
});
it('clicking the button logs in', () => {
  wrapper.find('.login button').simulate('click', {});
  expect(useSelector(isLoggedIn)).toBe(true);
});
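For reference, the elided setup and rendering could look something like the following. This is only a sketch: it assumes Enzyme, react-redux, and redux-mock-store, and the module path, state shape, and test children are illustrative.

import React from 'react';
import { mount } from 'enzyme';
import { Provider } from 'react-redux';
import configureStore from 'redux-mock-store';
import { LogIn } from './log-in'; // Illustrative path.

const mockStore = configureStore([]);

const renderComponent = (state) =>
  mount(
    <Provider store={mockStore(state)}>
      <LogIn>
        <div className="items" />
        <div className="items" />
        <div className="items" />
      </LogIn>
    </Provider>
  );

// e.g. in a beforeEach for the 'not logged in' block:
// wrapper = renderComponent({ isLoggedIn: false, isConnecting: false });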

It's important to note here that we have tried to avoid asserting on specific details that aren't relevant to the contract of the component. This allows the design to change while the contract remains valid, without us needing to update the tests. This is especially beneficial when you have a shared component library within your company: you can update the designs and implementation details of your components without updating the tests.

What if I…

Conclusion

When writing code and exporting methods, ask yourself whether the user needs access to a method or whether you're only exporting it for testing purposes. You can always export more methods later; you can't always take exported methods away.

When writing tests, ask yourself how the consumer can interact with your code and what outcome is expected for each interaction, then make sure those interactions are documented and have assertions in your tests.

Don't test implementation details of an exported method or UI component. Consider moving those to a different user contract if you feel they need direct testing.

More reading

I wrote some content many years ago which you may also find helpful:

An advantage of components, React or otherwise, is that they can be used multiple times in various contexts across your application. As the application grows and components get modified over time, component call signatures can drift from their original spec, and the many call sites across the application can miss getting updated. Compound that with shallow-rendering unit tests and you've got yourself a problem where parts of your application don't work as expected because of invalid data being sent to the components, not because the components themselves are broken.
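As a purely illustrative sketch (not code from the Juju GUI), assuming components declare their expected props with propTypes, the drift looks like this; the component and prop names are made up.

import React from 'react';
import PropTypes from 'prop-types';

// The component declares its call signature...
const UserCard = ({ user, onSelect }) => (
  <button onClick={() => onSelect(user.id)}>{user.name}</button>
);

UserCard.propTypes = {
  onSelect: PropTypes.func.isRequired,
  user: PropTypes.object.isRequired
};

// ...but this call site never got updated when that signature changed: the
// required `user` prop is missing and an undeclared `account` prop is passed
// instead, a mismatch that tests of UserCard in isolation won't catch.
const UserList = ({ account, selectUser }) => (
  <UserCard account={account} onSelect={selectUser} />
);

export default UserList;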

This was an issue that we ran into a few times with the Juju GUI. The Juju GUI is a web app largely rendered by a number of parent React components, which then render the appropriate children based on the state of the application. When one of the children needs access to a method or some data, it may need to be passed from the top of the application down through the parent and various children. This made it easy to miss updating a component's call signature when it changed.

To remedy this, one of my teammates, Francesco Banconi, wrote a Python script to analyze all of the components included in the application and their call signatures, ensuring that every place a component is called, it is called with the appropriate props.

We simply added this to our lint target in the Makefile so that CI will fail if a branch attempts to modify a call signature without updating the rest of the application.

.PHONY: lint
lint: lint-python lint-components lint-js lint-css

.PHONY: lint-components
lint-components:
	 @./scripts/inspect-components validate --path jujugui/static/gui/src/ --short

The GUI is over 150k lines of code without dependencies, and the script takes less than 300ms to run. It outputs errors similar to the following, allowing you to track them down quickly.

EntityContent:
  instantiated at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/entity-details.js:98
  defined at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/content/content.js:21
  entityModel provided but not declared

component validation failed: 1 error found

We have found this hugely helpful in reducing errors in the shipped application, with almost no overhead during development. Should we split this out into its own tool? Would you find it helpful in your own projects? Let me know @fromanegg. Thanks for reading.

After writing some great content you'll want people to be able to find it on the various search engines and to share it easily on social networks. This is done by using the proper HTML, microdata, and meta tags, which allow search engines and social networks to correctly parse the content on your page and display the right summary.

You'll first want to start with the standard description and canonical tags as the basic fallback for anything parsing your page. The description tag holds the summary description of the content on the page. The canonical tag tells search engines what the canonical URL for your post is, in case they arrived at it under a different URL.

<meta name="description" itemprop="description" content="Description of post">
<link rel="canonical" href="http://example.com/path/to/post"/>

HTML5 brought with it a number of new tags which we can use to better organize our markup and tell search engines a little more about the organization of our pages. The two most important for blog posts are the section and article tags. The section tag represents a generic section of the document, like a list of articles. The article tag represents the actual post content including the title, publication date, content, etc.

<section>
  <article>
    <header>
      <h1><a href="permalink">Post Title</a></h1>
      <time datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
      <span>Your name</span>
    </header>
    Post content.
  </article>
</section>

This doesn't, however, tell the search engines which elements contain things like the link, published date, title, content, and so on. To do this we need to rely on microdata and schema.org to fill in the blanks and describe the content in the markup.

Because this is a blog post we'll start by labeling the actual blog post. By adding itemscope to the article tag you're specifying that the content within it is about a specific item, and itemtype declares what type of item that is: in this case a BlogPosting, which has a set of properties that we can now define.

<article itemscope itemtype="http://schema.org/BlogPosting"></article>

The title is defined using the name property and the permalink using the url property.

<h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>

To indicate the date the content was first published we use the datePublished property, supplying the datetime value.

<time pubdate itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>

The author can be marked up in a number of ways, including HTML5's rel="author" link relation. In microdata it's the author property.

 <span itemprop="author">Your name</span>

Now moving beyond the microdata and schema.org definitions, to enable the best sharing experience on the social networks you'll want to set up Twitter cards and Facebook Open Graph data.

The Twitter card metadata fields are as follows (taken from their documentation):

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@flickr" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:description" content="View the album on Flickr." />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />

And the Facebook Open Graph data meta fields are (taken from their documentation):

<meta property="og:url" content="http://www.example.com/post/1234" />
<meta property="og:type" content="article" />
<meta property="og:title" content="When Great Minds Don’t Think Alike" />
<meta property="og:description" content="How much does culture influence creative thinking?" />
<meta property="og:image" content="http://example.com/image.jpg" />
<meta property="article:published_time" content="1970-01-01T00:00:00+00:00">

Thanks for reading, do you have any other tips for proper blog markup? Let me know below!

At the time of writing, this blog is hosted on GitHub and they do not support serving HTTPS on custom domains. But because there are many reasons why every website should be served over HTTPS, this guide will show you how I got HTTPS for this blog on GitHub.

First you'll need a couple of things:

  - A GitHub Pages site set up with a custom domain.
  - A free Cloudflare account.
  - Access to your domain registrar so you can change the nameservers.

Then follow these steps:

  1. After signing up with Cloudflare you'll be prompted to add your domain, at which point it'll scan your DNS records automatically. You'll want to make sure that it has all of them and that they are correct by cross referencing them with your current DNS provider.
  2. Switch to the Crypto tab and change the SSL type to Flexible.
  3. Update the Nameservers at your domain registrar to point to the ones provided by Cloudflare in your setup steps.
  4. Redirect all of your HTTP traffic to HTTPS using Cloudflare by adding a Page Rule. You'll want to add a rule which looks like http://*example.com/* and then add a setting for Always Use HTTPS. After clicking Save and Deploy, all requests to the HTTP version of your site will be 301 redirected to the HTTPS version (see the Cloudflare knowledge base for details).
  5. Set up a canonical URL on each page so that, whatever path a visitor or crawler takes to reach the site, search engines know which URL is the primary one to index. To do that add <link rel="canonical" href="http://example.com/path/to/post"/> to the head of each page.
  6. Update the paths for your assets so that they are requested over HTTPS, otherwise browsers will refuse to load them as mixed content.

Once the DNS records propagate you'll be able to visit your website at https://example.com. You might notice that you're still served the github.com certificate for a while; I found that it took a few hours for a new certificate to be issued by Cloudflare.

The one caveat here is that the connection between Cloudflare and GitHub is not over HTTPS. However, these steps will still protect your users from an unscrupulous ISP or from snooping on shared networks like coffee-shop Wi-Fi. And as GitHub themselves say, “GitHub Pages sites shouldn't be used for sensitive transactions like sending passwords or credit card numbers” anyway.

I hope you enjoyed this post, if you have any questions or comments let me know on Twitter @fromanegg or comment below. Thanks for reading!

Welcome to v2 of the From An Egg blog!

Built with Hugo, a static site generator built with Go, v2 is considerably faster and nicer to look at than its v1 counterpart which was hosted by Tumblr. This new version is also being served over HTTPS and I'll be releasing the theme sometime in the future. Thanks for stopping by!