An advantage of components, React or otherwise, is that they can be used multiple times in various contexts across your application. As the application grows and components are modified over time, their call signatures can drift from the original spec, and the many call sites across the application can be missed when updating them. Compound that with shallow-rendering unit tests and you’ve got yourself a problem where parts of your application don’t work as expected, not because the components themselves are broken, but because invalid data is being sent to them.

This was an issue that we ran into a few times with the Juju GUI. The Juju GUI is a web app, a large portion of which is rendered by a number of parent React components that render the appropriate children based on the state of the application. When one of those children needs access to a method or to data, it may need to be passed from the top-level application down through the parent and various children. This made it easy to miss updating a call site somewhere in that chain when a component’s call signature changed.

To remedy this, Francesco Banconi, one of my teammates, wrote a Python script that analyzes all of the components included in the application, along with their call signatures, to ensure that every place a component is instantiated it is being passed the appropriate props.

We simply added this to the lint target in our Makefile so that CI will fail if a branch attempts to modify a call signature without updating the rest of the application.

.PHONY: lint
lint: lint-python lint-components lint-js lint-css

.PHONY: lint-components
lint-components:
	 @./scripts/inspect-components validate --path jujugui/static/gui/src/ --short

The GUI is over 150k lines of code, excluding dependencies, yet the script takes less than 300ms to run. It outputs errors similar to the following, allowing you to track them down quickly.

EntityContent:
  instantiated at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/entity-details.js:98
  defined at: /juju-gui/jujugui/static/gui/src/app/components/entity-details/content/content.js:21
  entityModel provided but not declared

component validation failed: 1 error found
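
To give a rough idea of how a check like this can work, here’s a minimal, hypothetical sketch in Python. It is not the actual inspect-components script: it assumes components declare their props via Component.propTypes = {...} and are instantiated with JSX attributes of the form prop={...}, and a real implementation would need to handle far more cases.

#!/usr/bin/env python3
# Hypothetical sketch only; not the real inspect-components script.
import re
import sys
from pathlib import Path

# Matches `Foo.propTypes = { bar: ..., baz: ... }` (no nested braces handled).
DECLARATION = re.compile(r'(\w+)\.propTypes\s*=\s*\{([^}]*)\}')
# Matches `<Foo bar={...} baz={...}` (simple, brace-wrapped props only).
CALL_SITE = re.compile(r'<([A-Z]\w*)((?:\s+\w+=\{[^{}]*\})*)')
PROP_KEY = re.compile(r'(\w+)\s*[:=]')


def validate(path):
    declared, call_sites, errors = {}, [], 0
    for source in Path(path).rglob('*.js'):
        text = source.read_text()
        # Record the props each component declares.
        for component, body in DECLARATION.findall(text):
            declared[component] = set(PROP_KEY.findall(body))
        # Record the props passed at each call site.
        for component, props in CALL_SITE.findall(text):
            call_sites.append((source, component, set(PROP_KEY.findall(props))))
    for source, component, props in call_sites:
        if component not in declared:
            continue
        for prop in sorted(props - declared[component]):
            print('{}:\n  instantiated at: {}\n  {} provided but not declared'
                  .format(component, source, prop))
            errors += 1
    if errors:
        print('component validation failed: {} error(s) found'.format(errors))
    return errors


if __name__ == '__main__':
    sys.exit(1 if validate(sys.argv[1]) else 0)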

We have found this hugely helpful in reducing errors in the shipped application, with almost no overhead during development. Should we split this out into its own tool? Would you find it helpful in your own projects? Let me know @fromanegg. Thanks for reading!

After writing some great content you’ll want people to be able to find it on the various search engines and share it easily on social networks. This is done by using the proper HTML, microdata, and meta tags, which allow search engines and social networks to correctly parse the content on your page and display an accurate summary.

You’ll first want to start with the description and canonical tags as the basic fallback for anything parsing your page. The description tag is a summary of the content on the page. The canonical tag tells search engines what the canonical URL for your post is, in case they arrived at it under a different URL.

<meta name="description" itemprop="description" content="Description of post">
<link rel="canonical" href="http://example.com/path/to/post"/>

HTML5 brought with it a number of new tags which we can use to better organize our markup and tell search engines a little more about the structure of our pages. The two most important for blog posts are the section and article tags. The section tag represents a generic section of the document, like a list of articles. The article tag represents the actual post content, including the title, publication date, body, and so on.

<section>
  <article>
    <header>
      <h1><a href="permalink">Post Title</a></h1>
      <time datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
      <span>Your name</span>
    </header>
    Post content.
  </article>
</section>

This doesn’t, however, tell search engines which elements contain things like the link, published date, title, and content. To do that we need to rely on microdata and schema.org to fill in the blanks and describe the content in the markup.

Because this is a blog post we’ll start by labeling the actual blog post. Adding itemscope to the article tag specifies that the content within it is about a specific item, and itemtype specifies the type of item being described: in this case a BlogPosting, which has a set of properties that we can now define.

<article itemscope itemtype="http://schema.org/BlogPosting"></article>

The title is defined using the name property and the permalink using the url property.

<h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>

To indicate the date the content was first published we use the datePublished property, supplying the datetime value.

<time pubdate itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>

The author has a number of possible implementations, including the rel attribute in HTML5. In microdata it’s the author property.

<span itemprop="author">Your name</span>

Moving beyond the microdata and schema.org definitions, to enable the best sharing experience on social networks you’ll want to set up Twitter cards and Facebook Open Graph data.

The Twitter card metadata fields are as follows (taken from their documentation):

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@flickr" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:description" content="View the album on Flickr." />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />

And the Facebook Open Graph data meta fields are (taken from their documentation):

<meta property="og:url" content="http://www.example.com/post/1234" />
<meta property="og:type" content="article" />
<meta property="og:title" content="When Great Minds Don’t Think Alike" />
<meta property="og:description" content="How much does culture influence creative thinking?" />
<meta property="og:image" content="http://example.com/image.jpg" />
<meta property="article:published_time" content="1970-01-01T00:00:00+00:00" />

Thanks for reading, do you have any other tips for proper blog markup? Let me know below!

At the time of writing, this blog is hosted on GitHub Pages, which does not support serving https on custom domains. But because there are many reasons why every website should be served over https, this guide will show you how I got https for this blog on GitHub.

First you’ll need a couple of things: a Cloudflare account (the free plan is enough) and a custom domain whose nameservers you can change at your registrar.

Then follow these steps:

  1. After signing up with Cloudflare you’ll be prompted to add your domain, at which point it will scan your DNS records automatically. You’ll want to make sure that it found all of them and that they are correct by cross-referencing them with your current DNS provider.
  2. Switch to the Crypto tab and change the SSL type to Flexible.
  3. Update the Nameservers at your domain registrar to point to the ones provided by Cloudflare in your setup steps.
  4. Redirect all of your http traffic to https using a Cloudflare Page Rule. You’ll want to add a rule which matches http://*example.com/* and then add a setting for Always Use HTTPS. After clicking Save and Deploy, all requests to the http version of your site will be 301 redirected to the https version (see the Cloudflare knowledge base for details).
  5. Set up a canonical URL on each page so that web crawlers know that, whichever path brought the user to the page, the canonical URL is the primary one to index. To do that add <link rel="canonical" href="https://example.com/path/to/post"/> to the head of each page.
  6. Update the paths for your assets so that they are requested over https, or browsers will refuse to load them as mixed content.

Once the DNS records propagate you’ll be able to visit your website at https://example.com. You might notice that you’re still being served the github.com certificate for a while; I found that it took a few hours for Cloudflare to issue a new certificate.
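
If you want to confirm that the Always Use HTTPS rule from step 4 is active, here’s a quick, hypothetical check using Python’s requests library (with example.com standing in for your own domain):

# Confirm that plain http requests are being 301 redirected to https.
import requests

response = requests.get('http://example.com/', allow_redirects=False)
print(response.status_code)              # expect 301
print(response.headers.get('Location'))  # expect the https:// version of the URL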

The one caveat here is that the connection between Cloudflare and GitHub is not over https. These steps will still protect your users from an unscrupulous ISP or from snooping on shared networks like coffee shop Wi-Fi, but as GitHub themselves say, "GitHub Pages sites shouldn’t be used for sensitive transactions like sending passwords or credit card numbers" anyway.

I hope you enjoyed this post, if you have any questions or comments let me know on Twitter @fromanegg or comment below. Thanks for reading!

Welcome to v2 of the From An Egg blog!

Built with Hugo, a static site generator written in Go, v2 is considerably faster and nicer to look at than its v1 counterpart, which was hosted on Tumblr. This new version is also served over HTTPS, and I’ll be releasing the theme sometime in the future. Thanks for stopping by!

Over the holidays I’ve been working on a small project, playing with some of the new JavaScript libraries that came out over the past year. After a while I noticed that the size of the JavaScript I was sending to the client was growing, approaching 100KB for a basic isomorphic website. I figured now was a good time to look into minification and compression.

The site starts out by loading in the following raw code:

Riot.js 64KB
Page.js 14KB
client code 14KB
Total 92KB

After Browserify rolled all of the code up into a single file it was ~92KB, which was getting a little large for a website that basically did nothing. The first step was to add minification to the Makefile using UglifyJS2.

$(CLIENT_MIN_ROLLUP): $(CLIENT_ROLLUP)
	$(UGLIFYJS) --screw-ie8 $^ -o $@

This step brought it down from 92KB to 44KB, shaving off over 50% of the original size. That’s still quite a lot of code for such a simple site, so the next step was to add gzip compression to everything being sent. I am using Express.js 4.0 as the web server, so adding gzip is as easy as:

import express from 'express';
import compression from 'compression';
const app = express();
app.use(compression()); // gzip every response
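
As a rough local estimate of what gzip will save before deploying, a small hypothetical Python check against the minified bundle (the path is made up) works; the real numbers still come from the browser’s network tab:

# Rough local estimate of the gzip savings on the minified bundle.
import gzip
from pathlib import Path

raw = Path('dist/bundle.min.js').read_bytes()  # hypothetical bundle path
compressed = gzip.compress(raw)
print('{:.0f}KB minified, {:.0f}KB gzipped'.format(len(raw) / 1024, len(compressed) / 1024))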

After adding gzip, the data sent over the wire went down to an impressive 14KB. That’s only 15% of the original size, a savings of 78KB for a total of about two minutes’ worth of work. This really shows that no matter the size of your website, the cost/benefit of implementing even basic minification and compression is well worth it. If you have any questions or comments, leave them in the comments below or mention me on Twitter @fromanegg. Thanks for reading!