After writing some great content you’ll want people to be able to find it on the various search engines and easily share it on social networks. This is done by using the proper HTML, microdata, and meta tags to allow search engines and social networks to correctly parse the content on your page and display the correct summary content.

You’ll first want to start with the classic description and canonical tags as the basic fallback for anything parsing your page. The description tag provides a summary of the content on the page. The canonical tag tells search engines what the canonical URL for your post is, in case they arrived at it under a different URL.

<meta name="description" itemprop="description" content="Description of post">
<link rel="canonical" href="http://example.com/path/to/post"/>

HTML5 brought with it a number of new tags which we can use to better organize our markup and tell search engines a little more about the organization of our pages. The two most important for blog posts are the section and article tags. The section tag represents a generic section of the document, like a list of articles. The article tag represents the actual post content including the title, publication date, content, etc.

<section>
  <article>
    <header>
      <h1><a href="permalink">Post Title</a></h1>
      <time datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
      <span>Your name</span>
    </header>
    Post content.
  </article>
</section>

This, however, doesn’t tell search engines which elements contain things like the link, published date, title, content, and so on. To do this we need to rely on microdata and schema.org to fill in the blanks and describe the content in the markup.

Because this is a blog post we’ll start by labeling the actual blog post. By adding itemscope to the article tag you’re specifying that the content within the article tag is about a specific item, and the itemtype is the type of item you’re wrapping. In this case that’s a BlogPosting, which has a set of properties that we can now define.

<article itemscope itemtype="http://schema.org/BlogPosting"></article>

The title is defined using the name property and the permalink is defined using the url property.

<h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>

To indicate the date that the content was first published we use the datePublished property, supplying the datetime value.

<time pubdate itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>

The author has a number of different implementations, including one in HTML5, the rel attribute. In microdata it’s the author property.

<span itemprop="author">Your name</span>
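
Putting all of those pieces together, the fully annotated version of the earlier example looks like this:

<section>
  <article itemscope itemtype="http://schema.org/BlogPosting">
    <header>
      <h1 itemprop="name"><a href="permalink" itemprop="url">Post Title</a></h1>
      <time pubdate itemprop="datePublished" content="1970-01-01T00:00:00+00:00" datetime="1970-01-01T00:00:00+00:00">January 1 1970</time>
      <span itemprop="author">Your name</span>
    </header>
    Post content.
  </article>
</section>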

Now, moving beyond the microdata and schema.org definitions: to enable the best sharing experience on the social networks you’ll want to set up Twitter Cards and Facebook Open Graph data.

The Twitter card metadata fields are as follows (taken from their documentation):

<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@flickr" />
<meta name="twitter:title" content="Small Island Developing States Photo Submission" />
<meta name="twitter:description" content="View the album on Flickr." />
<meta name="twitter:image" content="https://farm6.staticflickr.com/5510/14338202952_93595258ff_z.jpg" />

And the Facebook Open Graph data meta fields are (taken from their documentation):

<meta property="og:url" content="http://www.example.com/post/1234" />
<meta property="og:type" content="article" />
<meta property="og:title" content="When Great Minds Don’t Think Alike" />
<meta property="og:description" content="How much does culture influence creative thinking?" />
<meta property="og:image" content="http://example.com/image.jpg" />
<meta property="article:published_time" content="1970-01-01T00:00:00+00:00">

Thanks for reading! Do you have any other tips for proper blog markup? Let me know below!

At the time of writing, this blog is hosted on GitHub Pages, which does not support serving HTTPS on custom domains. But because there are many reasons why every website should be served over HTTPS, this guide will show you how I got HTTPS for this blog on GitHub Pages.

First you’ll need a couple of things:

  1. A site hosted on GitHub Pages with a custom domain pointed at it.
  2. A free Cloudflare account.

Then follow these steps:

  1. After signing up with Cloudflare you’ll be prompted to add your domain, at which point it’ll scan your DNS records automatically. You’ll want to make sure that it has all of them and that they are correct by cross-referencing them with your current DNS provider.
  2. Switch to the Crypto tab and change the SSL type to Flexible.
  3. Update the Nameservers at your domain registrar to point to the ones provided by Cloudflare in your setup steps.
  4. Redirect all of your http traffic to https using Cloudflare by adding a Page Rule. You’ll want to add a rule which looks like http://*example.com/* and then add a setting for Always Use HTTPS. After clicking Save and Deploy, all requests to the http version of your site will be 301 redirected to the https version. See the Cloudflare knowledge base for details.
  5. Set up a canonical URL on each page so that web crawlers know that, whichever path brought the user to the page, the canonical URL is the primary one that should be indexed. To do that add <link rel="canonical" href="http://example.com/path/to/post"/> to the head of each page.
  6. Update the paths for your assets so that they are requested over https, or browsers won’t load them as mixed content on a secure page; see the example after this list.
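
For example, an asset hard-coded to the http scheme like the one below will be blocked once the page is served over https, so switch it to https (the js/app.js path is just for illustration):

<!-- blocked as mixed content on an https page -->
<script src="http://example.com/js/app.js"></script>

<!-- loads correctly -->
<script src="https://example.com/js/app.js"></script>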

Once the DNS records propagate you’ll be able to visit your website at https://example.com. You might notice that you’re still being served the github.com certificate for a while; I found that it took a few hours for the new certificate to be issued by Cloudflare.

The one caveat here is that the connection between Cloudflare and GitHub is not over https. However, these steps will still protect your users from an unscrupulous ISP or someone snooping at a coffee shop. And as GitHub themselves say, “GitHub Pages sites shouldn’t be used for sensitive transactions like sending passwords or credit card numbers” anyway.

I hope you enjoyed this post, if you have any questions or comments let me know on Twitter @fromanegg or comment below. Thanks for reading!

Welcome to v2 of the From An Egg blog!

Built with Hugo, a static site generator written in Go, v2 is considerably faster and nicer to look at than its v1 counterpart, which was hosted on Tumblr. This new version is also being served over HTTPS, and I’ll be releasing the theme sometime in the future. Thanks for stopping by!

Over the holidays I’ve been working on a small project, playing with some of the new JavaScript libraries that came out over the past year. After a while I noticed that the size of the JavaScript I was sending to the client was growing and starting to approach 100 KB for a basic isomorphic website. I figured now was a good time to look into minification and compression.

The site starts out by loading in the following raw code:

Riot.js: 64 KB
Page.js: 14 KB
Client code: 14 KB
Total: 92 KB

After Browserify was done rolling all of the code up into a single file it was ~92 KB, which was getting a little large for a website that basically did nothing. The first step was to add minification to the Makefile using UglifyJS2.

# Minify the Browserify bundle with UglifyJS2
$(CLIENT_MIN_ROLLUP): $(CLIENT_ROLLUP)
	$(UGLIFYJS) --screw-ie8 $^ -o $@
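
This rule assumes a few variables are defined elsewhere in the Makefile; hypothetical definitions might look something like this (the file paths are made up for illustration):

UGLIFYJS = ./node_modules/.bin/uglifyjs
CLIENT_ROLLUP = build/app.js
CLIENT_MIN_ROLLUP = build/app.min.js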

This step brought it down from 92 KB to 44 KB, shaving off over 50% of the original size. This is still quite a lot of code for such a simple site, so the next step is to add gzip compression for everything being sent. I am using Express 4.0 as the web server, so adding gzip is as easy as:

import express from 'express';
import compression from 'compression';
const app = express();
app.use(compression()); // gzip every response the server sends

After adding gzip the data sent over the wire went down to an impressive 14 KB. That’s only 15% of the original size, a savings of 78 KB for about 2 minutes’ worth of work. This really shows that no matter the size of your website, the cost/benefit of implementing even basic minification and compression is well worth it. If you have any questions or comments leave them in the comments below or mention me on Twitter @fromanegg. Thanks for reading!

When writing code which needs to be built before it can be used, whether that’s a transpile step like ES7 JavaScript to ES5, or a compile step like with Go, you’re likely going to want to do this when a file in your application tree is modified. There are a large number of project- and language-specific tools developed to tackle this problem, but did you know that there are system-level packages available that you can use across all your projects?

Introducing inotifywait, an efficient and easy-to-use CLI tool which uses Linux’s inotify interface to watch for changes to the file system, and fswatch for those on OS X. Most of the language- and project-specific tools are built as wrappers around these two tools.

If you’re like me and don’t like to add more build tools and layers of abstraction than necessary, then you’re probably already using Make to build and develop your application, and you’ll be happy to know that using these tools with it is trivial. Make has no way to know when a file has changed until the next time you run make, so many have tried something like the following, which will run the build target every 2 seconds.

watch -n 2 make build

Or they will build the loop into the makefile.

.PHONY: watch
watch:
	while true; do \
		make build --silent; \
		sleep 1; \
	done

This works, but it’s performing a lot of unnecessary work by being run in a loop, especially since the file system is able to tell us when a file or directory has been modified using inotify. Instead of looping blindly we can wait for a file system event that we’re interested in and then run our build target. In the following code we create a make target in our makefile which will watch for file changes under our specified directory recursively.

.PHONY: watch
watch:
	while true; do \
		inotifywait -qr -e modify -e create -e delete -e move app/src; \
		make build; \
	done

This works by creating an infinite loop which is started when you run the watch target. Before the first iteration can finish it hits inotifywait, which sets up listeners on all of the files and directories in your app/src directory. These listeners wait for any files to be modified, created, deleted, or moved in or out of app/src/…. When a file or directory changes, inotifywait exits and lets the loop continue, triggering the call to your build target. That target executes in its entirety, building only the file(s) that changed (assuming you’ve properly set up your makefile), and then the loop starts again with inotifywait waiting for those files to change.
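
For those on OS X, a roughly equivalent target can be built around fswatch. Here’s a sketch (the watch-osx target name is just an example); fswatch’s -1 flag makes it exit after the first batch of events so the loop can run the build, and its output is discarded since we only care that something changed:

.PHONY: watch-osx
watch-osx:
	while true; do \
		fswatch -1 -r app/src > /dev/null; \
		make build; \
	done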

Using this technique will allow you to create an easy and efficient file change watcher for your makefile without too many additional tools. If you have any questions or comments leave them in the comments below or mention me on Twitter @fromanegg. Thanks for reading!