July 13th, 2016
The setup for this site is fairly basic. I use Jekyll to generate the site statically, rsync it to a VPS, and use nginx to serve the content straight from disk. No dynamic content; everything happens client-side.
Because the setup is so simple, loading pages ends up being pretty fast out of the box. I use some poor man’s asset management to ensure that images are cached, but until today, my configuration had very few bells and whistles.
My team has been investigating the impact of potential performance improvements on the web, and since I only code off-the-clock these days, I wanted to try my hand with some newer technologies, too.
HTTPS / LetsEncrypt
HTTPS isn’t a performance improvement directly, but since browsers don’t support HTTP/2 without an encrypted connection, I needed to have an SSL certificate for this page. Also, who runs a website that’s not encrypted? HTTPS: Not just for shopping carts anymore.
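Incidentally, forcing HTTPS on the nginx side is tiny; a minimal sketch (the server name is a placeholder for your own):

```nginx
# Redirect all plain-HTTP traffic to the HTTPS version of the site.
server {
    listen 80;
    server_name mattspitz.me;
    return 301 https://$host$request_uri;
}
```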
I’ve added SSL to a variety of sites before, and it’s a pain. LetsEncrypt is a certificate authority that popped up within the last year and lets you generate SSL certificates for free and allegedly-painlessly. The existing documentation for LetsEncrypt was surprisingly unhelpful. There are several ways to verify ownership of a domain, but none of them were documented on the site, and the documentation for the recommended certbot client didn’t explain what it was doing under the hood. I ended up following a DigitalOcean tutorial instead. Even then, it felt pretty hacky to run so much of it as root.
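For reference, the webroot verification flow that tutorials like DigitalOcean’s walk through boils down to one command. This is a sketch, with the webroot path and domain as placeholders:

```shell
# certbot writes a challenge token under /.well-known/acme-challenge/
# in the given webroot; LetsEncrypt fetches it over plain HTTP to prove
# you control the domain, then issues the certificate.
sudo certbot certonly --webroot -w /var/www/mattspitz.me -d mattspitz.me
```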
Once I got the LetsEncrypt certificates up and running, setting up auto-renewal was a piece of cake. Excellent. Theoretically, I’ll never have to worry about an expiring certificate again!
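For the curious, auto-renewal can be a single crontab entry; a sketch, assuming nginx needs a reload to pick up the new certificate:

```shell
# Try renewal twice a day; certbot only actually renews certs that are
# close to expiry, and the post-hook reloads nginx to load the new cert.
0 3,15 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```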
gzip
In running a few tests on WebPagetest, I found that I wasn’t even compressing text over the wire with something like gzip. HTTP/2 compresses headers at the protocol level, but it doesn’t compress response bodies, so gzip is good hygiene regardless of which protocol the browser speaks. Rookie mistake.
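Fixing it is a few lines of nginx config; a sketch (the types and thresholds here are my own choices, not gospel):

```nginx
# Compress text responses on the fly. text/html is gzipped by default
# once gzip is on; other MIME types have to be listed explicitly.
gzip on;
gzip_comp_level 5;    # decent ratio without burning CPU
gzip_min_length 256;  # tiny responses aren't worth compressing
gzip_vary on;         # let caches key on Accept-Encoding
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
```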
HTTP/2
HTTP/2 is the long-awaited protocol update for HTTP that has recently become broadly available. It’s supported by most modern browsers, multiplexes many requests over a single connection per domain, and has header compression built into the protocol. The HTTP/1.1 spec technically recommends no more than two connections per domain, but most browsers don’t conform to that. Still, Chrome only allows six connections per domain, and it’s common to be downloading more than six files at once from a single domain.
Here’s what the network graph looks like in HTTP/1.1 on one of the meatier pages on this site:
And here’s what it looks like on HTTP/2:
Look at how sweet and parallelized that is. Disqus eats most of the time it takes to load the page fully, but you could imagine having lots more images, JS, and CSS to load and really seeing the benefits of a multiplexed HTTP/2 connection.
Setting up HTTP/2 wasn’t as easy as it seemed. According to this post, all I needed was nginx >= v1.9.5 and to add http2 to my listen directive. When I added the mainline nginx PPA and upgraded to v1.11, a quick check on HTTP/2 Test confirmed that HTTP/2 was enabled… but it wasn’t working in Chrome. You can see HTTP/2 sessions in chrome://net-internals/#http2, and this site never showed up there while I browsed it, nor was I seeing any of that sweet parallelization in the graphs.
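The config change itself really is one word on the listen directive; a sketch, with certificate paths assuming LetsEncrypt’s default layout:

```nginx
server {
    # "http2" is the only addition over a plain SSL setup.
    listen 443 ssl http2;
    server_name mattspitz.me;

    ssl_certificate     /etc/letsencrypt/live/mattspitz.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mattspitz.me/privkey.pem;
}
```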
As it turns out, Firefox would happily use HTTP/2 in this setup, but Chrome won’t, because it only supports the most recent version of connection negotiation (ALPN), not the deprecated one (NPN). The reason (explained very well here) is that the version of nginx you find in the mainline PPA is the most recent code, but it’s built against an older version of OpenSSL that doesn’t support ALPN. Go figure. I didn’t want to build nginx myself, so I found a different PPA that was explicitly compiled against a newer version of OpenSSL.
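Two quick diagnostics help when debugging this; a sketch (the s_client check needs network access, so it’s shown commented out, and the host is a placeholder):

```shell
# ALPN support landed in OpenSSL 1.0.2, so check what the client has:
openssl version

# Then confirm the server actually negotiates h2 via ALPN; look for
# "ALPN protocol: h2" in the output (replace the host with your own):
# echo | openssl s_client -alpn h2 -connect mattspitz.me:443 2>/dev/null | grep -i alpn
```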
This really shouldn’t have been as complicated as it was, but it’s a new technology, and I’m glad I went through the pain required to set it all up. Enjoy encryption and slightly faster browsing on mattspitz.me!