This site (as well as my personal one) is built on Hugo and hosted on S3. I love this combination for a number of reasons:

  1. It’s incredibly fast to build. (This site, as I’m writing it, builds in 26 ms. Milliseconds. That’s insane.)
  2. It’s incredibly cheap. My S3 hosting costs are less than $5 a month.
  3. Deployment is easy. Each post is a flat .md file, and it takes two CLI commands to build and deploy the site.

Let’s talk about that third one a bit, because S3 gets an unfair amount of flak for being a little rough around the edges.

As far as I’m concerned, the S3 tool to end all S3 tools is s3cmd – it’s free, it’s been around forever, and it’s incredibly powerful.
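If you don’t have it yet, s3cmd is on PyPI and in most package managers, and it has a one-time interactive setup that stores your AWS credentials. A minimal first-run sketch (assuming you install via pip; Homebrew and apt work just as well):

```shell
# Install s3cmd (also available via brew, apt, etc.)
pip install s3cmd

# One-time interactive setup: prompts for your AWS access key and
# secret key, then writes the configuration to ~/.s3cfg
s3cmd --configure
```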

Once you download it and set it up, the command to sync a folder (in this case the public/ folder generated by Hugo once you build the site) with an S3 bucket is actually pretty simple:

s3cmd sync ./ s3://BUCKETNAME --acl-public --recursive --skip-existing

There’s a bunch of stuff going on here:

  • We specify the current folder as the source explicitly with ./, rather than leaving it implicit.
  • We specify the destination bucket with s3://BUCKETNAME (BUCKETNAME is a placeholder, so just replace it with whatever your bucket is called.)
  • We specify --acl-public so that all of the uploaded files are accessible to the public.
  • We specify --recursive so that it syncs everything in that folder.
  • We specify --skip-existing so that files already sitting in the bucket are skipped outright – when I write a new post, only the handful of new pages get uploaded, not the entire site. (One caveat: this flag skips any file that exists at the destination, even if its contents have changed. sync’s default checksum comparison already avoids reuploading unchanged files, so leave this flag off whenever you’ve edited existing pages.)

But that’s it. You should be able to take this command and use it for your Hugo deployment as well. Have fun!
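Putting it all together, the full build-and-deploy flow sketches out to the two commands mentioned above. (The bucket name example.com here is a placeholder – swap in your own.)

```shell
# Build the site; Hugo writes the generated pages to public/
hugo

# Sync the generated output up to the S3 bucket
cd public
s3cmd sync ./ s3://example.com --acl-public --recursive --skip-existing
cd ..
```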
