Hosting a Static Website on CloudFront

I recently tried to use Amazon's CloudFront to host my static Jekyll-generated homepage. Here are a couple of reasons I really wanted to do that:

Using CloudFront this way is a bit of a hack. By "hack" I mean that it's not really its intended purpose, so there are a couple of downsides:

This post only covers the basic S3 & CloudFront setup you need to host a static website there. In a follow-up post I'll show you how I sync my Jekyll-generated files.

Create an S3 bucket

  1. In the AWS Console, go to the "Amazon S3" tab.
  2. Use the "Create Bucket" button to create a bucket named MYBUCKET.
  3. Right-click on your newly created bucket and bring up the properties panel.
  4. (Optional, see note 2) Make your bucket publicly readable by clicking "Edit bucket policy" in the "Permissions" tab and adding the following policy (don't forget to change MYBUCKET to your bucket name):

         {
           "Version": "2008-10-17",
           "Statement": [{
             "Sid": "PublicReadGetObject",
             "Effect": "Allow",
             "Principal": { "AWS": "*" },
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::MYBUCKET/*"
           }]
         }
Configure a CloudFront distribution

  1. In the AWS Console, go to the "Amazon CloudFront" tab.
  2. Click on the "Create Distribution" button.
  3. Select the MYBUCKET bucket we created earlier as the origin.
  4. Specify the CNAMEs your site will be served under (i.e. your site's domain name).
  5. Back in the CloudFront distributions list, select your newly created distribution and copy its "Domain Name".
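For reference, the CNAMEs you enter end up as `<CNAME>` elements in the distribution's config document, which is the XML the script below fetches and edits. Element names are from the 2010-era API, so treat this fragment as illustrative rather than exact:

```xml
<DistributionConfig>
  <Origin>MYBUCKET.s3.amazonaws.com</Origin>
  <CNAME>www.example.com</CNAME>
  <Enabled>true</Enabled>
  <DefaultRootObject>index.html</DefaultRootObject>
</DistributionConfig>
```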

Edit domain's DNS records

  1. Go to your domain's DNS record manager.
  2. Create a CNAME record pointing your domain or subdomain to your CloudFront distribution's domain name (see note 3).
  3. When your DNS change finally propagates (remember, it can take a while), you should be able to access your bucket through your domain or subdomain.

Setting the DefaultRootObject

The last step is the fun part, since it requires running some crafty Ruby code.

Since there's no support (yet) in the AWS Console for setting the DefaultRootObject (and it's not exposed in a lot of S3/CloudFront tools either), I've written a small Ruby script that lets you enable it on your distribution.
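For context before reading the script: the CloudFront REST API authenticates a request by signing the value of its date header with your secret key using HMAC-SHA1, base64-encoding the result, and sending it as `AWS <access-key>:<signature>`. Here is that signing step in isolation, sketched with the standard `openssl` library instead of the `hmac-sha1` gem the script uses (the key values are placeholders):

```ruby
require 'openssl'
require 'base64'

# Sign a date string with the account's secret key, the way the
# CloudFront REST API expects: HMAC-SHA1, base64-encoded, wrapped
# in "AWS <access-key>:<signature>".
def cloudfront_auth_header(access_key, secret_key, date)
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_key, date)
  "AWS #{access_key}:#{Base64.encode64(hmac).strip}"
end

date = Time.now.utc.strftime("%a, %d %b %Y %H:%M:%S GMT")
puts cloudfront_auth_header("AKIAEXAMPLE", "not-a-real-secret", date)
```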

Here's the code:

require 'rubygems'
require 'hmac-sha1'
require 'net/https'
require 'base64'

# Edit these three lines (see below):
s3_access = 'YOUR_S3_ACCESS_KEY'
s3_secret = 'YOUR_S3_SECRET_KEY'
cf_distribution = 'YOUR_CF_DISTRIBUTION_ID'

newobj = ARGV[0]

if newobj == nil
  puts "usage: aws_cf_setroot.rb index.html"
  exit 1
end

# CloudFront authenticates API requests by signing the date header
date = Time.now.utc
date = date.strftime("%a, %d %b %Y %H:%M:%S %Z")
digest = HMAC::SHA1.new(s3_secret)
digest << date

# The API version in the URL may need updating
uri = URI.parse('https://cloudfront.amazonaws.com/2010-11-01/distribution/' + cf_distribution + '/config')

# Fetch the current distribution config
req = Net::HTTP::Get.new(uri.path, {
  'x-amz-date' => date,
  'Content-Type' => 'text/xml',
  'Authorization' => "AWS %s:%s" % [s3_access, Base64.encode64(digest.digest)]
})

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
res = http.request(req)

match = /<DefaultRootObject>(.*?)<\/DefaultRootObject>/.match(res.body)
currentobj = match && match[1]

if newobj == currentobj
  puts "'#{currentobj}' is already the DefaultRootObject"
  exit
end

# The config PUT must carry the ETag of the config we just fetched
etag = res.header['etag']

req = Net::HTTP::Put.new(uri.path, {
  'x-amz-date' => date,
  'Content-Type' => 'text/xml',
  'Authorization' => "AWS %s:%s" % [s3_access, Base64.encode64(digest.digest)],
  'If-Match' => etag
})

if currentobj == nil
  # No DefaultRootObject yet: splice one in before the closing tag
  regex = /<\/DistributionConfig>/
  replace = "<DefaultRootObject>#{newobj}</DefaultRootObject></DistributionConfig>"
else
  # Replace the existing value
  regex = /<DefaultRootObject>(.*?)<\/DefaultRootObject>/
  replace = "<DefaultRootObject>#{newobj}</DefaultRootObject>"
end

req.body = res.body.gsub(regex, replace)

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
res = http.request(req)

puts res.code
puts res.body

You need to edit three configuration lines at the top of the file:

  1. Set s3_access to your S3 access key
  2. Set s3_secret to your S3 secret key
  3. Set cf_distribution to your CloudFront distribution ID (get it in the properties pane of your "Amazon CloudFront" tab in the AWS Console)
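The XML edit the script performs is just a text substitution with two cases. Here is that logic in isolation, run against a stripped-down config (the sample XML is illustrative, not a full DistributionConfig; `set_default_root_object` is a helper extracted for this demo):

```ruby
# Insert or replace <DefaultRootObject> in a distribution config document.
def set_default_root_object(xml, newobj)
  if xml =~ /<DefaultRootObject>/
    # An object is already set: replace its value.
    xml.gsub(/<DefaultRootObject>(.*?)<\/DefaultRootObject>/,
             "<DefaultRootObject>#{newobj}</DefaultRootObject>")
  else
    # No object yet: splice one in before the closing tag.
    xml.gsub(/<\/DistributionConfig>/,
             "<DefaultRootObject>#{newobj}</DefaultRootObject></DistributionConfig>")
  end
end

without = "<DistributionConfig><Enabled>true</Enabled></DistributionConfig>"
once = set_default_root_object(without, "index.html")
puts once
# Running it again replaces the existing value instead of adding a second element.
puts set_default_root_object(once, "home.html")
```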

You are now ready to set your DefaultRootObject by running the script with the name of the file you want as the root object as its argument:

$ ruby aws_cf_setroot.rb index.html

index.html is now the default root object. (Note that the DefaultRootObject only applies at the distribution's root, so it won't work for subfolders.) There may be a small delay before the change takes effect, but if you then visit your domain in a web browser, you should be shown your default index.html page.

  1. That could be fixed with a second bucket containing the exact same HTML files, each meta-refreshing to the correct URL. That's overkill, but it should be good enough for basic SEO.
  2. You could skip this step, but you need to make sure you override the ACLs every time you update your site.
  3. Not all DNS providers can do that. I'm currently using one that allows it.
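For reference, the meta-refresh workaround from note 1 is just a static page per URL, along these lines (the target URL is of course a placeholder):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Redirect immediately (after 0 seconds) to the canonical URL -->
    <meta http-equiv="refresh" content="0; url=http://www.example.com/page.html">
  </head>
  <body>
    <a href="http://www.example.com/page.html">This page has moved.</a>
  </body>
</html>
```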