A step-by-step guide on how I configured and hosted a secure static site using AWS.

This project was originally completed 2016-01-24. This writing was last updated 2016-10-12.
\n\n{
"type": [
"h-entry"
],
"properties": {
"name": [
"Setting up CloudFront and TLS (HTTPS) with Jekyll "
],
"published": [
"2016-02-13T09:25:50Z"
],
"category": [
"cloudfront",
"aws",
"https",
"jekyll"
],
"bookmark-of": [
{
"type": [
"h-cite"
],
"properties": {
"url": [
"https://olivermak.es/2016/01/aws-tls-certificate-with-jekyll/"
],
"name": [
"Setting up CloudFront and TLS (HTTPS) with Jekyll"
],
"summary": [
"A step-by-step guide on how I configured and hosted a secure static site using AWS."
],
"published": [
"2016-01-24T00:00:00+00:00"
],
"updated": [
"2016-10-12T13:25:00+00:00"
],
"author": [
{
"type": [
"h-card"
],
"properties": {
"name": [
"Oliver Pattison"
],
"url": [
"https://olivermak.es/about/"
]
}
}
],
"content": [
{
"value": "\n\n \n
This is a guide to getting set up quickly and cheaply to host a static website on Amazon Web Services with a TLS certificate.
\nWe live in a time of significant uncertainty about what privacy means. Our personal data and identity are closely monitored and threatened by governments, companies, and individuals who have proven that they can’t be trusted. At the least, we expect our email services, banks and shopping carts to be served entirely over secure connections. Transport Layer Security (TLS) prevents information from being altered mid-stream and is the main line of protection on the web against passwords and other sensitive information being read over open networks. TLS is one method for determining that a site is what it claims to be. I would want others to configure their websites using TLS, so why wouldn’t I do my part, even for a private website like my own? I value the principles of privacy of communication and freedom from surveillance enough that I decided my own site should be served over a secure connection.
\nServing over HTTPS with SSL or TLS used to be difficult to configure, and sometimes prohibitively expensive. Fortunately the solution for this problem isn’t nearly as much of a barrier anymore. An individual can afford to host a website cheaply, and a TLS certificate can now be added for a very low cost on top of that. Amazon Web Services (AWS) has made it straightforward enough for me to do in an afternoon.
\nAmazon Simple Storage Service (S3) is a fast and usually inexpensive way of hosting a static site. S3 typically offers better value for money compared to other budget hosting options, since the initial costs are lower with no fixed payment floor and performance can be surprisingly fast compared to shared hosting. Amazon CloudFront, a content delivery network, enhances S3’s capability by serving files based on their location as determined by network latency. Serving files as close as possible to the user can yield significant gains in performance. A bucket of hosted S3 files can hook right into CloudFront, so an S3 site can be configured to serve with CloudFront instead with only a few minutes of configuration time.
AWS is the so-called “cloud”. What this really means is that Amazon provides a collection of remote, abstracted servers with tools for deploying and hosting websites and web services. Together, S3, CloudFront and the AWS Certificate Manager enable hosting a personal static site (using HTTP/2) for potentially only a couple of dollars per month. A TLS certificate can now be had for free for a CloudFront site.
\nexample.com
or olivermak.es
).administrator@
email which I set up with Hover. Whether you can get away with using your personal email depends on whether your domain has WHOIS privacy turned on.Before I get to the part about configuring a TLS certificate, I’ll cover my method for setting up S3 for hosting, CloudFront as a CDN (and a requirement for the certificate) and Route 53 for DNS routing. It’s a bit of a complicated process, but it is worth it. Getting the certificate is free and very brief for those already set up with AWS CloudFront.
\nI have been working with hosting static sites on S3 since 2013, so I already had a head start on this part. Initial setup was a bit confusing, but now that I am familiar with it, it is my preferred method for hosting Jekyll sites.
\nS3 initially requires setting up at least one “bucket” of files, which is essentially a directory that can be made into a public site. An S3 bucket is the target to deploy a static site to.
\nGo to the S3 management console and select Create bucket. Create a unique name with no spaces (dots and dashes allowed). It does not have to be the same as the domain, but it might be easier to remember if it is. Choose a geographically close region.
\nIn the Properties configuration for the bucket go to the Permissions section and then select Edit bucket policy. Setting a policy properly allows a bucket to be visible on the public web.
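As a sketch, a minimal public-read bucket policy for static hosting might look like the following (BUCKETNAME is a placeholder for your own bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }
  ]
}
```

This grants everyone read-only access to the bucket’s objects, which is exactly what a public website needs; it does not allow listing or writing.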
Go to the Static Website Hosting section and choose the option Enable website hosting. Add index.html for the index document and 404/index.html for the error document. While here, copy the Endpoint, which looks like BUCKETNAME.s3-website-us-east-1.amazonaws.com. Save.
I deploy my Jekyll sites with s3_website, a Ruby gem developed specifically for deploying Jekyll and other static sites. The tool sidesteps the challenge of dealing directly with the AWS API, sets up performance-friendly options and can be run from a command line interface. s3_website takes the generated _site
folder in Jekyll and publishes it to a specified S3 bucket.
Read the s3_website docs or take a look at my site’s configuration, particularly the s3_website.yml configuration file (current at time of publishing). One important warning here: make sure not to commit private AWS keys to version control. I used environmental variables in macOS’s Terminal to privately save my AWS “access key ID” and “secret access key” to keep them out of my site’s Git version control.
\ns3_website configuration could be an article on its own, so I’ll defer to their extensive documentation. In any case, it’s part of my deployment process for this site, and it can be done even without being an experienced programmer.
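For illustration, a minimal s3_website.yml might look roughly like this (key names are drawn from the s3_website docs; the ERB-style ENV references are how I keep credentials out of Git):

```yaml
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: BUCKETNAME
# optional: invalidate CloudFront objects on deploy
cloudfront_distribution_id: DISTRIBUTION_ID
gzip: true
max_age:
  "assets/*": 6000
  "*": 300
```

With something like this in place, running s3_website push uploads the generated _site folder to the bucket.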
Go to the CloudFront configuration and Create Distribution (select a “Web” distribution when prompted). The Origin Domain Name should be set to the endpoint from the S3 bucket. The endpoint specified here must look like BUCKETNAME.s3-website-us-east-1.amazonaws.com and not BUCKETNAME.s3.amazonaws.com, so avoid using the “Amazon S3 Buckets” autocomplete that AWS provides. Why this URL matters is explained in the Open Guide to Amazon Web Services.
Set Alternate Domain Names (CNAMEs) to the desired domain. This should be a single domain per distribution formatted as example.com
(without a protocol).
Leave SSL Certificate alone – or skip ahead to the final step to take care of this now. Set Default Root Object to index.html
which makes sure that the root domain https://example.com/
will redirect to an index page rather than showing a directory of files or an error message. I set Logging on and created an S3 bucket for it, but it is not essential for configuration.
Set Distribution State to “enabled” (default value). The site will not be live until Route 53 or another service points the desired domain to the CloudFront distribution being set up.
\nIf using s3_website to handle S3 and CloudFront, read about invalidations. CloudFront invalidations cost money after the first 1,000 URLs. Read about CloudFront caching (TTL settings) to avoid issues around invalidations.
This is the point where I would brew a cup of tea, since it will take 5-20 minutes for CloudFront to “progress”. The first couple of times I used CloudFront, I spent more time changing configuration items and waiting for effects to kick in than I actually did reading AWS documentation. CloudFront configuration takes a really long time – try to get the configuration correct initially, because each further change resets the clock to zero (and means making another cup of tea)!
Setting up Route 53 for handling routing isn’t absolutely required, but I found it helpful since it keeps all of the administration in one place. Also, Route 53 starts at a flat 50 cents per month per domain. One could configure DNS with another service, but I chose Route 53. For live domains and websites, hold off on this step until some brief downtime is acceptable.
\nSign in to the Route 53 console and create a new Hosted Zone with the target domain. After it is created, copy the Name Servers and add them individually to that domain’s registrar administration for name servers. Now Route 53 handles configuration for the domain.
Create an ALIAS record for the root domain. In the hosted zone select Create Record Set. Leave Name blank to set the target URL. Type should be “A – IPv4 address”. Alias should be set to “Yes”. Alias target should be set to the CloudFront distribution URL from the distribution created in step 2 (it looks like a12bcdefgh89yz.cloudfront.net). Save.
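The same ALIAS record can also be expressed as a Route 53 change batch. As a sketch (Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets; the distribution domain is a placeholder), it would look like:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "a12bcdefgh89yz.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```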
As with CloudFront configuration, do not expect the changes to kick in immediately. Redirecting the domain to the configured distribution takes a few minutes.
Before setting up TLS, make sure all URLs on the site are working properly at their http:// location.
Edit the configuration for the CloudFront distribution set up in Step 2 in the General tab. Select Custom SSL Certificate (example.com) and then Request an ACM certificate. The request for the TLS certificate is made at no cost through AWS Certificate Manager, another console service. Alternatively, go to the ACM console and Request a certificate.
There are two key steps: specifying which domain names the certificate should cover, and validating ownership of the domain. A certificate covering example.com and *.example.com will cover both the domain and all subdomains. To be on the safe side, choose both example.com and *.example.com when setting up the certificate. Since only one certificate is allowed per CloudFront distribution, this covers any subdomains needed for the same certificate.
After following the instructions in the email and approval page to validate the certificate, go back to the CloudFront distribution and select the certificate.
\nAbsolutely set Custom SSL Client Support to “Only Clients that Support Server Name Indication (SNI)”. The alternative “All Clients” costs $600 per month because it requires a dedicated IP version of custom SSL support. The downside to SNI is that older browsers (4-10 years old) may not properly support TLS and therefore will get a worse experience (no HTTPS) or no experience (if HTTPS-only is specified). To support older browsers, HTTPS-only can be turned off since it is not a requirement, but this will mean that http://example.com
won’t automatically redirect to https://example.com
.
Go to the Behaviors tab, select the only item and edit it. Set Viewer Protocol Policy to “Redirect HTTP to HTTPS”. (One can also specify this setting on initial distribution configuration but afterward it is configured in this section.)
\n\n\n\nIt’s time for another cup of tea because CloudFront will need a bit longer to process after changes are saved. After this, setup should be complete. Make sure the status of the distribution is marked as “deployed” and check whether the https://
URLs for the site work properly. Done.
Setting up www URLs to redirect

If https://example.com is desired instead of https://www.example.com, Route 53 can be set up to automatically redirect these requests. Route 53 treats www
just like any other subdomain. It is possible to set up an ALIAS
-type record in the same hosted zone for the www
domain and forward it to the same CloudFront distribution (with CNAMES for both domains set), but this has the disadvantage of offering no obvious canonical URL for the site. A URL like www.example.com
would direct to the same exact resource as example.com
would – but neither would be preferred because neither is set up as canonical. Both for users and search engines, having only a single URL for each unique resource is definitely preferred.
The process is similar to Step 1 (S3) through Step 2 (CloudFront) and Step 3 (Route 53) applied to a new bucket and distribution prepended with www
. However, there are a few adjustments. The S3 bucket should be set to Redirect all requests to another host name which should be set at the root domain (example.com
). That bucket can then be connected to a second CloudFront distribution and routed with Route 53, attached to the same TLS certificate exactly as above. Setting up a parallel distribution that uses the S3 bucket’s built-in redirection service results in a single canonical URL regardless of the protocol used.
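In API terms, the redirect bucket’s website configuration is nothing more than this (a sketch; HostName is a placeholder for the real root domain):

```json
{
  "RedirectAllRequestsTo": {
    "HostName": "example.com",
    "Protocol": "https"
  }
}
```

Setting Protocol to https means the redirect lands directly on the secure URL rather than bouncing through HTTP first.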
This process could be mirrored to serve www
as the canonical URL instead. For an example of this inverse behavior, check out how google.com
redirects to www.google.com
. Whether one picks www
or no www
is mostly a matter of personal preference, but the important thing is consistent behavior and a single canonical URL.
These are only my notes for configuration – there’s a reason I left out the word “you” from this account. I hope that this guide is helpful for anyone working on a similar challenge but I have not covered all of the ways that this process could go wrong. I discovered some of these methods through reading accounts of AWS configuration, some from Amazon’s official documentation, and others from trial and error.
\nIt was completely worth doing and I’d highly recommend it to anyone who is already using AWS to host a static site. Other people have followed this guide and successfully used CloudFront to host a TLS site – let me know how it goes if you try it!
\n\n" } ] } } ] } }