I was excited to learn that S3 has natively supported redirects since 2012. That meant that I could use the power of the ✨ cloud ✨ to get my own link shortener for “free”–thanks to AWS’s Free Tier–without having to muck around with databases, web servers, or serverless-cloud-lambda-edge functions. Each link is just a zero-byte file in an S3 bucket, which S3 itself magically turns into a redirect.

I wrote a script to shorten links from the terminal, but you could easily make a GUI. All it does is run aws s3api put-object:

$ publish-link 'https://example.com/long/url/here/'
https://.../zZzZz
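
Under the hood it’s one aws s3api put-object call per link; with example.com and zZzZz standing in for your own bucket and generated key, it looks something like:

$ aws s3api put-object \
    --bucket example.com \
    --key zZzZz \
    --website-redirect-location 'https://example.com/long/url/here/'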

The downsides are that you need to bring your own:

- domain name
- AWS account

But think of the advantages!

I threw in CloudFront and Route 53 just so I could get HTTPS links. Route 53 is the only thing that doesn’t fit within the free tier AFAIK–it costs $0.50/mo. It should be pretty straightforward to delete Route 53 from the configuration below though.

The setup process is easy, assuming you have a domain name and AWS account lying around (!), but there’s a bit of waiting involved. You have to wait around 15 minutes for each of the following:

- ACM to validate your SSL certificate
- CloudFront to deploy the distribution
- your registrar’s nameserver change to propagate

Use that time to walk your dog!

I ended up using 5-digit base-58 keys[1] because I figured that if I live for 50 more years I definitely won’t shorten more than one link per second:

$$ \frac{\log (86400 \times 365 \times 50)}{\log 58} \approx 5.2 $$
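
(If you want to sanity-check the arithmetic, bc’s math library provides l() for the natural log:)

$ echo 'l(86400 * 365 * 50) / l(58)' | bc -l   # ≈ 5.2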

I guess if I were really worried about the Birthday Paradox I could square the total number of links, which doubles the keylength to $10.4$. If I pick $\sqrt{N}$ keys out of $N$ possibilities, the probability of at least one collision is around 50% (for the classic Birthday Paradox, $23 \approx \sqrt{365} \approx 19.1$). But I’m fine with the odds as-is.
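
That 50% is a rule of thumb; the standard approximation (assuming keys are drawn uniformly at random) puts $\sqrt{N}$ draws a bit under it:

$$ P(\text{collision}) \approx 1 - e^{-k^2 / 2N}, \qquad k = \sqrt{N} \implies P \approx 1 - e^{-1/2} \approx 39\% $$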

Another consideration is that short keys make it easier for someone else to iterate through all of them to find what links you’ve shortened. Either don’t shorten secret links, or use a longer keylen. Or set up an AWS WAF rule to rate-limit requests 😱.
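
If you did go the WAF route, I haven’t tried this myself, but a sketch might look something like the following (the resource names are mine; WebACLs for CloudFront must be created in us-east-1, and you’d attach it by adding web_acl_id = aws_wafv2_web_acl.link.arn to the CloudFront distribution in the Terraform below). Note that WAF itself isn’t covered by the Free Tier.

resource "aws_wafv2_web_acl" "link" {
  provider = aws.us-east-1 # CloudFront-scoped WebACLs live in us-east-1
  name     = "link-rate-limit"
  scope    = "CLOUDFRONT"
  default_action {
    allow {}
  }
  rule {
    name     = "rate-limit"
    priority = 1
    action {
      block {}
    }
    statement {
      rate_based_statement {
        limit              = 100 # max requests per IP per 5-minute window
        aggregate_key_type = "IP"
      }
    }
    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "rate-limit"
      sampled_requests_enabled   = false
    }
  }
  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "link-rate-limit"
    sampled_requests_enabled   = false
  }
}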

The script I use to shorten links is called publish-link. On macOS pbcopy copies standard input to my clipboard. On Linux I’d use xclip -sel clip and on Windows I’d use clip.exe.
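
If you’d rather not edit the script per-platform, one untested option is a small wrapper (copy_to_clipboard is my own hypothetical helper, not part of the script below) that picks whichever tool is available:

# Hypothetical helper: pipe stdin into whichever clipboard tool exists.
copy_to_clipboard() {
  if command -v pbcopy >/dev/null 2>&1; then
    pbcopy # macOS
  elif command -v xclip >/dev/null 2>&1; then
    xclip -sel clip # Linux (X11)
  else
    clip.exe # Windows / WSL
  fi
}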

#!/bin/zsh

set -euo pipefail

# Replace $bucket with your own domain.
bucket="example.com"
domain="$bucket"
keylen=5

if [ $# -lt 1 ]; then
  echo "error: path required" >&2
  exit 1
fi

# Keep generating random keys until we find one that isn't already taken.
while true; do
  base58_alphabet="123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
  key=""
  # Pick $keylen random characters. (zsh expands {1..$keylen}; bash wouldn't.
  # $RANDOM % 58 has a slight modulo bias, which is negligible here.)
  for i in {1..$keylen}; do
    index=$(($RANDOM % ${#base58_alphabet}))
    key+=${base58_alphabet:$index:1}
  done
  # head-object exits non-zero when the key doesn't exist yet, i.e. it's free.
  if ! aws s3api head-object --bucket "$bucket" --key "$key" >/dev/null 2>&1; then
    break
  fi
  echo "$key is already in use; happy birthday! retrying..." >&2
done

# A zero-byte object with --website-redirect-location set is the whole trick:
# the S3 website endpoint serves it as a 301 to "$1".
aws s3api put-object --website-redirect-location "$1" --bucket "$bucket" --key "$key" >/dev/null

url="https://$domain/$key"
echo -n "$url" | pbcopy
echo "$url"

The meat is in this Terraform file, which I place in main.tf. Caveat emptor: I’m not an expert at cloud infrastructure. I based it on a configuration I’ve been using to host static files for a few years now, which works just fine. If something looks funky it’s probably on me, not you.

# I don't think you actually need versions this new, but this is what I tested with.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.34"
    }
  }
  required_version = ">= 1.7.2"
}

# Replace link_domain with your own domain.

locals {
  link_domain   = "example.com"
}

# Replace region with the region you prefer.

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

# CloudFront needs ACM certificates to be from us-east-1.

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

# Create an S3 bucket with public reads.

resource "aws_s3_bucket" "link" {
  bucket = local.link_domain
}

resource "aws_s3_bucket_policy" "link" {
  bucket = aws_s3_bucket.link.id
  policy = data.aws_iam_policy_document.link.json
}

data "aws_iam_policy_document" "link" {
  statement {
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    actions = [
      "s3:GetObject",
    ]
    resources = [
      "${aws_s3_bucket.link.arn}/*",
    ]
  }
}
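
# On newer AWS accounts, S3 blocks public bucket policies by default, so the
# policy above may fail to attach. If terraform apply hits an AccessDenied
# there, you may also need something like this:
resource "aws_s3_bucket_public_access_block" "link" {
  bucket                  = aws_s3_bucket.link.id
  block_public_acls       = true
  block_public_policy     = false
  ignore_public_acls      = true
  restrict_public_buckets = false
}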

# Set up a website endpoint, which is what implements redirects.

resource "aws_s3_bucket_website_configuration" "link" {
  bucket = local.link_domain
  # These aren't actually used, we just need a website endpoint so we get
  # redirects.
  index_document {
    suffix = "index.html"
  }
  error_document {
    key = "error.html"
  }
}
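
# Note: only the website endpoint turns an object's
# x-amz-website-redirect-location metadata into an HTTP 301; the regular REST
# endpoint just echoes it back as a response header.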

# Set up Route 53. This is very much optional, I just wanted to have AWS manage
# the SSL certificate for me.

resource "aws_route53_zone" "link" {
  name = local.link_domain
}

resource "aws_acm_certificate" "link" {
  domain_name       = local.link_domain
  validation_method = "DNS"
  provider          = aws.us-east-1
}

resource "aws_route53_record" "link_cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.link.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.link.zone_id
}

# Wait for AWS to validate the SSL certificate. This can take a while.

resource "aws_acm_certificate_validation" "link" {
  certificate_arn         = aws_acm_certificate.link.arn
  validation_record_fqdns = [for record in aws_route53_record.link_cert_validation : record.fqdn]
  provider                = aws.us-east-1
}

# Set up CloudFront. S3 doesn't support HTTPS so we put CloudFront in front.

resource "aws_cloudfront_distribution" "link" {
  origin {
    origin_id   = local.link_domain
    domain_name = aws_s3_bucket_website_configuration.link.website_endpoint
    custom_origin_config {
      http_port              = 80
      origin_protocol_policy = "http-only"
      # S3 only supports HTTP, so these are irrelevant...
      https_port           = 443
      origin_ssl_protocols = ["TLSv1.2"]
    }
  }
  aliases = [local.link_domain]
  enabled = true
  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.link_domain
    forwarded_values {
      query_string = true
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 31536000 # 1 year
    max_ttl                = 31536000 # 1 year
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }
  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate_validation.link.certificate_arn
    ssl_support_method  = "sni-only"
  }
  price_class = "PriceClass_100"
}

# Point Route 53 to CloudFront.

resource "aws_route53_record" "link" {
  zone_id = aws_route53_zone.link.zone_id
  name    = local.link_domain
  type    = "A"
  alias {
    name                   = aws_cloudfront_distribution.link.domain_name
    zone_id                = aws_cloudfront_distribution.link.hosted_zone_id
    evaluate_target_health = false
  }
}

To set things up, run:

$ terraform init
$ terraform apply

When you see:

aws_acm_certificate_validation.link: Still creating...

… go to “Hosted zones” in the Route 53 console (Route 53 is global, so there’s no region to pick), then take the nameservers from the NS record and plug them into your domain registrar.
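
You can check whether the delegation has propagated from your terminal (substituting your own domain):

$ dig NS example.com +short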

Now you’re all set to create your own short links!

$ publish-link 'https://www.jonathanychan.com/blog/link-shortener/'
https://example.com/zZzZz
[1] I used base-58 because I use shortened links when hand-writing notes. Base-58 omits characters that look similar, so e.g. 0 and O are absent and only o is present.