I'm Netlify-Free!


Every time someone asked on HackerNews/Reddit “what’s your blog setup?”, I would reply “Well, I use Astro hosted on Netlify, but I plan to move to a VPS”. This went on for more than a year. More important things would always get tackled first, and I kept promising the internet that I would move to my own VPS, eventually. Well, I’m glad to say that I finally did! Both this blog and my personal website are now hosted on a VPS that I rent. I still use Astro, and they are still statically generated, but I’m no longer in the hands of Netlify.

What difference does it make?

The first question you might ask is: “What difference does it make?” And it’s true, there is no specific reason to move away from Netlify. It worked for me for a couple of years, and Netlify is a great tool. But, like any tool that offers convenience, that convenience comes at a price.

Sure, I never paid for Netlify since I was on their free tier. But it only takes one viral post to hit the free tier limits (and I was close). And then it’s $9/mo, a bit more than 2 times the price of a VPS on Hetzner (not affiliated, just love their products). I already have a couple of Hetzner servers, so I figured, why not?

On top of that, “just for fun” is always a good excuse. Netlify trades configuration for convenience. Deploying Astro with Netlify is basically one-click. But the moment you need another step in your CI/CD pipeline, or want to customize something, you are basically out of luck.

So buckle up, and I will tell you how you too can free yourself from proprietary hosting providers.

How to do it?

You need 2 components:

  • A web server / reverse proxy - I use Caddy because Caddy is the GOAT, with a minimal config file and SSL out of the box
  • A CI/CD pipeline / runner - I use GitLab CI/CD, with a runner hosted on my own hardware (see the registration sketch below)
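
If you also want to host the runner yourself, registering it is a one-time step. A minimal sketch, assuming gitlab-runner is already installed and you created the runner (and its token) in the GitLab UI, which is also where tags like custom are assigned:

# one-time registration of a self-hosted runner using the Docker executor;
# $RUNNER_AUTH_TOKEN is a placeholder for the token GitLab gives you
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com \
  --token "$RUNNER_AUTH_TOKEN" \
  --executor docker \
  --docker-image alpine:latest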

I run Caddy on bare metal, because reasons, but you can containerize it. In /etc/caddy/Caddyfile, which is the main Caddy configuration file, I added the line import /etc/caddy/sites/*.caddy, so that every file I put in /etc/caddy/sites/ is included as part of the Caddy configuration. This lets me run both blogs without editing one shared config file, so everything stays separate and nice. KISS.
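
For reference, after that change the main Caddyfile only needs this one line (plus whatever global options you already have):

# /etc/caddy/Caddyfile
# pull in one config file per site
import /etc/caddy/sites/*.caddy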

Then comes the Caddyfile for each blog. Very simple:

yieldcode.blog {
    root * /var/www/yieldcode.blog/current

    # hashed build assets never change, so cache them forever;
    # everything else falls back to a 1 hour default
    header /assets/* Cache-Control "public, max-age=31536000, immutable"
    header /_astro/* Cache-Control "public, max-age=31536000, immutable"
    header ?Cache-Control "public, max-age=3600"

    handle_errors {
        rewrite * /404.html
        file_server
    }

    file_server {
        precompressed br gzip
    }
}

www.yieldcode.blog {
    redir https://yieldcode.blog{uri}
}

Astro builds a new set of hashed assets for each build, so it’s safe to cache /assets/* and /_astro/* forever. Everything else I cache for 1 hour. Errors are rewritten to /404.html, and files are served precompressed with brotli or gzip.
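
One caveat worth knowing: precompressed doesn’t compress anything by itself; it only serves sibling files like index.html.br or index.html.gz when they already exist on disk. Astro doesn’t emit those by default, so a post-build step has to create them. A minimal sketch, assuming the gzip and brotli CLIs are available in the build image:

# create .gz and .br siblings for text assets after `npm run build`
find dist -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \
  -o -name '*.svg' -o -name '*.xml' -o -name '*.json' \) \
  -exec gzip -kf9 {} \; -exec brotli -kf {} \;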

Finally, we need a .gitlab-ci.yml that describes our pipeline:

variables:
  NODE_VERSION: "24-alpine"
  SITE_NAME: "yieldcode.blog"
  DEPLOY_PATH: "/var/www/${SITE_NAME}"
  DEPLOY_DIR: "${DEPLOY_PATH}/${CI_COMMIT_SHORT_SHA}"
  COMMIT_REF: "${CI_COMMIT_SHORT_SHA}"

stages:
  - build
  - deploy
  - notify

.node:
  image: node:$NODE_VERSION
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - .npm
  before_script:
    - npm ci --cache .npm --prefer-offline

.ssh:
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
    - install -m 600 -D /dev/null ~/.ssh/id_rsa
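    # $SSH_PRIVATE_KEY is a masked CI/CD variable holding the private key,
    # base64-encoded (e.g. created with: base64 -w0 ~/.ssh/deploy_key)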
    - echo "$SSH_PRIVATE_KEY" | base64 -d > ~/.ssh/id_rsa
    - ssh-keyscan -H $SSH_REMOTE >> ~/.ssh/known_hosts
  after_script:
    - rm -rf ~/.ssh

build:
  stage: build
  extends: .node
  tags:
    - custom
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
  only:
    - master

www:
  stage: deploy
  extends: .ssh
  dependencies:
    - build
  script:
    - ssh ${SSH_REMOTE} "mkdir -p ${DEPLOY_DIR}"
    - rsync -avz --delete dist/ ${SSH_REMOTE}:${DEPLOY_DIR}/
    - ssh ${SSH_REMOTE} "ln -sfn ${DEPLOY_DIR} ${DEPLOY_PATH}/current"

    # keep only the 3 newest builds for rollbacks; note the inner quotes
    # must not terminate the outer double-quoted ssh command
    - |
      ssh ${SSH_REMOTE} "
        cd ${DEPLOY_PATH}
        ls -td [0-9a-f]* | grep -v current | tail -n +4 | xargs -r rm -rf
      "
  only:
    - master
  environment:
    name: production
    url: https://yieldcode.blog

caddy:
  stage: deploy
  extends: .ssh
  script:
    - rsync -avz Caddyfile ${SSH_REMOTE}:/etc/caddy/sites/${SITE_NAME}.caddy
    - ssh ${SSH_REMOTE} "caddy reload --config /etc/caddy/Caddyfile"
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      changes:
        - Caddyfile

on_success:
  stage: notify
  image: alpine:latest
  when: on_success
  before_script:
    - apk add curl
  script:
    - |
      curl -s -F "token=${PUSHOVER_API_TOKEN}" \
              -F "user=${PUSHOVER_USER_TOKEN}" \
              -F "message=Pipeline succeeded for $CI_PROJECT_NAME on branch $CI_COMMIT_BRANCH@$CI_COMMIT_SHORT_SHA" \
              https://api.pushover.net/1/messages.json

on_failure:
  stage: notify
  image: alpine:latest
  when: on_failure
  before_script:
    - apk add curl
  script:
    - |
      curl -s -F "token=${PUSHOVER_API_TOKEN}" \
              -F "user=${PUSHOVER_USER_TOKEN}" \
              -F "message=Pipeline failed for $CI_PROJECT_NAME on branch $CI_COMMIT_BRANCH@$CI_COMMIT_SHORT_SHA" \
              https://api.pushover.net/1/messages.json

First, I define all the needed variables. Then, I have 3 stages:

  • build - builds the Astro site
  • deploy - copies the Astro artifacts to the server, as well as the Caddyfile (if it changed)
  • notify - sends a Pushover notification to my phone that the deploy is done

Then, I define 2 common blocks: .node for Node.js stages, and .ssh for stages that need SSH. build builds the Astro code and stores the artifact. www copies the artifacts from the build stage over SSH to the remote server.

Here I also use an interesting trick (a very old one; I first learned about it years ago when I played with Capistrano). To avoid a corrupted state, each deploy copies the files to /var/www/yieldcode.blog/${CI_COMMIT_SHORT_SHA}, where $CI_COMMIT_SHORT_SHA is the first 8 characters of the git commit SHA. Once the files are copied, I point a symlink named current at this directory. Caddy serves from current, which therefore always points to the latest commit. Finally, I do some magic to keep only the last 3 directories (in case I want to roll back or something like that). This is how the directory tree looks:

 yieldcode.blog
 ├── 0c69eaa0
 ├── 4cdc8c1f
 ├── 6ddf000d
 └── current -> /var/www/yieldcode.blog/0c69eaa0

Hence, switching between versions is a simple matter of changing where current points to.
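
For example, rolling back to the previous build from the tree above boils down to one command on the server (the SHA is of course just an illustration):

# atomically repoint `current` at an older build
ln -sfn /var/www/yieldcode.blog/4cdc8c1f /var/www/yieldcode.blog/current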

The caddy stage simply copies the Caddyfile to /etc/caddy/sites/${SITE_NAME}.caddy, and reloads Caddy to make sure it picks up the new configuration.
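
Reloading is graceful, so no requests are dropped. If you want a safety net, you can validate the configuration before reloading it, for example:

# fail fast on a broken config instead of reloading it
caddy validate --config /etc/caddy/Caddyfile \
  && caddy reload --config /etc/caddy/Caddyfile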

Lastly, I have two jobs in the notify stage, one that runs on success and one that runs on failure, which send me a Pushover notification when the deploy is done. Sure, I could have used a single job that executes a script, checks some GitLab-defined variable for the state of the pipeline, and constructs a message based on that, but meh. It works like this as well.

That’s it. That’s Netlify (more or less).

Now sure, Netlify has more things like CDN, automatic forms, and serverless functions. But I don’t use any of that. Maybe if you do, then Netlify is for you. I never had problems with them, so I guess they are cool.

But in recent months I feel like the fun is being sucked out of being a software engineer, so doing small things like this brings me a bit of joy in the otherwise vast greyness of AI slopification.

Published by

Dmitry Kudryavtsev

Engineering Leadership, Senior Software Engineer / Tech Entrepreneur

With more than 14 years of professional experience in tech, Dmitry is a generalist software engineer with a strong passion for writing code and writing about code.


Technical Writing for Software Engineers - Book Cover

Recently, I released a new book called Technical Writing for Software Engineers - A Handbook. It’s a short handbook about how to improve your technical writing.

The book contains my experience and mistakes I made, together with examples of different technical documents you will have to write during your career. If you believe it might help you, consider purchasing it to support my work and this blog.

Get it on Gumroad or Leanpub


From Applicant to Employee - Book Cover

Were you affected by the recent lay-offs in tech? Are you looking for a new workplace? Do you want to get into tech?

Consider getting my and my wife’s recent book From Applicant to Employee - Your blueprint for landing a job in tech. It contains our combined knowledge of the interviewing process in small and big tech companies, together with tips and tricks on how to prepare for your interview, befriend your recruiter, and find a good match between you and a potential employer.

Get it on Gumroad or Leanpub