In this article I assume that you host your blog (or website) on your own server. I will show you how to automatically update your site just by running git push deploy.
The core idea is to use git hooks to trigger a site rebuild when a new commit has been pushed. But before configuring our hooks, let's see how we will build our site.
Don't be scared by the article's length: half of it is thoughts about alternative ways to do the same thing.
This site is built with Gridsome; to generate the static files locally I use the command gridsome build. We would like to do the same directly on our server. The first idea that comes to mind is to install Gridsome on our server, but this setup has multiple caveats in my opinion.
With Docker we can reuse our container across environments to produce our build anywhere. Let's build our Dockerfile.
Our objective is to produce an image named gridsome-build, with gridsome and our project dependencies already installed. With this image we could build our blog like this:
docker run --rm -v "<path/to/app>:/home/node/app" gridsome-build
In my use case (a blog) my npm packages very rarely change. So it makes sense for me to have an image with the dependencies already installed and save that installation time on every build.
We will place our Dockerfile at the root of our project. Our Dockerfile will be based on the official node container:
FROM node:12-alpine
# Install build tools
# Needed by npm install
RUN apk update && apk upgrade
RUN apk --no-cache add --virtual native-deps util-linux git \
  g++ gcc libgcc libstdc++ linux-headers make python
# Manually change npm's default directory
# to avoid permission errors
# https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
# Install Gridsome globally
USER node
RUN npm i -g gridsome
# Install the application
COPY --chown=node:node ./ /home/node/build/
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# Remove the project files
# but keep the node modules
RUN cd .. && \
mv build/node_modules ./ && \
rm -rf build && \
mkdir build && \
mv node_modules build/
WORKDIR /home/node
# Get the source code without node_modules
# Then build the site
CMD cp -r app temp && \
rm -rf temp/node_modules && \
cp -r temp/* build/ && \
cd build && \
~/.npm-global/bin/gridsome build
Our npm packages can be platform dependent. The packages we install on Ubuntu might be slightly different from the ones we install on Alpine Linux. This is why the Docker image deletes the node_modules from the mounted source code in favor of the modules installed when it was built.
We create our image and name it gridsome-build with:
docker build . -t gridsome-build
To build our project we need to pass the project files to the container. To mount the current directory we use the -v option:
docker run -v $(pwd):/home/node/app/ --name blog_build gridsome-build
The container is then created with the name blog_build and our site is generated. To retrieve our static files from the container we can use the command:
docker cp blog_build:/home/node/build/dist ./dist
We can then serve our site to verify that our build worked.
Use serve -d dist to quickly serve a site. Install serve on Ubuntu with sudo snap install serve.
Now that we managed to build our Gridsome project with Docker, it is time to automate the process with Git.
We would like to regenerate the site each time our blog is updated. To do so we set up a Git repository on our server and link it to our local development environment.
On our server we initialize an empty git repository.
git init --bare blog.git
Then in our project we add a remote pointing to our server and name it deploy:
git remote add deploy username@ourserver.com:~/blog.git
# Push the current branch and set the remote as upstream
git push --set-upstream deploy master
# Verify that our remote is set
git remote -v
We are now able to push our commits to our server with git push deploy.
The project's code is on the server and we have a working Dockerfile. It's time to build the Docker image on our server. Since we have a bare repository, we need to clone it to access the files. The rest is the same as stated previously.
git clone ~/blog.git ~/blog
cd blog
docker build . -t gridsome-build
Of course you will need to install Docker on your server first.
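If Docker is missing, one quick way to install it on Ubuntu is shown below. Treat it as a sketch: the package name and recommended procedure may differ on your distribution.
# Install Docker from the Ubuntu repositories
sudo apt-get update && sudo apt-get install -y docker.io
# Or use Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh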
The steps required to build our site are: clone the repository into a temporary directory, generate the static files with our gridsome-build image, then publish the resulting dist folder to the web root (for instance /var/www/myblog.com/). We create a script implementing the algorithm:
#!/bin/bash

GIT_REPO="${HOME}/Git/blog.fr.git"
TMP_CLONE="${HOME}/tmp/blog.fr"
PUBLIC_WWW="/var/www"
PUBLIC_DIST="${PUBLIC_WWW}/dist"
PUBLIC_BLOG="${PUBLIC_WWW}/blog.fr"
# Remove the temporary directories
if [ -d "$TMP_CLONE" ]; then
echo "Removing existing directory ${TMP_CLONE}"
rm -rf $TMP_CLONE
fi
# Clone the project
git clone $GIT_REPO $TMP_CLONE
# Generate the static site
echo "Generating the static site"
docker run --name blog_build -v "${TMP_CLONE}:/home/node/app/" gridsome-build
# Publish the site and give the rights to www-data
echo "Publishing the site"
docker cp blog_build:/home/node/build/dist $PUBLIC_WWW
rm -rf "${PUBLIC_BLOG}"
mv "${PUBLIC_DIST}" $PUBLIC_BLOG
chown -R www-data:www-data $PUBLIC_BLOG
# Tidy up
docker container rm blog_build
rm -rf $TMP_CLONE
We can make our script executable with chmod +x build-project.sh.
We run it on our server to make sure everything works properly.
If your user is not part of the docker group you won't be able to use docker commands without sudo. See the section about Docker commands permissions below to learn how to circumvent this issue.
At this stage, we have a server capable of generating our static site simply by running a script. We need this script to run each time we update the server's Git repository. To do so we use the Git post-receive hook, which runs on the server after git push has finished sending the new commits.
There are other hooks like post-commit, pre-push, and so on. Check out Githooks.com to learn more about them.
To program our hook we just need to create a script named after the hook in the repository's hooks directory (.git/hooks in a normal repository, hooks/ at the root of a bare one). Here we just need to copy our script:
cp build-project.sh ~/blog.git/hooks/post-receive
And that's it, everything is now set up to automatically generate your website when you run git push deploy on your local computer. Try it yourself: update a file and see how the site is regenerated.
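For reference, the full cycle from your local machine looks like this (the file name is only a placeholder):
# Edit a post, then commit and push to the deploy remote
git add content/posts/my-new-post.md
git commit -m "Add a new post"
git push deploy master
# The post-receive hook now runs build-project.sh on the server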
If your user is not part of the docker group you won't be able to use docker commands without sudo. To solve this issue quickly, create the docker group with sudo groupadd docker and add your user to it with sudo usermod -aG docker $USER.
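Keep in mind that the new group membership only applies to new sessions; a minimal sketch:
sudo groupadd docker         # the group may already exist
sudo usermod -aG docker $USER
newgrp docker                # or log out and log back in
docker run hello-world       # verify docker works without sudo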
However you should know that the docker group grants privileges equivalent to the root user. If you don't want to add your user to the docker group, you need to whitelist some commands in the sudoers file.
First we need to identify which commands need to run with sudo in our script and prefix them with sudo.
We also need to replace the commands with their full paths. For instance, instead of using docker run ... we will use /usr/bin/docker run ...
Use which docker to see where the docker binary is located.
The final script should look like this:
#!/bin/bash

GIT_REPO="${HOME}/Git/blog.fr.git"
TMP_CLONE="${HOME}/tmp/blog.fr"
PUBLIC_WWW="/var/www"
PUBLIC_DIST="${PUBLIC_WWW}/dist"
PUBLIC_BLOG="${PUBLIC_WWW}/blog.fr"
# Remove the temporary directories
if [ -d "$TMP_CLONE" ]; then
echo "Removing existing directory ${TMP_CLONE}"
rm -rf $TMP_CLONE
fi
# Clone the project
git clone $GIT_REPO $TMP_CLONE
# Generate the static site
echo "Generating the static site"
sudo /usr/bin/docker run --name blog_build -v "${TMP_CLONE}:/home/node/app/" gridsome-build
# Publish the site and give the rights to www-data
echo "Publishing the site"
sudo /usr/bin/docker cp blog_build:/home/node/build/dist $PUBLIC_WWW
sudo /bin/rm -rf "${PUBLIC_BLOG}"
sudo /bin/mv "${PUBLIC_DIST}" $PUBLIC_BLOG
sudo /bin/chown -R www-data:www-data $PUBLIC_BLOG
# Tidy up
sudo /usr/bin/docker container rm blog_build
sudo /bin/rm -rf $TMP_CLONE
Our user needs to be able to run these commands without triggering a sudo password prompt. To do so we need to add some rules to the sudoers file. We edit the sudoers file with sudo visudo.
We need to add every command that was prefixed with sudo to the sudoers file.
The commands MUST be the exact same ones we used in our post-receive hook. And by that I mean that /bin/rm /tmp is different from /bin/rm /tmp/. You have been warned.
Use sudo select-editor to choose your editor.
The lines to add to the sudoers file are listed below:
# Blog publishing
sammy ALL=NOPASSWD: /usr/bin/docker run --name blog_build -v /home/sammy/tmp/blog.fr\:/home/node/app/ gridsome-build
sammy ALL=NOPASSWD: /usr/bin/docker cp blog_build\:/home/node/build/dist /var/www
sammy ALL=NOPASSWD: /bin/rm -rf /var/www/blog.fr
sammy ALL=NOPASSWD: /bin/mv /var/www/dist /var/www/blog.fr
sammy ALL=NOPASSWD: /bin/chown -R www-data\:www-data /var/www/blog.fr
sammy ALL=NOPASSWD: /usr/bin/docker container rm blog_build
sammy ALL=NOPASSWD: /bin/rm -rf /home/sammy/tmp/blog.fr
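To check that the rules work, sudo -l lists the commands your user may run, and sudo -n fails instead of prompting when a command is not whitelisted:
# List the sudo rules applying to the current user
sudo -l
# Must run without asking for a password
sudo -n /usr/bin/docker container rm blog_build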
In my case, the image I generate is a little over 1GB. That is huge. To slim down my Docker image I will use multi-stage builds. There is also a tool named docker-slim, created to reduce Docker images, but I didn't manage to make it work for my purposes.
With multi-stage builds we can use different containers when building our image. For instance we can use node:12-stretch when we install our dependencies, then switch to node:12-slim and copy the necessary files to the new container. The Dockerfile would look like this:
FROM node:12-stretch as builder
# Install dependencies, build the project, ...
FROM node:12-slim
# Copy the compiled dependencies
WORKDIR /home/node/
COPY --from=builder /home/node/app/node_modules app/node_modules
COPY --from=builder /usr/bin/lscpu /usr/bin/lscpu
# And so on
In our case the modified Dockerfile looks like this:
FROM node:12-alpine AS builder
# Install build tools
RUN apk update && apk upgrade
RUN apk --no-cache add --virtual native-deps git \
  g++ gcc libgcc libstdc++ linux-headers make python
# Install Gridsome globally
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i -g gridsome
# Install the application
COPY --chown=node:node ./ /home/node/build/
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
FROM node:12-alpine
# Remove the project files
# but keep the node modules
WORKDIR /home/node
USER node
RUN mkdir build .npm-global
COPY --from=builder /home/node/build/node_modules build/node_modules
COPY --from=builder /home/node/.npm-global .npm-global
# Get the source code without node_modules
# Then build the site
CMD cp -r app temp && \
rm -rf temp/node_modules && \
cp -r temp/* build/ && \
cd build && \
~/.npm-global/bin/gridsome build
With this simple step I slimmed down my image from 1GB to 500MB. Considering that the node_modules folder itself is around 320MB without taking Gridsome into account, and the base image is around 40MB compressed, I'd say this is pretty optimized. I guess we could probably do better but for now it will be enough.
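You can compare the two versions with docker image ls, and docker history shows which layers weigh the most:
docker image ls gridsome-build
docker history gridsome-build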
This setup is tailored to my needs, but chances are yours are very different. In this section I explore alternative ways of achieving the same goals.
In our setup we are using git hooks to watch a repository directly on our server. But you might want to watch a repository on Github or Gitlab. It turns out that Github provides a similar mechanism called webhooks, and so does Gitlab.
Here is the definition of a Webhook by Gitlab:
Webhooks are "user-defined HTTP callbacks". They are usually triggered by some event, such as pushing code to a repository or a comment being posted to a blog. When that event occurs, the source app makes an HTTP request to the URI configured for the webhook. The action taken may be anything. Common uses are to trigger builds with continuous integration systems or to notify bug tracking systems.
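I haven't set this up myself, but as a sketch: a small daemon such as webhook (available as a package on Ubuntu) can receive those HTTP callbacks and run our build script. The hook id, port and paths below are only examples; you would then point the Github or Gitlab webhook at the resulting URL.
sudo apt install webhook
# hooks.json: run build-project.sh when the endpoint is called
cat > hooks.json <<'EOF'
[
  {
    "id": "rebuild-blog",
    "execute-command": "/home/sammy/build-project.sh",
    "command-working-directory": "/home/sammy"
  }
]
EOF
# The hook is now reachable at http://yourserver.com:9000/hooks/rebuild-blog
webhook -hooks hooks.json -port 9000 -verbose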
If you need to regularly update your dependencies, this setup might not be adapted to your needs. In this case it might be best to generate a Docker image with only Gridsome installed. Then when running the container you'll run npm install before building the site.
I haven't tested it but the Dockerfile could look like this:
FROM node:12-alpine
# Install build tools
# Needed by npm install
RUN apk update && apk upgrade
RUN apk --no-cache add --virtual native-deps git \
  g++ gcc libgcc libstdc++ linux-headers make python
# Install Gridsome globally
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i -g gridsome
# We expect the app to be in /home/node/app
WORKDIR /home/node/app
USER node
# Build
CMD npm cache clean --force && \
npm clean-install && \
~/.npm-global/bin/gridsome build
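Assuming this image is also tagged gridsome-build, running it would look like this; since the build happens directly in the mounted directory, dist/ (and node_modules/) end up in your project:
docker build . -t gridsome-build
docker run --rm -v $(pwd):/home/node/app gridsome-build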
I don't recommend this setup if you push regularly, since you'll be downloading the dependencies every single time. As another alternative, you could use the setup described in this article but add a cronjob that regenerates the Docker image regularly (every two weeks for instance), as sketched below.
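The crontab entry for such a rebuild could look like this (the paths are examples); it pulls the latest code and rebuilds the image twice a month.
# m h dom mon dow  command
0 4 1,15 * * cd /home/sammy/blog && git pull && /usr/bin/docker build . -t gridsome-build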
I like having my Git repository on my server. It acts like a backup somehow. And if I need to retrieve my project I can do so from my server directly.
But maybe you don't care about having a Git repo on your server. In this case the whole process is overkill and you should just push the generated static site to your server. To do so you could simply do a scp -r dist/* yourserver.com:/var/www/blog.fr/. If you want to automate this you could use the post-commit hook.
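I haven't tested it, but such a hook (saved as .git/hooks/post-commit in your local repository and made executable) could look roughly like this:
#!/bin/sh
# Rebuild the site and upload it after every commit
gridsome build
scp -r dist/* yourserver.com:/var/www/blog.fr/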
To build our project we needed to pass the project files to the container while excluding the node_modules folder. We might get errors like 'linux-x64' binaries cannot be used on the 'linuxmusl-x64' platform when using modules built on another platform. In this alternative setup we mount the current directory but exclude node_modules directly with the run command:
docker run \
-v $(pwd):/home/node/app/ \
-v node_modules:/home/node/app/node_modules \
gridsome-build
We could have used this hack in our setup: instead of copying the source code and then deleting node_modules from the copy, we could have just used the following command in conjunction with the mount exclusion:
# Get the source code
# Then build the site
CMD cp -r app build && \
cd build && \
~/.npm-global/bin/gridsome build
But I prefer having a simple run command.
If we want to go even further, with this option we don't have to use docker cp to extract the generated site. We don't even need a distinction between ~/app that receives the source code and ~/build that holds the node modules. So our Dockerfile will be different (especially the CMD part).
# [...] the beginning stays the same
# Install the application
COPY --chown=node:node ./ /home/node/app/
# We expect the app to be in /home/node/app
WORKDIR /home/node/app
USER node
RUN npm cache clean --force
RUN npm clean-install
# Remove the project files
# but keep the node modules
RUN cd .. && \
mv app/node_modules ./ && \
rm -rf app && \
mkdir app && \
mv node_modules app/
# Build
WORKDIR /home/node/app
CMD ~/.npm-global/bin/gridsome build
Then we can use the same command with --rm since we don't need to keep the container around to extract the files.
docker run --rm \
-v $(pwd):/home/node/app/ \
-v node_modules:/home/node/app/node_modules \
gridsome-build
The generated files will then be in the dist/ folder of your project.
I didn't use this setup because in some cases it yielded mkdir: Permission denied errors when the container tried to create src/.temp or dist/assets. Nonetheless, I am sure that with some debugging, it could work properly.
A final note: if you ever need to log your post-receive hook, you can add the following code at the beginning of the file:
LOG_FILE=/tmp/postreceive.log
# Close STDOUT file descriptor
exec 1<&-
# Close STDERR FD
exec 2<&-
# Open STDOUT as $LOG_FILE file for read and write.
exec 1<>$LOG_FILE
# Redirect STDERR to STDOUT
exec 2>&1
echo "This line will appear in $LOG_FILE, not 'on screen'"
This article was rather long, and maybe I have scared some readers into using hosting services such as Netlify or Github pages. We saw how I set up my website's continuous deployment with Docker and Git, but this process can be applied to any kind of site. I actually did it multiple times to automatically publish some Django sites. I think this is viable for home projects or prototypes. For anything bigger, consider using Jenkins or another CI/CD tool.