Docker
To serve our app we need to set up the server by installing all the dependencies it requires (e.g. Python), then copy over the latest code, and finally run it. Docker makes it easy to bundle this entire combination into a container image.
A Docker image can be built using a Dockerfile, which I find to be very expressive and clear. We can use a multi-stage Dockerfile to first build the frontend assets and then create the server image.
Note
Previously I've used scripts to install all the dependencies on a computer and then copied the code over (e.g. via rsync). Docker is an improvement on this as it allows this entire process to be managed in a single concise Dockerfile.
Frontend stage
The frontend stage is used to build the frontend assets, i.e. bundle the JavaScript and CSS together. It is separate from the production image as none of the dependencies required to build the frontend are needed in production. Splitting the stages therefore reduces the image size and shrinks the code that is susceptible to attack.
As Docker images are built as a series of layers in the order given in the Dockerfile, it is best to put layers that rarely change before those that change often. Hence npm install runs before the code is copied into the image.
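The practical payoff is in rebuild times; assuming an illustrative tag, a code-only change leaves the dependency layers untouched:

```shell
# First build: every layer, including the slow npm install, is executed
docker build -t myapp .

# Edit only the application code, then rebuild: the package.json COPY
# and npm install layers are unchanged, so Docker reuses them from
# cache and only the later COPY and build steps re-run
docker build -t myapp .
```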
The following should be added to the Dockerfile in the root of our repository. Firstly, as we are using Node 15, we should create the image from a Node 15 image,
FROM node:15-alpine as frontend
we should then install all the dependencies to a /frontend/ folder,
WORKDIR /frontend/
COPY frontend/package.json frontend/package-lock.json /frontend/
RUN npm install
finally we can copy over our code and build the frontend,
COPY frontend /frontend/
RUN npm run build
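During development it can be useful to build just this stage; Docker's --target flag stops at the named stage (the myapp-frontend tag here is illustrative):

```shell
# Build only the "frontend" stage of the multi-stage Dockerfile,
# e.g. to check the asset build in isolation
docker build --target frontend -t myapp-frontend .
```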
Production image
The production image must include everything required to serve the app in production. In our case this means it must run the backend, and have the frontend assets copied to the correct locations (see the serving blueprint).
The following should be added to the Dockerfile following the frontend stage. Firstly, as we are using Python 3.9, we should create the image from a Python 3.9 image,
FROM python:3.9-alpine
then we can copy over the Hypercorn settings and instruct Docker to run the start script when the container starts,
WORKDIR /app
COPY start hypercorn.toml /app/
ENTRYPOINT ["dumb-init"]
CMD ["./start"]
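The start script itself isn't shown here; a minimal sketch might look like the following, where the src.run:app module path is a placeholder for your actual app location (using exec means Hypercorn replaces the shell, so dumb-init signals it directly):

```shell
#!/bin/sh
# Start the server with the settings copied into /app; replace
# "src.run:app" with the real module:app path for your project
exec hypercorn --config hypercorn.toml 'src.run:app'
```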
Note
We'll utilise dumb-init as our init system to ensure signals are correctly forwarded and processes exit properly.
next we can install the required system dependencies (Homebrew does this locally) and set up a Python virtual environment,
RUN apk --no-cache add alpine-sdk build-base cargo gcc git libffi-dev \
musl-dev openssl openssl-dev
RUN python -m venv /ve
ENV PATH=/ve/bin:${PATH}
and then install the specific Python dependencies our project uses,
RUN pip install --no-cache-dir dumb-init poetry
RUN mkdir -p /app/db /app/backend/static /app/backend/templates /root/.config/pypoetry
COPY backend/poetry.lock backend/pyproject.toml /app/
RUN poetry config virtualenvs.create false \
&& poetry install --no-root \
&& poetry cache clear pypi --all --no-interaction
followed by copying over the frontend bundle, as built in the frontend stage above,
COPY --from=frontend /frontend/build/index.html /app/backend/templates/
COPY --from=frontend /frontend/build/static/. /app/backend/static/
then our backend code (including the relevant database migrations),
COPY backend/db/. /app/db/
COPY backend/src/ /app/
and finally switching the user to nobody so that our code doesn't run as root in the container,
USER nobody
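With the Dockerfile complete, the image can be built and run locally; the tag and port below are illustrative, and the published port should match the bind setting in hypercorn.toml:

```shell
# Build the image from the repository root
docker build -t myapp .

# Run it, publishing the port the app listens on (8000 is an
# assumption; match the bind address in hypercorn.toml)
docker run --rm -p 8000:8000 myapp
```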