Move into separate folder
infra/services/prometheus/README.md
@@ -3,9 +3,10 @@
 Install `prometheus.service` on an instance if it is running something that
 exports custom Prometheus metrics. In particular, museum does.
 
-Also install `node-exporter.service` (after installing
-[node-exporter](https://prometheus.io/docs/guides/node-exporter/) itself) if it
-is a production instance whose metrics (CPU, disk, RAM etc) we want to monitor.
+If it is an instance whose metrics (CPU, disk, RAM, etc.) we want to monitor,
+also install `node-exporter.service` after installing
+[node-exporter](https://prometheus.io/docs/guides/node-exporter/) itself. (Note
+that our prepare-instance script already installs node-exporter.)
 
 ## Installing
 
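Enabling the two units once they are in place is plain systemd usage; a minimal sketch, assuming the unit files have already been copied to `/etc/systemd/system`:

```sh
# Minimal sketch, assuming prometheus.service and node-exporter.service have
# already been copied to /etc/systemd/system.
sudo systemctl daemon-reload
sudo systemctl enable --now prometheus.service node-exporter.service
```
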
@@ -14,7 +15,7 @@ remember to change the hardcoded `XX-HOSTNAME` too in addition to adding the
 `remote_write` configuration.
 
 ```sh
-scp -P 7426 services/prometheus/* <instance>:
+scp services/prometheus/* <instance>:
 
 nano prometheus.yml
 sudo mv prometheus.yml /root/prometheus.yml
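
For context, the `remote_write` configuration that the hunk header mentions might look roughly like this. Everything here is a placeholder sketch (the endpoint, auth scheme, and label name are assumptions); `XX-HOSTNAME` is the hardcoded value the README says to change:

```yaml
# Hypothetical prometheus.yml fragment; the real endpoint and credentials differ.
global:
  external_labels:
    instance: XX-HOSTNAME # replace with this instance's hostname

remote_write:
  - url: "https://prometheus.example.org/api/v1/write" # placeholder endpoint
    basic_auth:
      username: "writer" # placeholder credentials
      password: "..."
```
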
infra/services/promtail/README.md
@@ -9,7 +9,7 @@ Replace `client.url` in the config file with the Loki URL that Promtail should
 connect to, and move the files to their expected place.
 
 ```sh
-scp -P 7426 services/promtail/* <instance>:
+scp services/promtail/* <instance>:
 
 nano promtail.yaml
 sudo mv promtail.yaml /root/promtail.yaml
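
Similarly, the `client.url` that this README refers to lives under Promtail's `clients` section; a sketch with a placeholder Loki URL:

```yaml
# Hypothetical promtail.yaml fragment; point `url` at the real Loki instance.
clients:
  - url: "https://loki.example.org/loki/api/v1/push"
```
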
server/README.md
@@ -95,9 +95,10 @@ setup we ourselves use in production.
 > [!TIP]
 >
-> On our production servers, we wrap museum in a [systemd
-> service](scripts/museum.service). Our production machines are vanilla Ubuntu
-> images, with Docker and Promtail installed. We then plonk in this systemd
-> service, and use `systemctl start|stop|status museum` to herd it around.
+> On our production servers, we wrap museum in a [systemd
+> service](scripts/deploy/museum.service). Our production machines are vanilla
+> Ubuntu images, with Docker and Promtail installed. We then plonk in this
+> systemd service, and use `systemctl start|stop|status museum` to herd it
+> around.
 
 Some people new to Docker/Go/Postgres might have general questions though.
 Unfortunately, because of limited engineering bandwidth **we will currently not

server/scripts/deploy/README.md (new file, 73 lines)
@@ -0,0 +1,73 @@

# Production Deployments

Museum runs using Docker + systemd on production instances, load balanced via
Cloudflare.

This document outlines how we ourselves deploy museum. Note that this is very
specific to our use case; while it might be useful as an example, it is likely
overkill for simple self-hosted deployments.

## Overview

We use museum's Dockerfile to build images, which we then run on vanilla
Ubuntu servers (plus Docker installed). For ease of administration, we wrap
the Docker commands to start/stop/update it in a systemd service.

* The production machines are vanilla Ubuntu instances, with Docker and
  Promtail installed.

* There is a [GitHub action](../../../.github/workflows/server-release.yml) to
  build museum Docker images using its Dockerfile.

* We wrap the commands to start and stop containers using these images in a
  systemd service.

* We call this general pattern of standalone Docker images managed using
  systemd "services". More examples and details
  [here](../../../infra/services/README.md).

* So museum is a "service". You can see its systemd unit definition in
  [museum.service](museum.service); a rough sketch of the pattern follows this
  list.

* On the running instance, we use `systemctl start|stop|status museum` to
  manage it.

* The service automatically updates itself on each start. There's also a
  convenience [script](update-and-restart-museum.sh) that pre-downloads the
  latest image to further reduce the delay during a restart.
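
As referenced in the list above, here is a rough sketch of the unit pattern.
This is not the actual [museum.service](museum.service); ports, volume mounts,
and environment are omitted, and only the image name (taken from the update
script) is real:

```ini
# Illustrative sketch only; see museum.service for the real definition.
[Unit]
Description=Museum (sketch)
Requires=docker.service
After=docker.service

[Service]
Restart=always
# The service updates itself on each start by pulling the latest image.
ExecStartPre=/usr/bin/docker pull rg.fr-par.scw.cloud/ente/museum-prod
ExecStart=/usr/bin/docker run --rm --name museum rg.fr-par.scw.cloud/ente/museum-prod
ExecStop=/usr/bin/docker stop museum

[Install]
WantedBy=multi-user.target
```
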
## Installation

To bring up an additional museum node:

* Prepare the instance to run our services.

* Set up the [promtail](../../../infra/services/promtail/README.md) and
  [prometheus and node-exporter](../../../infra/services/prometheus/README.md)
  services.

* Add credentials. Each `tee` below reads the file's contents from stdin:
  paste them in, then end with Ctrl-D.

      sudo mkdir -p /root/museum/credentials
      sudo tee /root/museum/credentials/tls.cert
      sudo tee /root/museum/credentials/tls.key
      sudo tee /root/museum/credentials/pst-service-account.json
      sudo tee /root/museum/credentials/fcm-service-account.json
      sudo tee /root/museum/credentials.yaml

* Copy the service definition and restart script to the new instance. The
  restart script can remain in the ente user's home directory. Move the
  service definition to its proper place. (See the sketch after this list for
  a hypothetical end-to-end flow.)

      scp </path-to-museum>/scripts/deploy/museum.service <instance>:
      scp update-and-restart-museum.sh <instance>:

      sudo mv museum.service /etc/systemd/system
      sudo systemctl daemon-reload
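
As referenced above, a hypothetical end-to-end sketch of these steps when
driven from a local machine (the credential file name is a placeholder;
`ssh -t` allocates a tty so sudo can prompt):

```sh
# Hypothetical: stage a credential file, then move it into place.
scp tls.cert <instance>:
ssh -t <instance> 'sudo mv tls.cert /root/museum/credentials/tls.cert'

# Sanity check that systemd picked up the unit after the daemon-reload.
ssh <instance> 'systemctl list-unit-files | grep museum'
```
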
## Starting

SSH into the instance, and run

    ./update-and-restart-museum.sh

This'll ask for sudo credentials, pull the latest Docker image, restart the
museum service and start tailing the logs (as a sanity check).

server/scripts/deploy/update-and-restart-museum.sh (new executable file, 15 lines)
@@ -0,0 +1,15 @@

#!/bin/sh

# This script is meant to be run on the production instances.
#
# It will pull the latest Docker image, restart the museum process and start
# tailing the logs.

set -o errexit

# The service file also does this, but we pre-pull here too to minimize downtime.
sudo docker pull rg.fr-par.scw.cloud/ente/museum-prod

sudo systemctl restart museum
sudo systemctl status museum | more
sudo tail -f /root/var/logs/museum.log