Config and scripts for deploying Montagu.
The basic idea here is that all configuration for the various components of montagu will end up here, separated by machine at present. This leads to a little duplication between machines, which is not ideal, but we can revisit that later.
The components currently described are:
`montagu.yml`
: Deployment of the montagu core (api, admin and contribution portal, db, etc.)

`packit.yml`
: Deployment of the reporting portal (packit and its runners)

`privateer.json`
: Backup and restore of the orderly volume, restoration of the database

`diagnostic-reports.yml`
: Automatic diagnostic reports (autogenerated)
Historically relevant information will be found in:
- `montagu-orderly-web`: old configuration for OrderlyWeb (this will be present for a while at least)
- `montagu`
- `montagu-db-backup`
- `starport`
and other places that we'll collect here.
We are aiming to progressively streamline this process.
This document is a bit long and rambly while we have OrderlyWeb deployed and are in the middle of migration, but will be thinned down once the migration is complete.
All commands below here are run from the `montagu-config/` directory within the machine you are working with, after `ssh`-ing into the appropriate machine.
Diagnostic report config is committed to this repo, but if you need to change it, it can be re-generated by running `scripts/generate_real_diagnostic_reports_config.py`. This will generate a new yaml file, `diagnostic-reports.yml`, in the current working directory, which can then be copied into place in the relevant instance config directory.
We require some python packages: `montagu-deploy`, `packit-deploy` and `privateer`. All are available on PyPI and can be installed with `pip3 install --user package-name`. We'll set up a `pyproject.toml` or `requirements.txt` for this repo at some point in the future, which will streamline things.
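Until then, a minimal `requirements.txt` for this repo might look like the following sketch (unpinned here; in practice you may want to pin the versions that are known to work):

```
montagu-deploy
packit-deploy
privateer
```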
Be aware that attempting to install with `pip` is not always sufficient for it to actually install anything, as it may decide that the old version you have is fine and then do nothing. You can resolve this by specifying a version (`pip3 install --user montagu-deploy==0.1.2`), uninstalling first (`pip3 uninstall montagu-deploy`) or by passing `--force-reinstall` (though this reinstalls everything).
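If you want to check whether an upgrade is actually needed before reinstalling, `sort -V` gives a version-aware comparison. A minimal sketch (the version strings below are just examples; substitute the output of the `--version` commands):

```shell
# Example versions only; in practice take these from `montagu-deploy --version`
# and the release you want to deploy.
installed="0.1.1"
target="0.1.2"

# sort -V orders version strings numerically, so the older version sorts first
lowest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | head -n1)
if [ "$installed" != "$target" ] && [ "$lowest" = "$installed" ]; then
  echo "upgrade needed: $installed -> $target"
fi
```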
You can find out what versions of things you have by running:

```
montagu-deploy --version
packit --version
privateer --version
```
If you are developing the deploy tools, you might like to run `hatch build` on the source tree and then copy the resulting `.whl` file to the machine you are working on. You can then install that package:

```
pip3 install --user packit_deploy-0.0.11-py3-none-any.whl
```

Again, watch out to see if `pip` actually installs this, and be particularly careful if you have not changed the version number.
The orderly-web-deploy tool is not currently updated on PyPI, so install that from source. We will be removing it from the deployment soonish and it is not configured in this repository.
We've removed the old orderly-to-packit migration, so this needs to be run manually.
On production, I have run
```
docker pull mrcide/outpack.orderly:main
docker run -it --rm --name outpack-migrate \
  -v montagu_orderly_volume:/orderly:ro \
  -v montagu_outpack_volume:/outpack \
  mrcide/outpack.orderly:main \
  /orderly /outpack --once
```
which took about 2 hours from scratch. This is now the source of truth, and is backed up via privateer.
Continuous migrations are not currently running.
On a first deployment (after bringing down all containers), the order matters. You need to bring up `packit` (and `OrderlyWeb` if you are using that) before `montagu`, otherwise the proxy will fail to start.
If you get a gateway error causing packit login to fail, redeploy montagu (or try just restarting the proxy container). This can happen if you redeploy packit without subsequently restarting montagu.
Assuming `uat` here:

Start OrderlyWeb from `montagu-orderly-web/` with `./start`, then:

```
packit start --pull uat
montagu start --pull uat
```
Replace `uat` with `science` or `production` on those machines.
See https://github.com/vimc/montagu-deploy for more details on the deploy tool.
After deploying montagu you will need to update the data vis tool by running:

```
./scripts/copy-vis-tool
```
To redeploy packit (e.g., after making a change), stop and start the containers using `packit-deploy`. You probably want the `--kill` argument to swiftly but rudely bring down containers, and the `--pull` argument to make sure that you get the most recent copy of the containers to deploy:

```
packit stop --kill uat
packit start --pull uat
```
Be sure to get the machine name correct (`uat`, `science` or `production`).
If you want to test a branch, you will first need to create a PR in packit so that it builds images for your branch. Then, after that image is pushed, you should return to the relevant machine (most likely you'll be doing this on `uat`) and edit the appropriate `tag:` field(s) within `uat/packit.yml`. You can do this with a local change on the machine, e.g. with `vim` or `nano`, or by making a branch in `montagu-config`, depending on the complexity of the changes.
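For illustration, the edit might look something like this hypothetical fragment of `uat/packit.yml` (the field layout and branch name here are assumptions, not the real file's structure; use the actual `tag:` fields you find in the file):

```yaml
packit:
  api:
    tag: my-feature-branch   # hypothetical branch tag built by CI
  app:
    tag: my-feature-branch
```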
If automatic certificates are enabled, you should run the `renew-certificate` command the first time you deploy montagu, to get the initial certificate:

```
montagu renew-certificate <path>
```
This command will need to be run periodically to ensure the certificate stays up-to-date. This is done by installing a systemd timer, using the provided `install-timer.sh` script:

```
./scripts/install-timer.sh <path>
```
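For context, a systemd timer for this kind of periodic job generally looks roughly like the fragment below. The unit name and schedule here are purely illustrative assumptions; `install-timer.sh` is the source of truth for what actually gets installed:

```ini
# montagu-renew-certificate.timer (hypothetical unit name)
[Unit]
Description=Periodically renew the montagu certificate

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```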
When migrating over from `orderly-web` you may need to bring down an older version of OrderlyWeb (e.g., by running `./stop` from within `montagu-orderly-web`). However, the serialised configuration is incompatible with newer versions of `orderly-web-deploy` and `constellation`. The safest fix is to create a virtual environment and work there:
```
cd montagu-orderly-web
python3 -m venv old-ow
. ./old-ow/bin/activate
pip3 install constellation==1.2.4 orderly-web==1.0.0
./stop
deactivate
```
Once brought down with the old version, you can then use the globally installed version of `orderly-web`.
See `backup.md` for details on this process. See `rebuild.md` for an account of rebuilding the systems in 2025.
Typically this is done on `uat` only, but occasionally it will be needed on `science`. Avoid testing new features on `production` as that is externally visible, and may be in use by an external partner.
For the component under test, edit the appropriate file in `uat/`, e.g., `uat/packit.yml`. Each component has a `tag` field, which can be used to target an in-development branch building on CI (you may need to have made a PR to trigger these builds). For `packit`, be sure to edit the tag for both `api` and `app` if these are both required. You can make these edits live on the machine in question (`vi` and `nano` are both installed), and then deploy as above with:

```
packit stop uat
packit start --pull uat
```
You will need your GitHub access token for this process. Using `stop --kill` will make things a bit faster to shut down, with potentially more risk of corrupting data, though we've not seen any evidence of a downside.