To set up a new Tor public bridge (e.g. with the hostname `my_bridge`), do:

- clone this repo

  ```shell
  git clone https://github.com/toralf/tor-relays.git
  cd ./tor-relays
  ```

- run

  ```shell
  bash ./bin/base.sh
  ansible-playbook playbooks/ca.yaml -e @secrets/local.yaml --tags ca
  ```

  to create seeds, the local dirs `~/tmp` and `./secrets`, and a self-signed Root CA

- add your bridge to the Ansible group `tor`:

  ```yaml
  ---
  tor:
    hosts:
      my_bridge:
  ```

  Take a look into the examples for an Ansible inventory using the Hetzner cloud.

- deploy it

  ```shell
  ./site.yaml --limit my_bridge
  ```

- inspect it:

  ```shell
  grep "my_bridge" ~/tmp/tor-relays/*
  ls ~/tmp/tor-relays/**/my_bridge*
  ```

- enjoy it
The deployment is made by Ansible.
The Ansible role expects a `seed_address` value to change the IPv6 address of a Hetzner system to a reliably randomized one (at IONOS a proposed address is displayed, but not set).
For Tor relays the DDoS solution of torutils is used.
For Snowflake and Nginx instances a lightweight version of that ruleset is deployed.
To deploy additional software, configure it (e.g. for a Quassel server) like:
```yaml
hosts:
  my_system:
    additional_ports:
      - "4242"
    additional_software:
      - "quassel-core"
```
The default branch is defined by the variable `<...>_git_version`. The variable `<...>_patches` might contain a list of URIs to apply additional patches on the fly.
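For example, both variables could be set together in the inventory like this (a hypothetical sketch: the `snowflake_` prefix follows the naming scheme above, and the patch URL is a placeholder):

```yaml
# Hypothetical values -- the variable prefix and the patch URL are placeholders:
snowflake_git_version: "main"
snowflake_patches:
  - "https://example.org/0001-fix-build.patch"
```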
If a Prometheus server is configured (`prometheus_server`), then inbound traffic from its IP to the local metrics port is permitted by a firewall allow rule (code).
The metrics port is pseudo-randomly chosen using `seed_metrics`.
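The shape of such an allow rule can be sketched as follows (an illustration only, not the playbook's actual rule; the IP and port values are placeholders):

```shell
# Build the kind of iptables allow rule meant above (illustration only;
# prometheus_server and metrics_port are placeholder values).
prometheus_server="1.2.3.4"
metrics_port="12345"
rule="-A INPUT -p tcp -s ${prometheus_server} --dport ${metrics_port} -j ACCEPT"
echo "iptables ${rule}"
```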
Nginx is used to encrypt the data in transit (code) using the certificate of the self-signed Root CA (code).
The Root CA certificate has to be put into the Prometheus config to enable scraping metrics via TLS.
```yaml
snowflake:
  vars:
    metrics_port: "{{ range(16000,60999) | random(seed=seed_metrics + inventory_hostname + ansible_facts.default_ipv4.address + ansible_facts.default_ipv6.address) }}"
    snowflake_metrics: true
    prometheus_server: "1.2.3.4"
```
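The point of seeding the `random` filter is that the same seed material always yields the same port, so the port survives re-runs of the playbook. A rough shell analogue (not the Jinja2 algorithm; the seed string is a placeholder):

```shell
# Derive a stable pseudo-random port in 16000..60999 from a seed string
# (conceptual analogue of the seeded Jinja2 random() filter, not the same algorithm).
seed="seed_metrics+my_bridge+192.0.2.1+2001:db8::1"   # placeholder seed material
hash=$(printf '%s' "${seed}" | sha256sum | cut -c1-8)
port=$(( 16000 + 0x${hash} % 45000 ))                 # 45000 values: 16000..60999
echo "${port}"
```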
A Prometheus node exporter is deployed if `node_metrics: true` is set.
For Prometheus config examples and Grafana dashboards take a look at this repository.
A static Prometheus config could look like this:
```yaml
- job_name: "Nodes"
  metrics_path: "/metrics-node"
  scheme: https
  tls_config:
    ca_file: "RootCA.crt"
  file_sd_configs:
    - files:
        - "targets_nodes.yaml"
  params:
    collect[]:
      - conntrack
      - cpu
      - filesystem
      - loadavg
      - meminfo
      - netdev
      - netstat
      - vmstat

- job_name: "Tor-Snowflake-hx"
  metrics_path: "/metrics-snowflake"
  scheme: https
  tls_config:
    ca_file: "RootCA.crt"
  file_sd_configs:
    - files:
        - "targets_snowflake-hx.yaml"
  relabel_configs:
    - source_labels: [__address__]
      target_label: instance
      regex: "([^:]+).*"
      replacement: "${1}"
```
The targets lines for the Prometheus config are put into `~/tmp/tor-relays/*-targets.yaml`.
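A file with such target lines could look like this (hypothetical host name and port):

```yaml
# Hypothetical file_sd targets entry -- host and port are placeholders:
- targets:
    - "my_bridge:12345"
```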
To create a new VPS at the Hetzner cloud with the hostname `my_bridge` under the project `my_project`, do:

```shell
hcloud context use my_project
./bin/create-server.sh my_bridge
```
The script `./bin/update-dns.sh` expects unbound as the local DNS resolver and OpenRC as the init system, configured for the appropriate project:

```
include: "/etc/unbound/hetzner-<project>.conf"
```

(hcloud uses the term "context" for a project)
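The included file could then hold the `local-data` records for the project's hosts, e.g. (hypothetical host name and addresses):

```
# Hypothetical records -- name and addresses are placeholders:
local-data: "my_bridge. A 192.0.2.10"
local-data: "my_bridge. AAAA 2001:db8::10"
```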
The scripts under `./bin` work for the Hetzner Cloud API.
With the inventory given in the examples, a `git bisect` to identify e.g. a Linux kernel issue basically looks like:

```shell
name=hn0d-intel-main-bp-cl-0
good=v6.16-rc2
bad=HEAD

cd ~/devel/tor-relays
./bin/create-server.sh ${name}

cd ~/devel/linux
git bisect start --no-checkout
git bisect good ${good}
git bisect bad ${bad}
git bisect run ~/devel/tor-relays/bin/bisect.sh ${name}
git bisect log
git bisect reset
```
If you have a big inventory then increase your ulimits, e.g.:

```shell
ulimit -S -n 4096
```