See also: Hetzner Cloud Playbook, VirtualBox Playbook.
This repository contains setup scripts for all servers required to host an attack-defense CTF with the saarCTF framework. It consists of three server types:
- Gameserver (Controller): Databases, Scoreboard, Flag submitter, Control panels, Monitoring
- VPN: OpenVPN, Routing, Firewall, Anonymization
- Checker: Checker script runner (multiple instances possible)
You need a git configuration repository and a local config.json file for the build process.
See Configuration for details.
To prepare server images (in VirtualBox for tests):
cd debian ; packer build -var-file=../config.json -only=virtualbox-iso buster.json
cd basis ; packer build -var-file=../config.json -only=virtualbox-ovf basis.json
cd controller ; packer build -var-file=../config.json -only=virtualbox-ovf controller.json
cd vpn ; packer build -var-file=../config.json -only=virtualbox-ovf vpn.json
cd checker ; packer build -var-file=../config.json -only=virtualbox-ovf checker.json
By default these IPs are used:
- VPN Gateway: 10.32.250.1
- Gameserver: 10.32.250.2
- Checker server: 10.32.250.3+
- Team N: 10.32.N.0/24 (see the helper sketch below)
- NOP team: 10.32.1.0/24
- Organizer network: 10.32.0.0/24
Interesting commands: update-server (pulls updates and rebuilds components).
Checker scripts belong on this machine (/home/saarctf/checkers, owned by user saarctf).
The CTF Timer needs a manual start (on exactly one machine); new hosts must be added to Prometheus manually for monitoring (/root/prometheus-add-server.sh <ip>).
Several useful scripts are in /root; check them out.
This image can be reconfigured to fill almost any role by using the scripts in /root to disable individual components. Possible setups include a backup server (databases turned into slaves replicating the original databases), a dedicated management/monitoring server (database as replication slave, monitoring disabled on the original host), or a dedicated scoreboard/submitter server (database off, monitoring off, systemctl start scoreboard).
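As a hedged sketch of the last setup (the scripts in /root are the intended mechanism; the service names below are either taken from this document or assumed stock Debian names):

systemctl disable --now postgresql            # db off (stock Debian service name assumed)
systemctl disable --now prometheus grafana    # monitoring off
systemctl enable --now scoreboard submission-server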
PostgreSQL (:5432), Redis (:6379), RabbitMQ.
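A quick check that the backing services are running (stock Debian service names assumed):

systemctl status postgresql redis-server rabbitmq-server --no-pager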
Flask app running under uwsgi as user saarctf, with an Nginx frontend.
http://<ip>:8080/
- Restart: systemctl restart uwsgi
- Logs: /var/log/uwsgi/app/controlserver.log
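A quick sanity check after a restart, combining the commands and paths above:

systemctl restart uwsgi
tail -f /var/log/uwsgi/app/controlserver.log    # watch for startup errors
curl -I http://<ip>:8080/                       # nginx should answer with an HTTP status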
Tornado app (systemd), Nginx frontend.
http://<ip>:8081/ or http://127.0.0.1:20000
- Restart: systemctl restart flower
- Logs: /var/log/flower.log
PHP app served by Nginx, running as user saarctf.
http://<ip>:8082/
- Restart: systemctl restart php7.3-fpm nginx
- Logs: /opt/icecoder/data/error.log, /var/log/nginx/error.log, /var/log/fpm-php.www.log, /var/log/php7.3-fpm.log
Static folder with files, served by nginx.
http://<ip>/
- Restart: systemctl restart nginx
- Logs: /var/log/nginx/access.log and /var/log/nginx/error.log
- Config: /etc/nginx/sites-available/scoreboard
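After editing the scoreboard site config, validate and reload Nginx:

nginx -t && systemctl reload nginx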
The scoreboard is created automatically if the CTF timer is running on this machine. If not, use the scoreboard daemon instead (systemctl start scoreboard, but never in parallel with the CTF timer).
Needs manual start
Triggers time-based events (new round, scoreboard rebuild, start/stop of CTF). Exactly one instance on one server must run at any time.
- Start: systemctl start ctftimer
- Restart: systemctl restart ctftimer
- Logs: /var/log/ctftimer.log (interesting messages are usually also shown in the control panel dashboard)
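To confirm that only one instance is running, check each candidate server:

systemctl is-active ctftimer    # should report "active" on exactly one host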
C++ application that receives flags from teams.
nc <ip> 31337
- Restart: systemctl restart submission-server
- Rebuild/Update: update-server
- Logs: /var/log/submission-server.log
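For a manual smoke test (assuming the submitter reads newline-separated flags; the flag shown is a placeholder, not the real format):

echo 'SAAR{placeholder-flag}' | nc <ip> 31337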
Monitors itself, Grafana and localhost by default; other servers should be added manually using /root/prometheus-add-server.sh <ip>. Results can be seen in Grafana.
http://localhost:9090/
- Restart: systemctl restart prometheus
- Logs: journalctl -u prometheus
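For example, to register the VPN gateway and the first checker server (default IPs from above):

/root/prometheus-add-server.sh 10.32.250.1
/root/prometheus-add-server.sh 10.32.250.3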
Configured to display stats from the database and Prometheus.
http://<ip>:3000/
- Restart: systemctl restart grafana
- Logs: /var/log/grafana/grafana.log
- Config: /etc/grafana/grafana.ini
tcpdump needs manual start/stop.
Runs many OpenVPN instances and handles network management. OpenVPN configuration files should be (re)built at least once on this machine.
One server per team, managed by systemd. The service name is vpn@teamXYZ.
- Server for Team X listening on <ip>:10000+X (UDP)
- Interface of Team X: tunX
- Start one / all: systemctl start vpn@teamX / systemctl start vpn
- Restart one / all: systemctl restart vpn@teamX / systemctl restart vpn vpn@\*
- Stop one / all: systemctl stop vpn@teamX / systemctl stop vpn vpn@\*
- Logs: /var/log/openvpn/output-teamX.log and /var/log/openvpn/openvpn-status-teamX.log
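Concrete example for team 42 (per the scheme above: UDP port 10042, interface tun42):

systemctl start vpn@team42
tail -f /var/log/openvpn/output-team42.log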
Writes traffic summary to database.
- Restart: systemctl restart trafficstats
- Logs: /var/log/trafficstats.log
Based on iptables.
Edit /opt/gameserver/vpn/iptables.sh if you need to change something permanently.
On restart, the INPUT and FORWARD chains are replaced.
Inserts rules for NAT and TCP timestamp removal.
- Restart: systemctl restart firewall
- Logs: /var/log/firewall.log
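For illustration only (the actual rules live in /opt/gameserver/vpn/iptables.sh; the subnet and outgoing interface below are assumptions), NAT and TCP timestamp stripping look roughly like this:

iptables -t nat -A POSTROUTING -s 10.32.0.0/16 -o eth0 -j MASQUERADE
iptables -t mangle -A FORWARD -p tcp --syn -j TCPOPTSTRIP --strip-options timestamp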
Inserts rules into iptables that open/close the VPN or ban individual teams.
- Restart: systemctl restart manage-iptables
- Logs: /var/log/firewall.log
Needs manual start
Captures traffic: game traffic (between gameservers and teams) and team traffic (between teams).
- Start: systemctl start tcpdump-game tcpdump-team
- Restart: systemctl restart tcpdump-game tcpdump-team
- Logs: /var/log/tcpdump.log
Website that displays connection status of VPN connections and tests connectivity using ping.
The website itself is served by Nginx.
Includes a background worker (service vpnboard-celery).
- Restart: systemctl restart vpnboard
- Logs: /var/log/vpnboard.log
Runs only a Celery worker. No checker scripts need to be placed on this machine.
Needs a manual start, because each Celery worker needs a unique name.
After creation, run celery-configure <SERVER-NUMBER>.
From then on, the Celery worker will start on boot.
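Example for checker server number 3:

celery-configure 3          # one-time setup; the worker name must be unique
systemctl restart celery    # (re)start the worker now, then check the logs below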
- Configuration: /etc/celery.conf
- Restart: systemctl restart celery
- Logs: /var/log/celery/XYZ.log
Manual worker invocation:
screen -R celery
celery-run <unique-hostname> <number-of-processes>