Code for running self-hosted services using podman and ansible
Want to know more about self-hosting a media server? Check out the docs
```mermaid
flowchart TB
    subgraph internet
        http_client
        wireguard_client
        subgraph github[GitHub]
            github_action_runner
        end
        dns_name_server[DNS Name Server]
    end
    subgraph lan_network
        subgraph media_server
            intel_amt[Intel AMT]
            subgraph container_network
                caddy -- reverse proxy --> applications
            end
            server_port -- 80 and 443 --> caddy
        end
        subgraph pc
            windows
        end
    end
    subgraph openwrt_router
        http_client --> port_forward
        github_action_runner --> wireguard
        wireguard_client --> wireguard
        port_forward -- 80 and 443 --> server_port
        dns_name_server <-- update dynamic public IPv4 --> ddns_client_v4
        dns_name_server <-- update dynamic public IPv6 prefix --> ddns_client_v6
    end
    wireguard --> lan_network
```
```mermaid
flowchart TB
    subgraph media_server
        subgraph os_disk
        end
        subgraph data_disk
            subgraph storage_disk
                storage_disk_1
                storage_disk_2
            end
            subgraph parity_disk
                parity_disk_1
            end
        end
    end
```
- Always run the partition playbook with `--check` first:

  ```shell
  ansible-playbook partition --check
  ```
- My server hardware failed recently due to a botched BIOS upgrade from fwupd, so I had to migrate the server to my current PC for temporary usage. Luckily QEMU came to the rescue, and setting it to boot all the disks was possible. I did not write code to set up the QEMU machine since it's only a temporary solution until I build a new NAS, so I figure I'll write some tips here for my future self for reference, should this ever happen again.
- Setup QEMU for the Linux machine
  - On bazzite I just use `ujust setup-virtualization` to:
    - Enable Virtualization
    - Add user to the libvirt group
    - Install virt-manager as a user Flatpak
    - Enable VFIO drivers
  - Setup a bridge interface for the machine (with `eno1` being the network interface):

    ```shell
    nmcli con down eno1
    nmcli con delete eno1
    nmcli con add type bridge ifname br0
    nmcli con add type bridge-slave ifname eno1 master br0
    ```

  - Setup the machine in virt-manager:
    - Manual Install
    - CoreOS (or whatever Linux flavor)
    - Memory + CPUs
    - Untick `Enable storage for this virtual machine`
    - Tick `Customize configuration before install`
    - NIC:
      - Network Source: Bridge device
      - Device name: `br0`
      - MAC Address: use the old server's MAC address so it works with the router's static DHCP lease
    - Add these PCI host devices:
      - The OS SSD
      - The motherboard SATA controller (AHCI)
      - This is needed so the disks are passed through into the VM, the same as on the previous server
    - Autostart the machine with the following command (the virt-manager option in the UI doesn't work for some reason):

      ```shell
      sudo virsh autostart <machine-name>
      ```
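The virt-manager steps above can also be expressed as a single `virt-install` command. A rough sketch, not the exact setup used here: the VM name, memory/CPU sizing, MAC address, and PCI addresses are placeholders to replace with your own (find PCI addresses with `lspci -D`):

```shell
# rough virt-install equivalent of the virt-manager steps above
# (name, sizes, MAC, and PCI addresses are all placeholders)
virt-install \
  --name media-server \
  --osinfo generic \
  --memory 8192 --vcpus 4 \
  --import --disk none \
  --boot uefi \
  --network bridge=br0,mac=aa:bb:cc:dd:ee:ff \
  --hostdev 0000:02:00.0 \
  --hostdev 0000:00:1f.2 \
  --noautoconsole
```

`--import --disk none` skips the installer and boots straight from the passed-through OS SSD, matching the "no virtual storage, PCI passthrough" configuration above.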
- It's possible to control the PC remotely from the BIOS via Intel AMT
  - Setup:
    - Enable Intel AMT
    - Enable the integrated GPU in the BIOS in case a discrete GPU (NVIDIA, AMD) is used
    - Set the integrated GPU as the default GPU in the BIOS
  - Use Intel software for setting up KVM (remote mouse and keyboard) to the PC. For a cross-platform open-source solution, check out MeshCentral in a container
- I used an HP Z230 for the server with an NVMe drive in the PCIe slot.
  - The mainboard does not allow booting from the PCIe slot directly, so I have to boot from Clover installed on a USB drive.
    - It can be installed by downloading the release from the GitHub page and burning the ISO to the USB. The file name is `CloverISO-<revision>.tar.lzma`
    - After burning the ISO to the USB, copy `EFI\CLOVER\drivers\off\NvmExpressDxe.efi` to `EFI\CLOVER\drivers\UEFI`
    - In the BIOS, set the boot order to boot from USB first
    - Then set the following settings in the BIOS:
      - Advanced -> Option ROM Launch Policy -> Storage Options ROM -> UEFI Only
  - In the BIOS, enable WOL via Advanced -> Device Options -> S5 Wake on LAN
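With S5 Wake on LAN enabled, the server can be powered on from another machine on the LAN by sending a magic packet. A sketch assuming the `wakeonlan` tool is installed; the MAC address is a placeholder for the server NIC's address:

```shell
# wake the server from another LAN host (placeholder MAC address)
wakeonlan aa:bb:cc:dd:ee:ff
```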
https://github.com/ublue-os/ucore
- The KOReader OPDS client requires the `/opds` path of the calibre content server
- The calibre content server authentication needs to be `digest` for KOReader OPDS
- To upgrade the postgres major version, do the following:
  - Change the `postgres_action` key in the variable files to `export` and run the playbook for that container
  - Change the image tag to the next major version
  - Change the `postgres_action` key to `import` and run the playbook for that container
  - Change the `postgres_action` key to `none` and run the playbook for that container
  - Check if the container starts up correctly
  - Change the `postgres_action` key to `clean` and run the playbook to clean up the previous backup
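Assuming each step maps to one playbook run, the flow looks roughly like the sketch below. The `immich` tag is a hypothetical example, and overriding `postgres_action` via `--extra-vars` instead of editing the variable file is an assumption, not necessarily how this repo's variables are wired:

```shell
# hypothetical sketch of the major-version upgrade flow for one container
ansible-playbook main.yaml --tags immich --extra-vars 'postgres_action=export'
# now bump the postgres image tag in that container's variable file
ansible-playbook main.yaml --tags immich --extra-vars 'postgres_action=import'
ansible-playbook main.yaml --tags immich --extra-vars 'postgres_action=none'
podman ps   # verify the container is up before cleaning the backup
ansible-playbook main.yaml --tags immich --extra-vars 'postgres_action=clean'
```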
- Source the secret files into the current shell:

  ```shell
  # Fish shell
  . (sed 's/^/export /' /etc/restic/restic.env | psub)
  ```

- Get the list of snapshots:

  ```shell
  restic snapshots
  ```

- Restore the files (remove the `--dry-run` once satisfied):

  ```shell
  restic restore <snapshot-id> --target / --include <absolute-path-to-restore> -v --dry-run
  ```
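On bash or plain POSIX sh (no `psub`), the same env file can be loaded with the shell's allexport flag instead of prefixing every line with `export`. A minimal sketch using a throwaway file; on the server the file would be `/etc/restic/restic.env`, assumed to contain plain `KEY=value` lines:

```shell
# demo with a throwaway env file; use /etc/restic/restic.env on the server
env_file=$(mktemp)
echo 'RESTIC_REPOSITORY=/srv/backup/repo' > "$env_file"

set -a              # allexport: every variable assigned while on is exported
. "$env_file"
set +a

echo "$RESTIC_REPOSITORY"   # prints /srv/backup/repo
```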
- Consult this
- Rebuild nextcloud:

  ```shell
  ansible-playbook main.yaml --tags nextcloud
  ```

- Run the following command to enable the adminer container for accessing the postgres database:

  ```shell
  ansible-playbook main.yaml --tags nextcloud --extra-vars '{"debug":true}'
  ```
- Check if the container exists as external storage in podman, then remove that container:

  ```shell
  podman ps --external
  ```

- Reference
- Check this
- Run this command in the Video folder:

  ```shell
  find . -type f -links 1 ! -name "*.srt" -print
  ```
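The `-links 1` test matches files whose data has no other hard link, i.e. files nothing else still points at. A small self-contained demo of the predicate (the file names are made up for illustration):

```shell
# demo: a hard-linked file has link count 2, an orphan has 1,
# so `-links 1` lists only the file nothing else references
tmp=$(mktemp -d)
touch "$tmp/orphan.mkv" "$tmp/linked.mkv"
ln "$tmp/linked.mkv" "$tmp/copy.mkv"
find "$tmp" -type f -links 1 ! -name "*.srt" -print   # prints only .../orphan.mkv
```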
- If the server is reinstalled, some steps need to be taken:
  - Podman: reset podman for the rootless user:

    ```shell
    podman system reset
    ```
- Renovate can be visited at: https://developer.mend.io
- Why HP Z230?
  - The PC itself is a bit old, and the BIOS is no longer updated. However, it is good for home usage for the following reasons:
    - Can be cheaply built with a Xeon E3-1230v3 CPU
    - Has 4 DDR3 DIMM slots and supports ECC memory; DDR4 ECC memory can be expensive
    - Has 2 GPU slots, though I don't really need SLI
    - Has Intel AMT support, so I can have headless remote access to the BIOS for troubleshooting
  - It has some annoyances, however:
    - The mainboard has no NVMe slot and does not allow booting from the PCIe slot directly, but this can be solved via the Clover boot option above
    - Has little room for HDDs (2 by default), but this can be solved by using an HDD cage