Selfhost Server

Deployment

Code for running self-hosted services using Podman and Ansible

Want to know more about self-hosting a media server? Check out the docs

Infrastructure graph

Networking

flowchart TB
  subgraph internet
    http_client
    wireguard_client
    subgraph github[GitHub]
      github_action_runner
    end
    dns_name_server[DNS Name Server]
  end
  subgraph lan_network
    subgraph media_server
      intel_amt[Intel AMT]
      subgraph container_network
        caddy -- reverse proxy --> applications
      end
      server_port -- 80 and 443 --> caddy
    end
    subgraph pc
      windows
    end
  end
  subgraph openwrt_router
    http_client --> port_forward
    github_action_runner --> wireguard
    wireguard_client --> wireguard
    port_forward -- 80 and 443 --> server_port
    dns_name_server <-- update dynamic public IPv4 -->  ddns_client_v4
    dns_name_server <-- update dynamic public IPv6 prefix -->  ddns_client_v6
  end
  wireguard --> lan_network

Data

flowchart TB
  subgraph media_server
    subgraph os_disk
    end
    subgraph data_disk
      subgraph storage_disk
      storage_disk_1
      storage_disk_2
      end
      subgraph parity_disk
      parity_disk_1
      end
    end
  end

Note

  • Always run the partition playbook with --check first

    ansible-playbook partition --check

PC

Mounting server into PC with libvirt

  • My server hardware failed recently due to a botched BIOS upgrade from fwupd, so I had to migrate the server to my current PC for temporary use. Luckily QEMU came to the rescue, and it was possible to boot all the disks in a VM. I did not write code to set up the QEMU machine since it's only a temporary solution until I build a new NAS, so I'm writing some tips here for my future self in case this ever happens again
    1. Set up QEMU for a Linux machine
      • On Bazzite I just use ujust setup-virtualization
        • Enable Virtualization
        • Add user to libvirt group
        • Install the virt-manager as user Flatpak
        • Enable VFIO drivers
    2. Set up a bridge interface for the machine (with eno1 being the network interface)
      • nmcli con down eno1
      • nmcli con delete eno1
      • nmcli con add type bridge ifname br0
      • nmcli con add type bridge-slave ifname eno1 master br0
    3. Set up the machine in virt-manager
      • Manual Install
      • CoreOS (or whatever Linux flavor)
      • Memory + CPUS
      • Untick Enable storage for this virtual machine
      • Tick Customize configuration before install
        • NIC
          • Network Source: Bridge device
          • Device name: br0
          • MAC Address: Use the old server's MAC address so it works with the router's static DHCP leases
        • Add these PCI host devices
          • The OS SSD
          • The motherboard SATA controller (AHCI)
            • This is needed so that the disks are passed through to the VM, the same as on the previous server
        • Autostart the machine with the following command (the virt-manager autostart option in the UI doesn't work for some reason)
          • sudo virsh autostart <machine-name>
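Step 2's bridge setup can be sketched as one sequence; note that after adding the bridge-slave, the bridge itself still needs to be brought up (eno1 as the interface name from above; the br0 connection name is an assumption):

```shell
# Recreate eno1 as a bridge port under br0 (names assumed from the steps above)
nmcli con down eno1
nmcli con delete eno1
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname eno1 master br0
# Bring the bridge up so the VM's NIC can attach to it
nmcli con up br0
```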

Intel AMT support

  • It's possible to control the PC remotely from the BIOS via Intel AMT
  • Setup:
  1. Enable Intel AMT
  2. Enable the integrated GPU in the BIOS if using a discrete GPU (NVIDIA, AMD)
  3. Set the integrated GPU as the default GPU in the BIOS
  4. Use Intel's software to set up KVM (remote mouse and keyboard) access to the PC. For a cross-platform open-source solution, check out MeshCentral in a container

Boot from NVMe

  • I used an HP Z230 for the server with an NVMe drive in a PCIe slot.
  • The mainboard does not allow booting from a PCIe slot directly, so I have to boot from Clover installed on a USB drive.
    • It can be installed by downloading the release (named CloverISO-<revision>.tar.lzma) from the GitHub page and burning the ISO to the USB drive
    • After burning the ISO, copy EFI\CLOVER\drivers\off\NvmExpressDxe.efi to EFI\CLOVER\drivers\UEFI
    • In the BIOS, set the boot order to boot from USB first
    • Then apply the following BIOS setting
      • Advanced -> Option ROM Launch Policy -> Storage Options Rom -> UEFI only

WOL

  • In the BIOS, enable WOL via Advanced -> Device Options -> S5 Wake on LAN

OS

Ucore (rpm-ostree variants)

https://github.com/ublue-os/ucore

Services

KOReader connection to the calibre OPDS content server
  • KOReader's OPDS client requires the /opds path on the calibre content server
  • The calibre content server authentication needs to be digest for KOReader's OPDS client
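For reference, a content server started with digest auth might look like this (the port, user database path, and library path are assumptions; digest is the auth mode the OPDS client needs):

```shell
# Hypothetical calibre-server invocation with digest authentication enabled
calibre-server \
  --port 8080 \
  --userdb /srv/calibre/users.sqlite \
  --enable-auth \
  --auth-mode digest \
  /srv/calibre/library
```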

Maintenance notes

Postgres major version update

  • To upgrade the Postgres major version, do the following
    1. Change the postgres_action key in the variable files to export and run the playbook for that container
    2. Change the image tag to the next major version
    3. Change the postgres_action key in the variable files to import and run the playbook for that container
    4. Change the postgres_action key in the variable files to none and run the playbook for that container
    5. Check that the container starts up correctly
    6. Change the postgres_action key in the variable files to clean and run the playbook to clean up the previous backup
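The cycle above can be sketched for a single container (the nextcloud tag is an assumption; each variable-file edit is done by hand before the corresponding run, following the main.yaml --tags pattern used elsewhere in this repo):

```shell
# 1. variable file: postgres_action: export  -> dump the old cluster
ansible-playbook main.yaml --tags nextcloud
# 2. variable file: bump the postgres image tag to the next major version
# 3. variable file: postgres_action: import  -> restore into the new cluster
ansible-playbook main.yaml --tags nextcloud
# 4. variable file: postgres_action: none    -> run normally
ansible-playbook main.yaml --tags nextcloud
# 5. verify the container starts, e.g. with podman ps and podman logs
# 6. variable file: postgres_action: clean   -> remove the previous backup
ansible-playbook main.yaml --tags nextcloud
```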

Restic restores

  1. Source the secret files into the current shell

    ## Fish shell
    . (sed 's/^/export /' /etc/restic/restic.env | psub)
  2. Get list of snapshots

    restic snapshots
  3. Restore the files (remove the --dry-run once satisfied)

    restic restore <snapshot-id> --target / --include <absolute-path-to-restore> -v --dry-run
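Step 1 above is Fish-specific; in Bash or POSIX sh the same auto-export can be done with set -a (demonstrated here against a temporary file rather than /etc/restic/restic.env, with a made-up value):

```shell
# Create a stand-in env file (the real one is /etc/restic/restic.env)
envfile=$(mktemp)
echo 'RESTIC_REPOSITORY=/srv/backup/restic' > "$envfile"   # hypothetical value

set -a            # auto-export every variable assigned while this is on
. "$envfile"
set +a

# RESTIC_REPOSITORY is now exported, so child processes like restic see it
sh -c 'echo "$RESTIC_REPOSITORY"'
```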

Troubleshooting

Kavita

Kavita failed to save progress

Nextcloud

Stuck in maintenance mode
  1. Rebuild Nextcloud

    ansible-playbook main.yaml --tags nextcloud
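If the rebuild alone doesn't clear it, maintenance mode can usually be toggled directly with Nextcloud's occ tool (the container name and the www-data user are assumptions that depend on the image in use):

```shell
# Turn maintenance mode off inside the running Nextcloud container
podman exec -it --user www-data nextcloud php occ maintenance:mode --off
```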
Debug postgres
  1. Run the following command to enable the Adminer container for accessing the Postgres database

    ansible-playbook main.yaml --tags nextcloud --extra-vars '{"debug":true}'

Container name with pod exists

Pymedusa

Pymedusa failed to create hardlinks
Check for files that failed to hardlink
  • Run this command in the Video folder

    find . -type f -links 1 ! -name "*.srt" -print
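The link-count filter can be sanity-checked in a scratch directory: only regular files with exactly one link (i.e. never hardlinked) and no .srt extension are printed (all file names below are made up):

```shell
# Set up a scratch directory with one hardlinked and one unlinked video file
tmp=$(mktemp -d)
cd "$tmp"
touch single.mkv linked.mkv sub.srt
ln linked.mkv linked-copy.mkv   # linked.mkv and its copy now have 2 links each

# Prints only ./single.mkv: one link, not a subtitle
find . -type f -links 1 ! -name "*.srt" -print
```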

Reinstallation note

  • If the server is reinstalled, some steps need to be taken:

    • Podman: Reset podman for rootless user
    podman system reset
  • Renovate can be visited at: https://developer.mend.io

Misc

  1. Why HP Z230?
  • The PC itself is a bit old, and the BIOS is no longer updated. However, it is good for home usage for the following reasons:
    • Can be built cheaply with a Xeon E3-1230v3 CPU
    • Has 4 DDR3 DIMM slots and support for ECC memory; DDR4 ECC memory can be expensive
    • Has 2 GPU slots, though I don't really need SLI
    • Has Intel AMT support, so I can have headless remote access to the BIOS for troubleshooting
  • It has some annoyances, however:
    • The mainboard has no NVMe slot and does not allow booting from a PCIe slot directly, but this can be solved via the Clover option above
    • Has little room for HDDs (2 by default), but this can be solved by using an HDD cage
