Description
Deploy Frigate Argo Application with proper configuration
Goal
Create a new Ansible role that deploys Frigate NVR to the homelab k3s cluster via an Argo CD Application resource, following the same patterns and quality bar as the existing deployment roles:
- `ansible/roles/homepage_deploy`
- `ansible/roles/longhorn_deploy`
- `ansible/roles/tailscale_operator_deploy`
- `ansible/roles/synology_csi_deploy`
This role should render a complete, production-ready Argo CD Application spec using the official Frigate Helm chart from blakeshome-charts, wired into the existing HA k3s + Tailscale + Longhorn + Synology stack.([GitHub]1)
Scope
This issue is only for:
- Creating `ansible/roles/frigate_deploy`
- Defining its argument specs and defaults
- Rendering the Frigate Argo `Application` YAML and Helm `valuesObject` into the standard artifacts tree
- Optionally applying the Application, if that matches the pattern of the other deploy roles
It does not cover configuring individual cameras, MQTT broker deployment, Home Assistant integration, or detailed Frigate configuration beyond what is required for a sane initial deployment.
Role structure and conventions
Follow the same structure and conventions as the existing *_deploy roles:
- `ansible/roles/frigate_deploy/meta/argument_specs.yml`
- `ansible/roles/frigate_deploy/defaults/main.yml`
- `ansible/roles/frigate_deploy/tasks/main.yml`
Rules:
- Use fully qualified collection names for all Ansible modules.
- All variables for this role must be prefixed with `frigate_deploy_`.
- The role must be idempotent and pass Ansible lint.
- Use the existing `role_artifacts` role exactly the same way the other deploy roles do, as the single source of truth for where rendered files are written.
- Keep tasks minimal and focused. If generic Argo handling starts to creep in, that belongs in shared roles in future tickets, not in this one.
Argo CD Application spec requirements
Use Argo CD’s Application spec as the reference.
The role should:
- Render a complete Argo `Application` manifest for Frigate:
  - `apiVersion: argoproj.io/v1alpha1`
  - `kind: Application`
  - `metadata.name: frigate`
  - `metadata.namespace`: the Argo CD namespace, derived or provided the same way as in other deploy roles.
  - `spec.project`: the same Argo project used for other homelab applications.
  - `spec.source`: the Frigate Helm chart repository, chart name, and version (see "Helm configuration and values" below).
  - `spec.destination`:
    - `server`: in-cluster API server, consistent with the other deploy roles.
    - `namespace`: defaults to `frigate`, configurable via `frigate_deploy_namespace`.
  - `spec.syncPolicy`: match the sync policy pattern used by the other homelab app roles (automated or manual, `syncOptions`, `retry`).
- Write the Application YAML to the artifacts directory using `role_artifacts`:
  - Follow the same pattern as the other application deploy roles. For example:
    `.artifacts/{{ deploy_env }}/argo/applications/frigate-application.yaml`
  - All paths must be derived from `role_artifacts` outputs and not hard-coded.
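For reference, the rendered file might look roughly like this. This is a sketch only: the project name, chart version, and sync policy shown here are placeholders that must be aligned with the existing deploy roles, and `valuesObject` requires a reasonably recent Argo CD release.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frigate
  namespace: argocd                # frigate_deploy_argo_namespace
spec:
  project: homelab                 # frigate_deploy_argo_project (placeholder)
  source:
    repoURL: https://blakeblackshear.github.io/blakeshome-charts/
    chart: frigate
    targetRevision: "x.y.z"        # frigate_deploy_helm_chart_version (placeholder)
    helm:
      valuesObject: {}             # frigate_deploy_values_overrides merged here
  destination:
    server: https://kubernetes.default.svc
    namespace: frigate             # frigate_deploy_namespace
  syncPolicy:                      # match the other homelab app roles
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```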
- Optionally apply the Application to the cluster:
  - Provide `frigate_deploy_apply_application` (bool).
  - If `true`, apply using `kubernetes.core.k8s` or a `kubectl`-based flow consistent with the existing deploy roles, including their CLI validation and context handling.
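A minimal shape for `tasks/main.yml` under these rules might be the following sketch. The artifacts path variable name is a hypothetical stand-in and must be replaced with whatever `role_artifacts` actually exposes:

```yaml
# tasks/main.yml (sketch; align names with the existing *_deploy roles)
- name: Render Frigate Argo CD Application manifest
  ansible.builtin.template:
    src: frigate-application.yaml.j2
    # role_artifacts_argo_applications_dir is a hypothetical output of role_artifacts
    dest: "{{ role_artifacts_argo_applications_dir }}/frigate-application.yaml"
    mode: "0644"

- name: Apply Frigate Application to the cluster
  kubernetes.core.k8s:
    state: present
    src: "{{ role_artifacts_argo_applications_dir }}/frigate-application.yaml"
  when: frigate_deploy_apply_application | bool
```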
Helm configuration and values
Use the official Frigate Helm chart from blakeshome-charts, which is the chart published on Artifact Hub.([Artifact Hub]2)
Key rules:
- Only put overrides in `spec.source.helm.valuesObject`.
- If the chart default for a value is sufficient, do not include it in `valuesObject`.
- `frigate_deploy_values_overrides` should be a dict that is merged directly into `valuesObject` and is the single source of override truth.
Values must:
- Respect the HA k3s layout:
  - Ensure the Frigate pod(s) schedule correctly on HA k3s.
  - If the chart exposes replica count, resource requests and limits, affinity, or tolerations, only override these where needed for this cluster, and leave all other defaults alone.
  - Allow for hardware acceleration configuration (for example Coral, Nvidia, Intel VAAPI) via `frigate_deploy_values_overrides`, but do not hard-code any particular accelerator mode. Expose the chart's `gpu` settings and host device mappings as pass-through values.([Frigate]3)
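As an illustration only (the key names below are assumptions and must be verified against the chart's `values.yaml`, not copied from here), a Coral USB pass-through override might look like:

```yaml
# Illustrative accelerator pass-through; verify keys against blakeshome-charts.
frigate_deploy_values_overrides:
  extraVolumes:
    - name: coral-usb
      hostPath:
        path: /dev/bus/usb   # Coral USB accelerator on the node
  extraVolumeMounts:
    - name: coral-usb
      mountPath: /dev/bus/usb
```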
- Integrate with Longhorn and Synology storage. Frigate uses persistent storage for configuration, database, and media (clips, recordings).([gitea]4)
  - Enable persistence in the chart using its built-in `persistence` configuration.
  - Default PVCs that hold configuration and database-style data to a Longhorn-backed storage class: use `frigate_deploy_storage_class_longhorn` for those PVCs, mapped into the chart's `persistence.*.storageClassName` or equivalent keys.
  - If the chart allows separate paths for recordings and clips, expose an option to place those on Synology via the Synology CSI or NFS: use `frigate_deploy_storage_class_synology` and map it into the appropriate `persistence` sections when desired.
  - Do not restate defaults from the chart. Only set the minimum values needed to select storage classes and sizes that fit your environment.
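Under these rules, the storage overrides might be as small as the sketch below. The `persistence.config` and `persistence.media` keys and the sizes are assumptions to be checked against the chart's `values.yaml`:

```yaml
# Storage-class overrides only; all other persistence defaults left to the chart.
frigate_deploy_values_overrides:
  persistence:
    config:
      enabled: true
      storageClass: "{{ frigate_deploy_storage_class_longhorn }}"
      size: 1Gi       # example size; tune for your environment
    media:
      enabled: true
      storageClass: "{{ frigate_deploy_storage_class_synology }}"
      size: 100Gi     # example size; recordings and clips
```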
- Networking, MQTT, and camera access:
  - Frigate requires an MQTT broker and access to IP cameras on the LAN.([gitea]4)
  - This role does not deploy MQTT. It must assume that the broker already exists and is reachable.
  - Expose basic connectivity configuration in `frigate_deploy_values_overrides` for:
    - MQTT broker host, port, username, password, and topic base.
    - The initial Frigate config file reference, if the chart uses a ConfigMap or Secret for `config.yml`.
  - The Service type should be appropriate for the homelab network:
    - Use `frigate_deploy_service_type` to control whether the primary Frigate service is `ClusterIP`, `NodePort`, or `LoadBalancer`.
    - Defaults should be consistent with how other internal apps in the homelab are exposed to the LAN.
  - Camera access is expected to happen over the regular LAN. No cameras should be required to use Tailscale.
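A connectivity sketch, assuming the chart accepts an inline Frigate `config` block and a standard `service` section (both to be verified against the chart); the broker host and credentials are placeholders for an existing broker:

```yaml
frigate_deploy_values_overrides:
  service:
    type: "{{ frigate_deploy_service_type }}"
  # Inline Frigate config.yml; key name assumed, verify against the chart.
  config: |
    mqtt:
      host: mqtt.lan          # existing broker; not deployed by this role
      port: 1883
      user: frigate
      password: "{FRIGATE_MQTT_PASSWORD}"  # injected via env, not committed
      topic_prefix: frigate
```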
- UI exposure via Tailscale ingress. Frigate has a web UI, and it must be exposed only via Tailscale:
  - If the chart provides an ingress section, configure it so that:
    - `ingress.enabled: true`.
    - `ingress.ingressClassName: tailscale`.
    - Ingress annotations include the Tailscale operator annotations you already use in other roles.
    - The host name is driven by `frigate_deploy_hostname`.
  - If the chart does not provide ingress configuration that can meet these needs cleanly, render a separate Ingress manifest alongside the Application that:
    - Lives in the Frigate namespace.
    - Uses `ingressClassName: tailscale`.
    - Has labels and annotations consistent with your other Tailscale-only ingresses.
  - No ingress should use any other ingress class or expose Frigate directly to the internet.
Homepage “dummy” ingress for discovery
Frigate should appear in Homepage just like other key services.
- Either via chart values or a separate Helm-free manifest, define an Ingress suitable for Homepage service discovery:
  - `ingressClassName: tailscale`.
  - A backend pointing to the Frigate HTTP service and port.
  - Homepage annotations such as:
    - `gethomepage.dev/enabled: "true"`
    - `gethomepage.dev/name: "Frigate"`
    - `gethomepage.dev/description: "NVR with realtime object detection"`
    - `gethomepage.dev/group: "Cameras"`
    - `gethomepage.dev/icon` set appropriately.
  - Tailscale annotations so the UI is reachable only from the tailnet.
- Follow the same annotation and dummy ingress patterns as `homepage_deploy` and your other Argo-deployed apps.
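Combining the two ingress requirements above, the dummy ingress might render as the sketch below. The backend service name and port are assumptions (the Frigate web UI conventionally serves on 5000) and must match what the chart actually creates:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frigate-homepage
  namespace: frigate
  annotations:
    gethomepage.dev/enabled: "true"
    gethomepage.dev/name: "Frigate"
    gethomepage.dev/description: "NVR with realtime object detection"
    gethomepage.dev/group: "Cameras"
    gethomepage.dev/icon: "frigate.png"
    # plus frigate_deploy_tailscale_annotations merged in here
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: frigate   # assumed chart service name
      port:
        number: 5000  # assumed Frigate web UI port
```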
Integration with existing roles and stack
This role expects:
- k3s cluster is already up and reachable.
- Longhorn is deployed and healthy.
- Synology CSI or NFS storage is configured and available.
- Tailscale operator is deployed and working.
- Argo CD is installed and ready.
`frigate_deploy` should run in the same phase as the other application deploy roles, after the base platform and storage layers are in place.
Inputs and variables
Define all inputs in meta/argument_specs.yml and provide defaults in defaults/main.yml. At a minimum:
- `frigate_deploy_namespace` (default `frigate`)
- `frigate_deploy_argo_namespace`
- `frigate_deploy_argo_project`
- `frigate_deploy_helm_repo_url` (default `https://blakeblackshear.github.io/blakeshome-charts/`)
- `frigate_deploy_helm_chart_name` (default `frigate`)
- `frigate_deploy_helm_chart_version`
- `frigate_deploy_values_overrides` (dict mapped directly into `spec.source.helm.valuesObject`)
- `frigate_deploy_hostname` (UI host name)
- `frigate_deploy_tailscale_annotations` (dict merged into ingress metadata)
- `frigate_deploy_storage_class_longhorn`
- `frigate_deploy_storage_class_synology` (optional)
- `frigate_deploy_service_type`
- `frigate_deploy_apply_application` (bool)
All variables must start with frigate_deploy_. Any transient or derived values should remain inside the role and use idiomatic Ansible naming.
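A possible `defaults/main.yml` for these inputs (a sketch; the Argo namespace, project, and storage class names are placeholder assumptions to be matched against the other deploy roles):

```yaml
# defaults/main.yml (sketch; placeholder values, align with the other roles)
frigate_deploy_namespace: frigate
frigate_deploy_argo_namespace: argocd
frigate_deploy_argo_project: default
frigate_deploy_helm_repo_url: https://blakeblackshear.github.io/blakeshome-charts/
frigate_deploy_helm_chart_name: frigate
frigate_deploy_helm_chart_version: ""   # pin explicitly in inventory
frigate_deploy_values_overrides: {}
frigate_deploy_hostname: frigate
frigate_deploy_tailscale_annotations: {}
frigate_deploy_storage_class_longhorn: longhorn
frigate_deploy_storage_class_synology: ""
frigate_deploy_service_type: ClusterIP
frigate_deploy_apply_application: false
```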
Idempotency and testing
- Running the role multiple times must not create duplicate Applications, Ingresses, or PVCs.
- When the rendered Application and associated manifests match what is already applied, the role should report no changes.
- If `frigate_deploy_helm_chart_version` or `frigate_deploy_values_overrides` change, the rendered manifests must update, and if `frigate_deploy_apply_application` is true, Argo should converge the live state to match.
- The role must pass Ansible lint and comply with existing repository standards.
Acceptance criteria
- A new role exists at `ansible/roles/frigate_deploy` with:
  - `meta/argument_specs.yml` declaring all required inputs with `frigate_deploy_` prefixes.
  - `defaults/main.yml` providing sane defaults for the homelab.
  - `tasks/main.yml` implementing the behavior described above.
- `role_artifacts` is used correctly to write the Frigate Argo Application manifest to the expected `.artifacts/{{ deploy_env }}/...` location.
- The rendered Argo Application:
  - Uses the official Frigate Helm chart from `blakeshome-charts`.
  - Targets the correct namespace and in-cluster destination.
  - Configures storage using Longhorn and optionally Synology without overriding unnecessary defaults.
  - Exposes the Frigate UI only through Tailscale ingress with the correct annotations.
- A Homepage-compatible dummy ingress exists so Frigate appears in the Homepage dashboard.
- Re-running the role with unchanged inputs is idempotent.
- All Ansible lint checks pass and the role matches the structural and behavioral patterns of `homepage_deploy`, `longhorn_deploy`, `tailscale_operator_deploy`, and `synology_csi_deploy`.