I wanted to use the official Varnish Helm chart to deploy a Varnish cluster with HPA enabled.
When I send a PURGE request, how is it handled in this setup? Can you explain how purging works and recommend a way to ensure consistent purging across all replicas?
I want to make sure the cache is invalidated across all replicas and to receive an acknowledgment that this has happened.
Hi there! The Varnish Helm Chart currently doesn't do anything special regarding networking and purging. In the Enterprise version we have varnish-broadcaster and varnish-discovery for this purpose, but with the open-source version you have to purge every instance manually.
Right now, the only way to enumerate all available Varnish instances is to set up the Varnish Helm Chart in headless ClusterIP mode, e.g.:
```yaml
server:
  service:
    type: ClusterIP
    clusterIP: "None"
```
This makes service-name.namespace.svc.cluster.local resolve to an A record for each individual Varnish instance. Then, on the component that sends purges, make sure it issues a PURGE to every A record returned by the cluster's DNS.
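As a rough illustration, here is a minimal Python sketch of that fan-out. The service/namespace names, the port, and the assumption that each instance's VCL already handles the PURGE method are all placeholders, not something the chart provides:

```python
# Minimal sketch: send PURGE to every A record behind a headless service.
# Assumptions (adjust to your setup): service "varnish" in namespace "default",
# Varnish listening on port 6081, and VCL on each instance handling PURGE.
import http.client
import socket

HEADLESS_HOST = "varnish.default.svc.cluster.local"  # assumed service/namespace
VARNISH_PORT = 6081                                   # assumed server.http.port


def purge_all(path: str, host_header: str) -> dict:
    """Send PURGE <path> to each Varnish pod and collect the response codes."""
    # A headless service returns one A record per ready pod.
    addrs = {
        info[4][0]
        for info in socket.getaddrinfo(HEADLESS_HOST, VARNISH_PORT, proto=socket.IPPROTO_TCP)
    }
    results = {}
    for ip in addrs:
        conn = http.client.HTTPConnection(ip, VARNISH_PORT, timeout=5)
        try:
            conn.request("PURGE", path, headers={"Host": host_header})
            results[ip] = conn.getresponse().status  # 200 = acknowledged by that replica
        finally:
            conn.close()
    return results


if __name__ == "__main__":
    print(purge_all("/some/page", "www.example.com"))
```

The per-IP status codes double as the acknowledgment you asked about: if any replica is missing from the result or returns an error, you know that instance was not purged.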
Alternatively, there's something like varnish-towncrier that might work (although I have not tried it myself). This could be set up through server.extraContainers. Since towncrier doesn't rely on Varnish VSM, it should be pretty straightforward to set up:
```yaml
server:
  extraContainers:
    - name: towncrier
      image: ghcr.io/emgag/varnish-towncrier:latest
      args:
        - listen
      env:
        - name: VT_REDIS_URI
          value: redis://redis-service
        # Match this with `server.http.port`
        - name: VT_ENDPOINT_URI
          value: http://127.0.0.1:6081/
        # the rest of the configuration
```
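With that sidecar in place, purging becomes a single publish to Redis, which every towncrier instance picks up and forwards to its local Varnish. The sketch below is only illustrative: the channel name and JSON field names are assumptions, so check the varnish-towncrier README for the actual event schema it expects.

```python
# Hypothetical sketch: publish a purge event for the towncrier sidecars.
# Channel name and JSON fields are assumptions for illustration only;
# consult the varnish-towncrier documentation for the real event format.
import json
import redis  # pip install redis

r = redis.Redis.from_url("redis://redis-service:6379")  # same Redis as VT_REDIS_URI

event = {
    "command": "purge",         # assumed field name
    "host": "www.example.com",  # assumed field name
    "path": "/some/page",       # assumed field name
}

# Every towncrier sidecar subscribed to this channel forwards the request
# to its local Varnish, so all replicas get invalidated.
r.publish("varnish.purge", json.dumps(event))  # assumed channel name
```

Note that with this approach the acknowledgment is indirect: Redis pub/sub is fire-and-forget, so you would rely on towncrier's own logging or metrics to confirm each replica processed the event.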