Instead of using the Akri Controller, Akri users can deploy their own Kubernetes objects (Pods, Deployments, DaemonSets, etc.) to use the Kubernetes resources created by Akri. This is explained in the Requesting Akri Resources documentation. However, if a device goes offline and its Instance is deleted, the workload will remain. A way to avoid this is to add an OwnerReference, as the Akri Controller does by default. This ensures that the Pod/Deployment/Job only runs so long as the device exists:
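A minimal sketch of what this could look like in a Pod manifest, assuming the `akri.sh/v0` API group and a hypothetical Instance named `akri-instance-8120fe` (substitute the real Instance name and uid from your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-broker
  ownerReferences:
    # Tie this Pod's lifetime to the Akri Instance representing the device.
    # When the device goes offline and the Instance is deleted, Kubernetes
    # garbage collection deletes this Pod as well.
    - apiVersion: akri.sh/v0
      kind: Instance
      name: akri-instance-8120fe            # replace with your Instance name
      uid: <instance-uid>                   # replace with the Instance's uid
spec:
  containers:
    - name: broker
      image: my-broker-image                # hypothetical broker image
      resources:
        limits:
          akri.sh/akri-instance-8120fe: "1" # request the device's resource
```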
The uid can be obtained by running `kubectl get akrii --output=json | jq ".items[].metadata.uid"`.
We should provide a script -- maybe a series of jq queries -- for generating the OwnerReference that can be added to a PodSpec; a sketch follows.
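Until such a script exists, here is a minimal sketch of one, assuming `kubectl` and `jq` are installed; the field names are read straight off the Instance objects, and the output is a JSON array to paste under a Pod's `metadata.ownerReferences`:

```bash
#!/usr/bin/env bash
# Print an ownerReferences entry for every Akri Instance in the cluster.
# Each entry carries the four fields an OwnerReference requires:
# apiVersion, kind, name, and uid.
kubectl get akrii --output=json | jq '[.items[] | {
  apiVersion: .apiVersion,
  kind: .kind,
  name: .metadata.name,
  uid: .metadata.uid
}]'
```

If only one device's workload is being deployed, a `select(.metadata.name == "...")` filter inside the `jq` expression would narrow the output to that single Instance.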
@diconico07 checked: this does not interfere with the Controller's owner refs. This issue is available for anyone who wants to pick it up and contribute to the docs.