Is your feature request related to a problem?
Currently we use a ~100ms time delay in deprovisioning to overcome a race condition where deleting a model immediately after undeploying it would fail.
ML Commons is adding a retry capability to the delete step, which would remove the need for this time delay.
What solution would you like?
Once the linked PR is merged, update our code to use a retry delay on the delete request rather than our own fixed time delay (see the sketch below).
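For illustration only, a minimal sketch of the intended change: replace the fixed sleep before a single delete attempt with a bounded retry. The `ModelClient` interface, `deleteModel` call, and backoff parameters are hypothetical placeholders, not the ML Commons API; in practice the retry may simply be a delay/retry option passed on the delete request rather than a loop in our code.

```java
import java.util.concurrent.TimeUnit;

public class RetryingDelete {

    /** Hypothetical stand-in for the ML Commons delete-model call. */
    interface ModelClient {
        void deleteModel(String modelId) throws Exception;
    }

    /**
     * Attempts the delete up to maxAttempts times, backing off between
     * attempts, instead of sleeping a fixed ~100 ms before one attempt.
     */
    static void deleteWithRetry(ModelClient client, String modelId,
                                int maxAttempts, long initialBackoffMs) throws Exception {
        long backoff = initialBackoffMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                client.deleteModel(modelId);
                return; // deleted successfully, no fixed delay needed
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    throw e; // give up after the final attempt
                }
                TimeUnit.MILLISECONDS.sleep(backoff); // wait before retrying
                backoff *= 2; // simple exponential backoff
            }
        }
    }
}
```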
What alternatives have you considered?
Leave it as-is.
Do you have any additional context?
We may still need some delay when deleting connectors if the model hasn't been deleted yet, so this may not be a complete fix, but it should at least make deprovisioning more robust.