Fix Map/Parallel States checking container_runner#status! of finished states #296
Conversation
@@ -61,7 +61,8 @@ def finish(context)
       end

       def running?(context)
         return true if waiting?(context)
+        return false if finished?(context)
This is the fix for this issue. I'd also like to make sure we don't check running? for a sub-workflow that we know to be finished, but this is also good insurance at the Task level.
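For illustration, here is a minimal sketch of how the reordered guards short-circuit running?; the method body shown around the new line, and the runner_running? helper in particular, are assumptions for illustration rather than Floe's actual code:

```ruby
# Sketch only: the guards return before any container lookup happens for
# states that are still waiting or already finished. runner_running? is a
# hypothetical stand-in for whatever status check the real method performs.
def running?(context)
  return true  if waiting?(context)   # not started yet, nothing to inspect
  return false if finished?(context)  # finished: never re-check the container

  runner_running?(context)            # only genuinely in-flight states get here
end
```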
@@ -182,8 +182,8 @@ def docker_event_status_to_event(status)

       def inspect_container(container_id)
         JSON.parse(docker!("inspect", container_id).output).first
-      rescue
-        nil
+      rescue AwesomeSpawn::CommandResultError => err
Not for this PR, but it's weird to me that the docker! method fails and we catch AwesomeSpawn::CommandResultError, as that requires the caller to know the implementation of the callee. Does it make more sense to have docker! do the rescue/catch and raise something itself?
That being said, it's all self-contained in the same file, so probably not a big deal.
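A rough sketch of that alternative, where docker! owns the rescue and re-raises a Floe-level error so callers never need to know AwesomeSpawn sits underneath; the method bodies and the error message here are hypothetical, not the code in this PR:

```ruby
# Hypothetical refactor of the reviewer's suggestion: translate the
# AwesomeSpawn failure inside docker! itself, so inspect_container only
# ever sees a Floe::ExecutionError.
def docker!(*params)
  AwesomeSpawn.run!("docker", :params => params)
rescue AwesomeSpawn::CommandResultError => err
  raise Floe::ExecutionError, "docker #{params.first} failed: #{err.message}"
end

def inspect_container(container_id)
  # No rescue needed here anymore; a failure surfaces as Floe::ExecutionError.
  JSON.parse(docker!("inspect", container_id).output).first
end
```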
If a state is finished, map/parallel states re-check whether it is ready, which causes us to check the status of a container that has already been removed.
This leads to:
kubeclient-4.12.0/lib/kubeclient/common.rb:130:in `rescue in handle_exception': pods "floe-sleep-f9149969" not found (Kubeclient::ResourceNotFoundError)

This was only seen on kubernetes/openshift because docker & podman were eating the exception and returning nil.

Since I don't think you should check the status of a container that we already knowingly deleted, checking container status and getting an error back should be a Floe::ExecutionError.
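As a rough illustration of that behavior (the exact message and surrounding code are a sketch, not necessarily what this PR merges), the rescue stops swallowing the failure and raises instead:

```ruby
# Sketch: rather than rescuing everything and returning nil, surface a
# Floe::ExecutionError so a caller that inspects an already-removed
# container finds out immediately.
def inspect_container(container_id)
  JSON.parse(docker!("inspect", container_id).output).first
rescue AwesomeSpawn::CommandResultError => err
  raise Floe::ExecutionError, "Failed to inspect container #{container_id}: #{err.message}"
end
```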