k3d: unable to start built container #46
Where can I increase the log level? Any tips on debugging?
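A minimal sketch for pulling the builder's logs; the "buildkit" deployment name is an assumption based on later comments in this thread, so check what `kubectl get pods` actually shows for the builder created by `kubectl buildkit create`:

```sh
# Find the builder pod (name/namespace assumed, adjust to what you see).
kubectl get pods --all-namespaces | grep -i buildkit

# Dump logs from all containers in the builder deployment.
kubectl logs deployment/buildkit --all-containers
```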
Can you rebuild but tag it something like
For the default buildkit pod created, the logs are:
I looked at the k3s node, which has the containerd sock in a different location than expected?
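For reference, a rough way to check where that socket actually lives on a k3d node. Since k3d runs each node as a Docker container, the node container name and the k3s socket path below are assumptions to verify, not confirmed values from this thread:

```sh
# List the k3d node containers (names vary with the cluster name).
docker ps --filter name=k3d-

# Check whether the k3s-embedded containerd socket is where k3s usually puts it.
docker exec k3d-k3s-default-server-0 ls -l /run/k3s/containerd/containerd.sock
```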
Gave it a try with a tag, but still the same effect.
Does the image get loaded back into Docker when you run
I have the same problem when using microk8s. The build works fine, but the image is not uploaded to containerd. I tried setting the containerd.sock location: kubectl buildkit create --runtime containerd --containerd-sock=/var/snap/microk8s/common/run/containerd.sock. But after this, even building started to fail.
@MarcusAhlfors could you file a separate issue for microk8s? We've tested it out on a lot of platforms, but clearly not enough! :-D
I am running k3s, which uses containerd. No, the image does not get loaded into containerd. If I follow the flags used by @MarcusAhlfors, I get the following error.
Events:
Regarding the previous error, I think the mount should be configurable. I modified the deployment of buildkit while it was in a failed state:
But I ended up with this error in the pod events:
This seems similar to this, which is caused by the Bidirectional setting that allows the mounts to be picked up. I guess maybe k3d isn't a good environment for kubectl-buildkit? Suggestions?
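For illustration, the kind of change involved looks roughly like this. It is a hypothetical patch, not the actual manifest from the failed attempt; the deployment name, volume name, container index, and host path are all assumptions:

```sh
# Add a Bidirectional hostPath mount of the containerd root to the builder.
# Bidirectional propagation also requires the container to run privileged.
kubectl patch deployment buildkit --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/volumes/-",
   "value": {"name": "containerd-root",
             "hostPath": {"path": "/run/k3s/containerd"}}},
  {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-",
   "value": {"name": "containerd-root",
             "mountPath": "/run/k3s/containerd",
             "mountPropagation": "Bidirectional"}}
]'
```

It is exactly this kind of Bidirectional hostPath mount that a k3d node apparently cannot satisfy.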
As a work-around you can
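The rest of that sentence was cut off above; judging from the later reply, the workaround is registry-based. A rough sketch with a placeholder registry address (the --push flag tells the builder to push the result instead of loading it into the local runtime; verify it against your kubectl-build version):

```sh
# Build and push to a registry the cluster can pull from (address is a placeholder).
kubectl build -t registry.example.com/test:latest --push .

# Run the image by its registry reference instead of a local tag.
kubectl run -i --tty test --image=registry.example.com/test:latest -- sh
```

k3d can also run a local registry alongside the cluster, which keeps the push/pull traffic on the same host.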
@juliostanley, you mention in the opening comment:
Is your kubelet configured to use containerd or dockerd? (I'm assuming containerd, but please confirm.)
Is dockerd also running inside your system, and is there a
If there is a dockerd, and you're using containerd for Kubernetes, it's possible the builder is auto-selecting dockerd incorrectly, assuming that's your runtime, and then storing images there, which are not visible to Kubernetes via containerd. If that's what's going on, using
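Two quick checks for those questions; the socket paths below are typical locations rather than guaranteed ones, and on k3d the file check has to run inside the node container:

```sh
# The CONTAINER-RUNTIME column shows containerd://... or docker://... per node.
kubectl get nodes -o wide

# See whether a dockerd socket is also present on the node (path is an assumption).
ls -l /var/run/docker.sock 2>/dev/null
```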
@dhiltgen Yeah, it may sound a little confusing, and it's actually part of the issue, due to the need for Bidirectional mounts. So here is what I noticed (based on my previous comments):
Basically, it seems like k3d is not a good environment for kubectl-buildkit, and the only option there is to use a registry, as described by @pdevine, although that eliminates one of the use cases (avoiding transferring bytes to and from a registry, and not needing a registry at all). Hope this clarifies the environment.
Thanks for the clarification! The way containerd works, its gRPC API requires the "client" to be "local" - it's not a network API like the Kubernetes API or the dockerd API. The client libraries require access to specific host paths so that files can be placed there for child containers to access, hence the bidirectional mounts. This is only needed if we're using containerd to facilitate the containers used during the image build. It sounds like k3d isn't going to work unless/until those mounts are refined upstream for the containerd runtime model. It's possible #26 might wind up building out an alternative strategy which could be employed here. We might be able to approach this by using the ~rootless model (not building inside containerd) and then loading the images directly through a proxy, which I believe can load images purely over the containerd.sock without having to touch the filesystem.
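A hedged sketch of that last idea: export the built image to a tarball and stream it into containerd over its socket, so the builder never needs shared host-path mounts. The socket path and the existence of a tar export from the builder are assumptions here:

```sh
# Import a previously exported image tarball through containerd's API.
# The k8s.io namespace is where the kubelet looks for images.
ctr --address /run/k3s/containerd/containerd.sock \
    --namespace k8s.io images import ./test.tar
```

k3d itself ships an image-import command that does roughly this under the hood for its node containers.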
What steps did you take and what happened
kubectl build . -t test
kubectl run -i --tty test --image=test -- sh
What did you expect to happen
Container starts on single node k8s
Environment Details:
Builder Logs
Dockerfile
N/A; happens with any simple Dockerfile.
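For completeness, a hypothetical trivial Dockerfile of the sort that reproduces this, written via a shell heredoc to match the commands above:

```sh
# Any minimal image works; alpine is just an example.
cat > Dockerfile <<'EOF'
FROM alpine
CMD ["sh"]
EOF

kubectl build . -t test
```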
Vote on this request
This is an invitation to the community to vote on issues. Use the "smiley face" up to the right of this comment to vote.