
Error processing tar file(archive/tar: missed writing 4096 bytes): #777

Closed
Sheldor5 opened this issue Jul 13, 2021 · 47 comments · Fixed by #793 or #1631

@Sheldor5

Sheldor5 commented Jul 13, 2021

Description

The jkube:build goal fails on CentOS Linux release 7.9.2009 (Core) with:

[ERROR] Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.3.0:build (default) on project myapp-docker: Failed to execute the build: Error while trying to build the image: Unable to build image [someco/myapp:latest] : {"message":"Error processing tar file(archive/tar: missed writing 4096 bytes): "} (Internal Server Error: 500) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.3.0:build (default) on project myapp-docker: Failed to execute the build
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
        at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
        at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
        at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
        at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to execute the build
        at org.eclipse.jkube.maven.plugin.mojo.build.AbstractDockerMojo.buildAndTag(AbstractDockerMojo.java:665)
        at org.eclipse.jkube.maven.plugin.mojo.build.AbstractDockerMojo.processImageConfig(AbstractDockerMojo.java:503)
        at org.eclipse.jkube.maven.plugin.mojo.build.AbstractDockerMojo.executeBuildGoal(AbstractDockerMojo.java:632)
        at org.eclipse.jkube.maven.plugin.mojo.build.BuildMojo.executeInternal(BuildMojo.java:49)
        at org.eclipse.jkube.maven.plugin.mojo.build.AbstractDockerMojo.execute(AbstractDockerMojo.java:444)
        at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
        ... 20 more
Caused by: org.eclipse.jkube.kit.config.service.JKubeServiceException: Error while trying to build the image: Unable to build image [someco/myapp:latest] : {"message":"Error processing tar file(archive/tar: missed writing 4096 bytes): "} (Internal Server Error: 500)
        at org.eclipse.jkube.kit.config.service.kubernetes.DockerBuildService.build(DockerBuildService.java:52)
        at org.eclipse.jkube.maven.plugin.mojo.build.AbstractDockerMojo.buildAndTag(AbstractDockerMojo.java:662)
        ... 26 more
Caused by: org.eclipse.jkube.kit.build.service.docker.access.DockerAccessException: Unable to build image [someco/myapp:latest] : {"message":"Error processing tar file(archive/tar: missed writing 4096 bytes): "} (Internal Server Error: 500)
        at org.eclipse.jkube.kit.build.service.docker.access.hc.DockerAccessWithHcClient.buildImage(DockerAccessWithHcClient.java:272)
        at org.eclipse.jkube.kit.build.service.docker.BuildService.doBuildImage(BuildService.java:179)
        at org.eclipse.jkube.kit.build.service.docker.BuildService.buildImage(BuildService.java:143)
        at org.eclipse.jkube.kit.build.service.docker.BuildService.buildImage(BuildService.java:77)
        at org.eclipse.jkube.kit.config.service.kubernetes.DockerBuildService.build(DockerBuildService.java:44)
        ... 27 more
Caused by: org.eclipse.jkube.kit.build.service.docker.access.hc.http.HttpRequestException: {"message":"Error processing tar file(archive/tar: missed writing 4096 bytes): "} (Internal Server Error: 500)
        at org.eclipse.jkube.kit.build.service.docker.access.hc.ApacheHttpClientDelegate$StatusCodeCheckerResponseHandler.handleResponse(ApacheHttpClientDelegate.java:200)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:223)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
        at org.eclipse.jkube.kit.build.service.docker.access.hc.ApacheHttpClientDelegate.post(ApacheHttpClientDelegate.java:121)
        at org.eclipse.jkube.kit.build.service.docker.access.hc.DockerAccessWithHcClient.buildImage(DockerAccessWithHcClient.java:270)
        ... 31 more

The same project is working on Windows.

The TAR file looks fine with tar -tf docker-build.tar.

The image can be built manually by going into the assembly directory and running docker build ., so Docker and the Docker daemon are working.

The TAR file is somehow not valid for the docker daemon:

[root@srv3 tmp]# cat docker-build.tar | docker build -
Sending build context to Docker daemon   212 kB
Error response from daemon: Error processing tar file(archive/tar: missed writing 4096 bytes):
[root@srv3 tmp]#

Info

  • Eclipse JKube version : 1.3.0
  • Maven version (mvn -v) : 3.6.3
@davidecavestro
Contributor

davidecavestro commented Jul 20, 2021

We are experiencing something similar.

The build is OK on local dev environments (recent Debian, Ubuntu) but fails on an old Docker 1.12.1 (Git commit: 23cf638) using OpenJDK 1.8.0_111-b15 and Maven 3.3.9 (JKube k8s maven plugin 1.3.0) on CentOS Linux release 7.2.1511.

In this case the build fails with an obscure "Connection reset by peer" message, maybe due to the old Docker daemon version, which internally logs the archive/tar error message. But, again, manually sending the tar generated by the plugin to the Docker daemon yields the very same error:

cat target/docker/path/to/tmp/docker-build.tar | docker build -
Error response from daemon: Error processing tar file(archive/tar: missed writing 20 bytes):

even though the content of the tar seems OK:

[root@govbuildlx01 jkube]# tar tvf target/docker/path/to/tmp/docker-build.tar
drwxr-xr-x 0/0              20 2021-07-20 14:11 opt/
drwxr-xr-x 0/0              16 2021-07-20 14:11 opt/xyz/
drwxr-xr-x 0/0              21 2021-07-20 14:11 opt/xyz/app/
-rw-r--r-- 0/0             107 2021-07-20 14:11 Dockerfile
-rw-r--r-- 0/0        19337774 2021-07-20 14:11 opt/xyz/app/test.jar

@manusa
Member

manusa commented Jul 20, 2021

Is there a way to reproduce the issue so we can further investigate?

Is the result the same if you load the tar using:

docker load -i target/docker/path/to/tmp/docker-build.tar

@davidecavestro
Contributor

davidecavestro commented Jul 20, 2021

The docker load command fails with

open /var/lib/docker/tmp/docker-import-738275560/opt/json: no such file or directory

both on the problematic server and in dev local env.

At https://github.com/davidecavestro/vagrant-jkube-connection-reset-by-peer I've shared some stuff to reproduce the issue on a vagrant box.

@manusa manusa added the bug Something isn't working label Jul 21, 2021
@davidecavestro
Contributor

The issue is consistently reproducible from docker-engine 1.12 up to docker-ce 17.12
(with rpms respectively from https://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7/Packages/ and https://mirrors.aliyun.com/docker-ce/linux/centos/7.2/x86_64/stable/Packages/)

@manusa
Member

manusa commented Jul 21, 2021

So, do you mean this is specific to the environment (Docker version + OS)? i.e. it does work on newer versions of Docker?

I'm checking your vagrant example (🙌 thanks for this). But I had some trouble running that on Fedora; also, I think the Vagrantfile is not provisioning the project files (checking this now in another machine).

@davidecavestro
Contributor

davidecavestro commented Jul 21, 2021

The Docker version seems to play a role: switching to a Docker version >= 18.06 in the vagrant box, the image is built with no issues.
On my Debian and Ubuntu machines I have only recent Docker versions, and the issue is not reproducible.

I think the vagrant file is not provisioning the project files

When the box is up and running you could issue a vagrant rsync from your host and check the files in the /vagrant folder within the box, i.e.

dcavestro@lxu-n4705:~/tmp/vagrant/jkube/vagrant-jkube-connection-reset-by-peer$ vagrant rsync
==> default: Rsyncing folder: /home/dcavestro/tmp/vagrant/jkube/vagrant-jkube-connection-reset-by-peer/ => /vagrant
dcavestro@lxu-n4705:~/tmp/vagrant/jkube/vagrant-jkube-connection-reset-by-peer$ vagrant ssh
Last login: Wed Jul 21 00:37:59 2021
[vagrant@localhost ~]$ cd /vagrant
[vagrant@localhost vagrant]$ tree
.
├── pom.xml
├── README.md
├── src
│   └── main
│       └── java
│           └── com
│               └── acme
│                   └── jkube
│                       └── TestApplication.java
└── Vagrantfile

6 directories, 4 files
[vagrant@localhost vagrant]$

@manusa
Member

manusa commented Jul 21, 2021

When the box is up and running you could issue a vagrant rsync from your host and check the files in the /vagrant folder within the box, i.e

I'm running this on Windows, and it doesn't work very well :(

What I want to try is to generate the docker-build.tar in the vagrant provisioned machine, and then load that tar to the host machine's docker daemon. I think we might be building a Docker archive that's not compatible with those older Docker versions.

There are alternative Docker packaging and compression options available; this is something else I want to try.

@davidecavestro
Contributor

Given a certain version of the JKube plugin, mvn and JDK, I guess the tar generated is the same both within the vagrant box and on the host.
So if you could install an "ancient" Docker on your host, maybe you could reproduce it as well.
Or even on VirtualBox or any other virtualized OS where you send that tar to an old Docker.

I'd say the issue arises from the golang tar library built into the Docker binary, i.e. the Flush() function in https://golang.org/src/archive/tar/writer.go
Maybe the tar is not padded as it expects?

@manusa
Member

manusa commented Jul 21, 2021

Given a certain version of the jkube plugin, mvn and jdk, I guess the tar generated is the same both within the vagrant box or on the host.

Yes, this is what I suspect. But there are other considerations such as how access to the Docker daemon is performed.

Anyway, I'll try something else to reproduce since the Vagrant option is not working very well.

Maybe the tar is not padded as it wants?

I'm not following 100% here. Internally we use commons-compress to generate the docker-build.tar archive (https://github.com/eclipse/jkube/blob/120ee60c1217b1bd24cbfbd4cd49ebf876e5f673/jkube-kit/common/src/main/java/org/eclipse/jkube/kit/common/archive/JKubeTarArchiver.java#L53).
Maybe you can debug this part and see if switching some of the options provides a compliant tar archive.

Is there some specific reason on why you are running older Docker versions? If updating them is not possible, have you tried using jib build strategy as a workaround (-Djkube.build.strategy=jib)?

@davidecavestro
Contributor

Never tried the jib build strategy here. Great idea, I'll give it a try.
Updating Docker is the last resort, as this is an old CI server manually maintained by other folks so far, hence potentially plenty of pitfalls.
About commons-compress, it would be interesting to compare the tar with one generated by the tar CLI (maybe I'll give it a try).

@davidecavestro
Contributor

davidecavestro commented Jul 21, 2021

About the jib strategy: it builds the image, but fails pushing it to the registry (it's an insecure registry, sadly exposed only over HTTP):

java.net.SocketException: Network is unreachable (connect failed)

That is the same error I get if I issue a curl to the registry using HTTPS, while a curl over HTTP or a docker push works, since the registry is listed as an insecure one in the Docker daemon config.

Any clue on how to push to an insecure registry using jib? (creds over http)
I've seen #336 but it is about pull.

@manusa
Member

manusa commented Jul 21, 2021

Any clue on how to push to an insecure registry using jib? (creds over http)

:( Could you open a specific feature request issue for that 🙏

@davidecavestro
Contributor

I have sketched #782

@davidecavestro
Contributor

As a side note, the whole thing works perfectly with fabric8-maven-plugin:3.1.63.
Maybe that's a clue.

@rohanKanojia
Member

@davidecavestro : that's quite an old version of fabric8-maven-plugin. What about newer versions like 4.4.1?

@davidecavestro
Contributor

Yep, it works even with 4.4.1.

@manusa
Member

manusa commented Jul 23, 2021

I think in FMP we relied on Maven to generate the tar file. We really need to see the difference between a tar file generated with one plugin and one generated with the other.
With two equivalent configurations (image model), the tars generated by FMP and JKube should be exactly the same.

@rohanKanojia
Member

rohanKanojia commented Jul 23, 2021

@davidecavestro : I'm trying to reproduce your issue. I cloned https://github.com/davidecavestro/vagrant-jkube-connection-reset-by-peer adding fabric8-maven-plugin with the same configuration. But mvn fabric8:build is failing with this error:

[INFO] --- fabric8-maven-plugin:4.4.1:build (default-cli) @ test ---
[INFO] F8: Running in Kubernetes mode
[INFO] F8: Building Container image with Docker in Kubernetes mode
[INFO] Building tar: /home/rokumar/work/repos/vagrant-jkube-connection-reset-by-peer/target/docker/test/1.3.0/tmp/docker-build.tar
[INFO] F8: [test:1.3.0]: Created docker-build.tar in 21 milliseconds
[ERROR] F8: Failed to execute the build: io.fabric8.maven.docker.access.DockerAccessException: Unable to build image [test:1.3.0] : "COPY failed: file not found in build context or excluded by .dockerignore: stat maven: file does not exist" 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  2.071 s
[INFO] Finished at: 2021-07-23T15:17:22+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:4.4.1:build (default-cli) on project test: Failed to execute the build: io.fabric8.maven.docker.access.DockerAccessException: Unable to build image [test:1.3.0] : "COPY failed: file not found in build context or excluded by .dockerignore: stat maven: file does not exist"  -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

When you said it's working with 4.4.1, did you mean this reproducer project or some other project?

@davidecavestro
Contributor

davidecavestro commented Jul 23, 2021

Yep, I meant the reproducer project, with the assembly portion modified to

<assembly>
    <descriptorRef>artifact</descriptorRef>
    <basedir>/opt/xyz/app</basedir>
</assembly>

@davidecavestro
Contributor

davidecavestro commented Jul 23, 2021

This works with FMP, but introduces changes to the generated Dockerfile and to how the temporary structure is packaged into the tar, hence the two tars don't share the same contents.
Maybe you know how to produce easily comparable tars... but in that case I think you can build/compare them in any environment.

OTOH the vagrant box is meant to give you an old Docker that is not able to process the tar from JKube (but accepts the one from fabric8).

@rohanKanojia
Member

@davidecavestro : Thanks a lot for sharing the Vagrantfile, I can reproduce your issue.

@manusa manusa added this to the 1.4.0 milestone Jul 27, 2021
rohanKanojia added a commit to rohanKanojia/jkube that referenced this issue Jul 27, 2021
… Docker versions

Tar files generated during `k8s:build` were getting rejected by Docker
v1.12.1 (API version 1.24). Not setting the size of TarArchiveEntry in the
case of directories seems to fix this issue.

Signed-off-by: Rohan Kumar <rohaan@redhat.com>
@rohanKanojia
Member

I noticed that in the case of JKube the directory sizes were more than zero:

[vagrant@localhost vagrant]$ tar -tvf /tmp/jkube.tar 
drwxr-xr-x 0/0              22 2021-07-27 10:47 maven/
-rw-r--r-- 0/0              94 2021-07-27 10:47 Dockerfile
-rw-r--r-- 0/0        19337676 2021-07-27 10:47 maven/test.jar
[vagrant@localhost vagrant]$ tar -tvf /tmp/fabric8.tar 
drwxrwxr-x vagrant/vagrant   0 2021-07-27 11:28 maven/
-rw-rw-r-- vagrant/vagrant 19337676 2021-07-27 11:27 maven/test.jar
-rw-rw-r-- vagrant/vagrant       93 2021-07-27 11:28 Dockerfile

When I set the TarArchiveEntry size only in the case of files, I'm able to get JKube working with Docker:
https://github.com/eclipse/jkube/blob/15bb59fbe2eb8c5f26e450ec7e4256bae19fe070/jkube-kit/common/src/main/java/org/eclipse/jkube/kit/common/archive/JKubeTarArchiver.java#L78

[vagrant@localhost vagrant-jkube-connection-reset-by-peer]$ mvn k8s:build
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Eclipse JKube :: Quickstarts :: Maven :: Dockerfile :: Simple 1.3.0
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- kubernetes-maven-plugin:1.4.0-SNAPSHOT:build (default-cli) @ test ---
[WARNING] Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
[WARNING] k8s: Cannot access cluster for detecting mode: Unknown host kubernetes.default.svc: Name or service not known
[WARNING] Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
[INFO] k8s: Running in Kubernetes mode
[INFO] k8s: Building Docker image in Kubernetes mode
[INFO] k8s: [test:1.3.0]: Created docker-build.tar in 160 milliseconds
[INFO] k8s: [test:1.3.0]: Built image sha256:d4fb4
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.483 s
[INFO] Finished at: 2021-07-27T14:23:10+02:00
[INFO] Final Memory: 31M/273M
[INFO] ------------------------------------------------------------------------
[vagrant@localhost vagrant-jkube-connection-reset-by-peer]$ docker run d4fb4

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

@manusa
Member

manusa commented Jul 27, 2021

For future reference, also note that user/group id, name, etc. are not set in the TarArchiveEntry entities.

https://github.com/codehaus-plexus/plexus-archiver/blob/ad3f2f324271f9303be0d5511e2a9cab5130a840/src/main/java/org/codehaus/plexus/archiver/tar/TarArchiver.java#L296-L313

@rohanKanojia
Member

Let me create a separate issue for this.

@pmbsa

pmbsa commented Nov 23, 2021

Yes, if I build a service that uses the JKube-generated Dockerfile it works fine. When I build a service that has a Dockerfile of its own, everything seems to work OK until I add fileSets to the assembly (possibly pushing the tar over a certain size, I'm not sure). If I just add a single file to the assembly, for example, I'm OK.

@manusa
Member

manusa commented Nov 23, 2021

Maybe there's something wrong with the assembly and the generated file descriptor.
It would be of great help if you could create a reproducer project, since I think your issue is completely unrelated to this one (despite the same error message).

@pmbsa

pmbsa commented Nov 23, 2021

That gets really hard for me to do; I have to sell my soul to the gods of banking before I am allowed to do stuff like that, but I will try.
It's worth mentioning, though, that the structure in the target/docker folder is perfectly formed for the image. If I run docker build from the folder that has the generated Dockerfile, I am able to build the image perfectly and stand up the container.

@pmbsa

pmbsa commented Nov 23, 2021

docker-file-provided.zip
Hi @manusa, I have used one of your examples, modified to recreate the problem (using all freely available stuff). The scenario is that we build against a remote instance of Docker (17.12, as I said). The only difference between the project in the zip and the one within my organisation is the base image; I don't have access to the fabric8 base image and you of course won't have access to our base image. I know the base image is good, though (if a bit fat).

I have recreated the same project on my private laptop (the Docker version there is 20.10, which I can't easily regress). The problem does not manifest there. I am not surprised, though; it's a local and newer Docker.

I suspect this may be related to the size of the tar; if I choose a much smaller jar file in the assembly I don't get the problem.

@pmbsa

pmbsa commented Feb 11, 2022

Hi @manusa, don't suppose there is any way you can find some time to check this out for me? I just upgraded to the latest release in the vain hope I might get lucky, but the problem persists. It's a real issue for me: I need to bring our organisation up to JKube, but there is no way I can do that unless we can get our Dockerfile-based services working as well.

cheers
Paul

@manusa
Member

manusa commented Feb 11, 2022

Sorry, it seems that I didn't get the notification for this issue.
I'll try to check it out.

@pmbsa

pmbsa commented Apr 12, 2022

Hi @manusa, sorry to ping you again. Have you managed to look at this again at all?

@manusa
Member

manusa commented Apr 13, 2022

Hi Paul, sorry, still no time. I'll reopen the issue and try to allocate some time for us to investigate.

@manusa manusa reopened this Apr 13, 2022
@manusa manusa moved this to Planned in Eclipse JKube Apr 27, 2022
@pmbsa

pmbsa commented May 25, 2022

Hi @manusa, sorry to bump this again. Any time to look at this at all?

@manusa
Member

manusa commented May 25, 2022

Hi @pmbsa
It's in our sprint plan but haven't had the chance to investigate yet.
We're currently focused on releasing JKube (yesterday -> 2022-05-24), and Fabric8 Kubernetes Client. 😓

@rohanKanojia
Member

rohanKanojia commented Jun 24, 2022

@pmbsa : I'm trying to reproduce your issue in the Vagrant box from https://github.com/davidecavestro/vagrant-jkube-connection-reset-by-peer.

When I run mvn k8s:build in your project I'm seeing this error:

[ERROR] k8s: Failed to execute the build [Error while trying to build the image: Unable to build image [jkube/context-and-assembly] : Connection reset by peer]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  02:04 min
[INFO] Finished at: 2022-06-24T13:44:27+02:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.5.1:build (default-cli) on project docker-file-provided: Failed to execute the build: Error while trying to build the image: Unable to build image [jkube/context-and-assembly] : Connection reset by peer -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Were you seeing the same issue with your project?

I can confirm that for me it's failing in Dockerfile mode only. If I add a custom configuration rather than using a Dockerfile, I'm able to run k8s:build successfully.

@rohanKanojia rohanKanojia self-assigned this Jun 24, 2022
@rohanKanojia rohanKanojia moved this from Planned to In Progress in Eclipse JKube Jun 24, 2022
@pmbsa

pmbsa commented Jun 24, 2022

Yes, that's exactly what we see. The Docker host seems to reject the tar file, or something to that effect. We have a fair few Dockerfile-based projects that need migrating, so I would be super appreciative if you could find a solution.

rohanKanojia added a commit to rohanKanojia/jkube that referenced this issue Jun 24, 2022
…e to zero for directories (eclipse-jkube#777)

While fixing the tarball issue in eclipse-jkube#793, we added
`tarEntry.setSize(0L)` in JKubeTarArchiver to fix tarball compatibility
with old Docker daemons. However, it is set only when the file is not
present in `fileModeMap`.

In the case of Dockerfile-based builds, `fileModeMap` seems to contain file
modes for different files and directories. Hence, we're not able to set
the TarEntry size to zero.

Move `tarEntry.setSize(0L)` out of the fileMode-related `if-else` block.
@rohanKanojia
Member

@pmbsa : I think this is the same problem we fixed before. We had added tarEntry.setSize(0L) to set the directory tarEntry size to 0.

https://github.com/eclipse/jkube/blob/61345823be63c73cf334649e33e9c5146f9d5f7a/jkube-kit/common/src/main/java/org/eclipse/jkube/kit/common/archive/JKubeTarArchiver.java#L80-L85

However, in the case of Dockerfile mode the directory seems to be present in fileModeMap, so we never get into the else if block, and the directory tarEntry size is not set.

Moving this out of the fileMode if-else block seems to fix the problem on my vagrant setup:

         } else if (currentFile.isDirectory()) {
-          tarEntry.setSize(0L);
           tarEntry.setMode(TarArchiveEntry.DEFAULT_DIR_MODE);
         }
+        if (currentFile.isDirectory()) {
+          tarEntry.setSize(0L);
+        }

rohanKanojia added a commit to rohanKanojia/jkube that referenced this issue Jun 27, 2022
…e to zero for directories (eclipse-jkube#777)

@rohanKanojia rohanKanojia moved this from In Progress to Review in Eclipse JKube Jun 27, 2022
rohanKanojia added a commit to rohanKanojia/jkube that referenced this issue Jul 12, 2022
…e to zero for directories (eclipse-jkube#777)

@manusa manusa modified the milestones: 1.4.0, 1.9.0 Jul 12, 2022
manusa pushed a commit that referenced this issue Jul 12, 2022
…e to zero for directories (#777)

Repository owner moved this from Review to Done in Eclipse JKube Jul 12, 2022
baruKreddy pushed a commit to baruKreddy/jkube that referenced this issue Aug 11, 2022
…e to zero for directories (eclipse-jkube#777)

@pmbsa

pmbsa commented Oct 11, 2022 via email
