Multipart temp files are being held open by s3proxy after successful upload leading to .fuse_hidden files #729
Comments
How are you using S3Proxy? Using the filesystem storage backend with an NFS directory?
s3fs, but I guess it does something similar to NFS with temp files.

-rw-r----- 1 root root 0 Nov 25 03:30 .fuse_hidden0000004400000038
Coincidentally I work on s3fs as well, but I don't understand how you have configured your system. Does s3fs connect to S3Proxy via S3, or does S3Proxy use s3fs via the filesystem provider? It is certainly possible that some…
Cool. Yes, it's using jclouds to an s3fs mount, so that would make sense. I don't know, I feel like I'm creating extra work for you here. Is that the easiest way to stop the hidden files or debug it? It's too bad the s3fs "hard_remove" option didn't help here.
I could try using an NFS mount and rsyncing instead for this app. Not sure if NFS temp files will have issues too. It seems like previous S3Proxy releases weren't doing this, though. I'll let you know if I find anything out.
Sorry, I haven't looked into this yet. It isn't a waste of time and could be a more serious issue if many open file handles accumulate. I have just committed a new storage backend…
Tried it; it seemed to be creating temp files with UUIDs... then this happened:

[706752.067790] s3fs[21504]: segfault at 0 ip 00005645f790f1d0 sp 00007f8af4ff8770 error 4 in s3fs[5645f7907000+8b000]

Total death.
This looks like some kind of memory corruption. Are there any more logs from S3Proxy or s3fs you could share? But attaching…
Right, I'm not sure when I'll get time to mess with it more, but I will report back when I can.
I tested this with NFS and it had a similar problem: everything looked great until I tried to delete the bucket directory. It looks like S3Proxy is holding onto temp files. Stopping the S3Proxy container lets me delete the files.
2024-12-09 16:18 52428800 s3://test/.nfs000000000000020d00000034

The same thing is happening with S3Proxy containers from 6 and 9 months ago too. Hmmm. Stopping the container causes those files to disappear. Why does it seem like this wasn't happening in the past? Do I have a config issue?
andrewgaul/s3proxy:sha-8165be6 doesn't appear to have this behavior... it is almost 1 year old, though. Yeah, zero temp files left over - it seems like this is a bug introduced sometime after that release. I will also test s3fs with that release, but I have a feeling the temp files will be gone, since I had this working at one point. So now the question is what was changed in s3proxy that is keeping files open needlessly.
FUSE temp files are also not an issue in S3Proxy from 11 months ago. They are removed once the PUT completes and the parts are assembled, etc.
apache/jclouds@b7f28f1 introduced a major change in behavior which may be something to look at. filesystem-nio2 uses the same logic.
That makes sense; that's probably it. So... I guess there should be an option to disable that feature for network storage like NFS and s3fs? Unless there can be a definite and clear timeout after which files are closed, so they won't be hanging around very long. The ability to hide certain temp/hidden files when listing buckets may also be a good idea.
It really isn't helpful to look at older versions of the code, or to add multiple code paths that each have their own set of buggy behavior. I tried to reproduce your symptoms using filesystem-nio2 but was not able to. So you will need to provide a better self-contained test, or we can wait for someone else to do so.
You used the latest Docker image and an NFS-mounted directory? I tried Ubuntu 22.04 and 24.04, NFSd from TrueNAS, both a very old and a very new one. It's a multipart upload of 1 GB or more. I will test with smaller files just in case. There is clearly something up here, though.

My initial proposal of reverting your previous commit was just a quick way to make it function again, if indeed that was the cause. If I want to use s3proxy going forward I may just need to do that myself until others chime in here. What I think happened is that docker pull s3proxy:latest was never updated, so most people are running the 11-month-old version. Now that it is, we may see others show up with a similar issue.

Update: it happens even with 320 KB files. Hidden temp files remain after the upload; stopping the Docker container clears them all. Possibly when compiling this without Docker it doesn't have this issue? Not sure why you couldn't recreate it. It also doesn't matter which S3 client is used, btw.
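For reference, a minimal sketch of the kind of multipart upload being described, using the AWS SDK for Java v1; the endpoint, region, credentials, bucket name, and file path below are placeholders, not the actual setup from this thread:

```java
import java.io.File;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class MultipartRepro {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder endpoint and credentials for a local S3Proxy instance
        // whose filesystem provider is backed by an NFS or s3fs mount.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration("http://127.0.0.1:8080", "us-east-1"))
                .withPathStyleAccessEnabled(true)
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("local-identity", "local-credential")))
                .build();

        // Lower the multipart threshold so even modest files upload in parts.
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .withMultipartUploadThreshold(5L * 1024 * 1024)
                .build();

        Upload upload = tm.upload("test", "big.bin", new File("/tmp/big.bin"));
        upload.waitForCompletion();
        tm.shutdownNow();
        // After this completes, check the backing mount for .nfs* / .fuse_hidden* leftovers.
    }
}
```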
Maybe a new jclouds provider called nas could be used instead of the filesystem provider, behaving differently to support NFS- and s3fs-mounted directories? Its initial code could simply be the filesystem code from 11 months ago, just to start with something non-broken. Assuming it's actually jclouds causing this behavior and not something else in s3proxy, obviously.
No. I don't think you realize the scope of what you are asking for or the incompleteness of your reported issue, but it might be best to just use some other software at this point.
I'm trying to report bugs here to help you too. Clearly there is a bug when attempting to use the latest Docker images with NFS/s3fs-mounted directories accessed through the jclouds filesystem provider. You don't need to support using it with NFS/s3fs on the backend; you can keep it as is, it has other uses. It's like you're arguing that there is no issue when every combination I have tried results in the same temp files while previous versions do not. It really doesn't need to be any more complicated than that. I'm not trying to bother you here, though, so I'll shut up now.
What I'm seeing is the initial .nfs temp files that cannot be removed with "rm"; however, in not much time (minutes), they can then be deleted. My understanding is that once the app releases the lock on the file / closes it, the file should be deleted. Instead it isn't. Why this happens on newer S3Proxy and not older ones, I don't know. My temp solution is to just run a cronjob every minute to delete the .nfs temp files with a simple "rm" so they get cleaned up quickly. It seems to work well enough. I have not yet tried this against an s3fs-mounted directory on the host system using jclouds, for removing the FUSE temp files.
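A rough Java equivalent of that cron + rm cleanup, in case anyone wants to script it on the host; the bucket directory path and the one-minute grace period are assumptions, and, as the next comment notes, this approach does not help with s3fs:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Duration;
import java.time.Instant;

public class StaleNfsTempCleanup {
    public static void main(String[] args) throws IOException {
        // Assumed path to the exported bucket directory; adjust to your layout.
        Path bucketDir = Paths.get(args.length > 0 ? args[0] : "/srv/s3proxy/test");
        // Only delete files older than a minute, mirroring the per-minute cron job.
        Instant cutoff = Instant.now().minus(Duration.ofMinutes(1));

        try (DirectoryStream<Path> stream = Files.newDirectoryStream(bucketDir, ".nfs*")) {
            for (Path p : stream) {
                if (Files.getLastModifiedTime(p).toInstant().isBefore(cutoff)) {
                    // deleteIfExists: the NFS client may have already cleaned it up.
                    Files.deleteIfExists(p);
                }
            }
        }
    }
}
```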
FYI - the cronjob hack does not work with s3fs: .fuse_hidden files are endlessly recreated until the container is restarted. After a restart they are all cleared.
Here's an example of what seems to be happening:
https://forum.rclone.org/t/s3-upload-file-introduces-fuse-hidden-file-and-s3-go-remove-method-is-called-immediately/24854
So because FUSE thinks the file is still open, it doesn't fully delete it, leading to .fuse_hidden files that are visible with an s3 ls.
I tried the "hard_remove" option but it isn't helping. I didn't have this behavior with versions from before about 5 months ago, and the uploads seem to complete fine. I will try to debug when I get the time.
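For anyone wanting to see that mechanism in isolation, here is a small sketch: it opens a file on an NFS- or s3fs-mounted directory (the path is an assumption), deletes it while the handle is still open, and lists the .nfs* / .fuse_hidden* placeholder the client leaves behind until the last handle is closed:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SillyRenameDemo {
    public static void main(String[] args) throws IOException {
        // Assumed mount point: point this at an NFS- or s3fs-mounted directory.
        Path dir = Paths.get(args.length > 0 ? args[0] : "/mnt/nfs-test");
        Path file = dir.resolve("demo.bin");
        Files.write(file, new byte[1024 * 1024]);

        // Keep a handle open, then delete the file while it is still open.
        try (RandomAccessFile handle = new RandomAccessFile(file.toFile(), "r")) {
            Files.delete(file);
            // NFS/FUSE cannot truly unlink an open file, so the client renames it
            // to .nfsXXXX / .fuse_hiddenXXXX instead, which shows up in listings.
            try (DirectoryStream<Path> leftovers =
                    Files.newDirectoryStream(dir, "{.nfs*,.fuse_hidden*}")) {
                leftovers.forEach(p -> System.out.println("leftover while open: " + p));
            }
        }
        // Once the last handle is closed, the client removes the placeholder.
    }
}
```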