[Question] Performance bottleneck for large files #59
Comments
I'm not sure. As I recall, when I was really working on performance, I used a 64K msize. That was a long time ago.
I tried to change the hardcoded msize=64K in diod/ops.c to msize=1M. Hopefully there is no hidden assumption elsewhere that msize cannot be that large. (Maybe it could be made configurable from the command line?)
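As a rough illustration of making msize configurable rather than hardcoded (this is not diod's actual option-handling code; the option name, limits, and helper below are assumptions for the sketch), parsing and sanity-checking an `--msize` argument could look like this:

```c
/* Hypothetical sketch (not diod's actual code): parse an --msize option
 * and clamp it to a sane range instead of using a hardcoded 64K value. */
#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

#define MSIZE_DEFAULT (64 * 1024)    /* current hardcoded value */
#define MSIZE_MIN     (4 * 1024)
#define MSIZE_MAX     (1024 * 1024)  /* assumed upper bound, e.g. 1M */

/* Parse a size like "65536", "64K", or "1M"; return 0 on success. */
static int parse_msize(const char *arg, unsigned int *msize)
{
    char *end = NULL;
    unsigned long val = strtoul(arg, &end, 10);

    if (*end == 'k' || *end == 'K') {
        val *= 1024UL;
        end++;
    } else if (*end == 'm' || *end == 'M') {
        val *= 1024UL * 1024UL;
        end++;
    }
    if (*end != '\0')
        return -1;
    if (val < MSIZE_MIN || val > MSIZE_MAX)
        return -1;
    *msize = (unsigned int)val;
    return 0;
}

int main(int argc, char *argv[])
{
    unsigned int msize = MSIZE_DEFAULT;
    static const struct option longopts[] = {
        { "msize", required_argument, NULL, 'm' },
        { NULL, 0, NULL, 0 },
    };
    int c;

    while ((c = getopt_long(argc, argv, "m:", longopts, NULL)) != -1) {
        if (c == 'm' && parse_msize(optarg, &msize) < 0) {
            fprintf(stderr, "invalid --msize '%s'\n", optarg);
            return 1;
        }
    }
    printf("using msize = %u bytes\n", msize);
    return 0;
}
```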
I vaguely remember that v9fs or the 9P transport requires allocating msize-plus-overhead slab buffers, so the current maximum msize may be the largest that can be handled practically. Maybe that is no longer an issue, though; the kernel has changed a lot since I last studied this (plus my memory is not great).
This will be fixed by #67. I have backported the kernel changes mentioned in that pull request to a 5.14.10 kernel, and also backported the patch to diod version 1.0.24. I have confirmed that the setup runs stably and that throughput when transferring big files is much higher than with the previous maximum msize of 64 KB.
Where is the performance bottleneck for reading large files through diod?
Here is a micro-benchmark:
Linux v9fs: 300 Mb/s
diodcat (default msize): 400 Mb/s
direct TCP (netcat): 900 Mb/s
This is for reading a 1 GB file through a 1 Gb/s Ethernet connection (latency 0.4 ms, MTU 9000).
Why is diod so much slower at transferring data than netcat? Would it be possible to increase the maximum msize on the server, and would that help?
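For context, here is a minimal standalone C sketch of how a sequential-read throughput number like the ones above could be measured. It is not the exact method used for these figures; the file path, buffer size, and output format are illustrative assumptions.

```c
/* Hypothetical micro-benchmark sketch: sequentially read a file and
 * report throughput. Path and buffer size are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : "/mnt/9p/bigfile"; /* hypothetical mount point */
    size_t bufsize = 1 << 20;                                  /* 1 MiB read buffer */
    char *buf = malloc(bufsize);
    struct timespec t0, t1;
    long long total = 0;
    ssize_t n;

    int fd = open(path, O_RDONLY);
    if (fd < 0 || buf == NULL) {
        perror("open/malloc");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = read(fd, buf, bufsize)) > 0)
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    free(buf);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    if (secs <= 0.0)
        secs = 1e-9; /* avoid division by zero for tiny files */
    printf("read %lld bytes in %.2f s: %.1f MB/s (%.1f Mb/s)\n",
           total, secs, total / secs / 1e6, total / secs * 8 / 1e6);
    return 0;
}
```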