[Question] Performance bottleneck for large files #59

Open · lfourquaux opened this issue Jun 14, 2020 · 4 comments

@lfourquaux (Contributor)

Where is the performance bottleneck for reading large files through diod?

Here is a micro-benchmark, reading a 1GB file over a 1Gb/s Ethernet connection (0.4ms latency, MTU 9000):

- Linux v9fs: 300Mb/s
- diodcat (default msize): 400Mb/s
- direct TCP (netcat): 900Mb/s

Why is diod so much slower at transferring data than netcat? Would it be possible to increase the maximum msize on the server, and would that help?
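
For intuition, here is a rough serialized-request model (my own back-of-the-envelope assumption; it ignores protocol overhead and any read-ahead pipelining): if only one read is in flight at a time, each msize-sized chunk costs one round trip plus its time on the wire, so a small msize leaves the link idle for most of each round trip.

```c
/* Back-of-the-envelope throughput model, NOT a measurement:
 * assume reads are fully serialized, so each msize-sized chunk
 * costs one RTT plus its transmission time on the wire. */
#include <stdio.h>

int main(void)
{
    const double rtt_s = 0.0004;    /* 0.4 ms round-trip latency */
    const double wire_bps = 1e9;    /* 1 Gb/s link               */
    const double msize_bits[] = { 64.0 * 1024 * 8, 1024.0 * 1024 * 8 };

    for (int i = 0; i < 2; i++) {
        double per_req_s = rtt_s + msize_bits[i] / wire_bps;
        double mbps = msize_bits[i] / per_req_s / 1e6;
        printf("msize = %4.0f KB -> ~%.0f Mb/s\n",
               msize_bits[i] / 8192, mbps);
    }
    return 0;
}
```

This model predicts roughly 570Mb/s at 64K and 950Mb/s at 1M, which at least matches the direction of the numbers above.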

@garlick (Member) commented Jun 15, 2020

I'm not sure. As I recall, when I was really working on performance I used a 64K msize. That was a long time ago.

@lfourquaux (Contributor, Author)

I tried changing the hardcoded msize=64K in diod/ops.c to msize=1M. Hopefully there is no hidden assumption elsewhere that msize stays small. (Maybe it could be made configurable from the command line? A rough sketch of that idea follows.)
Using diodcat (also with msize=1M), I get close to the raw TCP throughput. On the other hand, v9fs limits msize to 64K for the fd transport (see MAX_SOCK_BUF in net/9p/trans_fd.c), so this does not help the kernel client much. Maybe we should suggest raising MAX_SOCK_BUF (to 2M, say)?
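
A minimal sketch of the command-line idea, assuming a getopt-style parser; the option letter, variable names, and plumbing here are hypothetical, not diod's actual code:

```c
/* Hypothetical sketch: replacing a hardcoded 64K msize with a
 * command-line option. None of these names come from diod itself. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DEFAULT_MSIZE (64 * 1024)

int main(int argc, char *argv[])
{
    int msize = DEFAULT_MSIZE;
    int c;

    while ((c = getopt(argc, argv, "m:")) != -1) {
        switch (c) {
        case 'm':
            msize = atoi(optarg);   /* e.g. -m 1048576 for 1M */
            break;
        default:
            fprintf(stderr, "Usage: %s [-m msize]\n", argv[0]);
            return 1;
        }
    }
    printf("negotiating msize=%d\n", msize);
    /* ...pass msize to the 9P version negotiation instead of the
     * hardcoded constant... */
    return 0;
}
```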

@garlick (Member) commented Jun 16, 2020

I vaguely remember that v9fs or the 9p transport requires allocating msize-plus-overhead slab buffers, so the current maximum msize may be the largest that can be handled practically. Maybe that is no longer an issue, though: the kernel has changed a lot since I last studied this (plus my memory is not great).

@nkichukov (Contributor)

This will be fixed by #67 combined with running a 5.15 kernel. (I am not sure whether the kernel change is a hard prerequisite, but my tests were done only with a patched kernel, i.e. with MAX_SOCK_BUF set to 1MB.)

I have backported the kernel changes mentioned in the pull request to 5.14.10, backported the patch to diod 1.0.24, and confirmed that the setup runs stably and that throughput on large files is much higher than with the previous 64KB msize maximum.
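
For reference, the kernel-side change being described boils down to raising one constant in net/9p/trans_fd.c; this is a sketch of its shape, not the actual upstream patch:

```c
/* Sketch of the net/9p/trans_fd.c change (the real patch may differ).
 * The fd transport clamps the negotiated msize via MAX_SOCK_BUF;
 * older kernels capped it at 64K: */
#define MAX_SOCK_BUF (64 * 1024)

/* Raising the cap to 1MB, as in my tests above, lets the client
 * negotiate a larger msize with a server that supports it: */
#undef MAX_SOCK_BUF
#define MAX_SOCK_BUF (1024 * 1024)
```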
