
Consider using ArrayPool<byte> as underlying storage mechanism #314

Open
benmwatson opened this issue Nov 21, 2023 · 4 comments

benmwatson (Member) commented Nov 21, 2023
Pros? Cons?

Would it simplify implementation?

@benmwatson benmwatson added this to the 3.0 milestone Nov 21, 2023
sgorozco commented Nov 21, 2023

Hi Ben!
I really like your current implementation A LOT. It is very readable, tailor-made for the use case, and very easy to reason about, with nice event hooks for observing operations and very useful debugging utilities.

With ArrayPool being an abstract class, things are a bit blurrier. The Shared implementation looks complex, with multi-core optimizations that might improve performance, but then you would be relying on a singleton that is heavily used internally by the runtime libraries (at least in the .NET Standard 2.0 libraries), and that contention might dwarf the optimizations. The non-shared implementation looks much simpler but is restricted to working internally with buffers sized in powers of two, and last time I checked, it protects concurrent access via a SpinLock. It may offer better performance, but personally I prefer being able to directly inspect the code, as in your current implementation.
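To make the contrast above concrete, here is a small sketch of the two stock pools being discussed. The bucket-rounding behavior shown in the comments is how the built-in pools generally behave, but exact bucket sizes are an implementation detail, so treat them as an assumption rather than a guarantee:

```csharp
using System;
using System.Buffers;

class ArrayPoolComparison
{
    static void Main()
    {
        // The shared pool is a process-wide singleton, also used internally by
        // the runtime libraries, so its contents are shared with everything else.
        byte[] shared = ArrayPool<byte>.Shared.Rent(1000);
        Console.WriteLine(shared.Length); // at least 1000, rounded up to a bucket size
        ArrayPool<byte>.Shared.Return(shared);

        // ArrayPool<byte>.Create builds a private pool with power-of-two buckets
        // and configurable limits on array length and count per bucket.
        ArrayPool<byte> privatePool = ArrayPool<byte>.Create(
            maxArrayLength: 1 << 20, maxArraysPerBucket: 50);
        byte[] rented = privatePool.Rent(1000);
        Console.WriteLine(rented.Length >= 1000); // True
        privatePool.Return(rented, clearArray: true);
    }
}
```

Note that `Rent` only promises a buffer of *at least* the requested length, which is the root of the memory-waste concern raised later in this thread.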

Currently I am using the library to implement extremely efficient channel multiplexing. I am concurrently serializing hundreds of large DTOs to Recyclable streams, and the data slicing that enables the multiplexing is handled entirely by your library, with zero copies and zero allocations. According to the profiler, the "bottleneck" is now, curiously, the Math.DivRem() operations used to locate the buffer in which to place a byte! I am really thankful.
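The DivRem hotspot mentioned above comes from mapping an absolute stream position onto a chain of fixed-size blocks. This is a hypothetical sketch of that mapping (the `BlockSize`, type, and method names are illustrative, not RMS's actual internals); it also shows the shift-and-mask form that a power-of-two block size permits, which is typically far cheaper than a general divide:

```csharp
using System;

// Hypothetical sketch: map an absolute stream position to a (block, offset)
// pair when data lives in a chain of fixed-size blocks.
static class BlockAddressing
{
    // Assumed block size for illustration only; RMS's configured block size may differ.
    public const int BlockSize = 128 * 1024; // 2^17

    public static (long blockIndex, long blockOffset) Locate(long position)
    {
        long blockIndex = Math.DivRem(position, BlockSize, out long blockOffset);
        return (blockIndex, blockOffset);
    }

    // When BlockSize is a power of two, the divide/remainder pair reduces to
    // a shift and a mask, removing the division from the hot path.
    public static (long blockIndex, long blockOffset) LocateFast(long position)
        => (position >> 17, position & (BlockSize - 1));
}
```

Both forms agree for any non-negative position; for example, position 300,000 falls in block 2 at offset 37,856.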

I'm curious about your own thoughts. What are your reasons for considering such a change?

benmwatson (Member, Author) commented:
It's been asked a couple of times, so I created this as a place for discussion.

@benmwatson benmwatson removed this from the 3.0 milestone Nov 21, 2023
grbell-ms (Member) commented:
Adding to what @sgorozco said:

  • ArrayPool<byte>.Shared only caches up to 32 arrays of the same size (not counting thread-local caches). It would be extremely easy to exhaust or flood the shared array pool if RMS started using it.

  • ArrayPool<byte>.Create() returns a ConfigurableArrayPool<byte>, which can return buffers up to twice as large as requested, making it very easy to waste memory.

RMS would need its own array pool implementation, which wouldn't simplify anything. However, it may be worthwhile to open up an extension point that lets users control allocation. If we used ArrayPool<byte> as the interface, RMS should throw if either of the default implementations is used, and document how a custom pool should behave.
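The extension-point idea above might look something like the following. This is a hypothetical sketch (the `ExactSizePool` name and its non-pooling body are inventions for illustration): a custom `ArrayPool<byte>` subclass that avoids the power-of-two rounding of the stock pools by honoring the requested size exactly, which is the behavior a hosting library like RMS would presumably need to document and enforce:

```csharp
using System;
using System.Buffers;

// Hypothetical custom pool illustrating the extension-point idea. Unlike
// ConfigurableArrayPool's power-of-two buckets, it returns exactly-sized arrays.
sealed class ExactSizePool : ArrayPool<byte>
{
    public override byte[] Rent(int minimumLength)
    {
        if (minimumLength < 0) throw new ArgumentOutOfRangeException(nameof(minimumLength));
        // A real implementation would reuse cached arrays here;
        // this sketch simply allocates exactly the requested length.
        return new byte[minimumLength];
    }

    public override void Return(byte[] array, bool clearArray = false)
    {
        if (array == null) throw new ArgumentNullException(nameof(array));
        if (clearArray) Array.Clear(array, 0, array.Length);
        // A real implementation would stash the array for later reuse here.
    }
}
```

Because `ArrayPool<byte>` is abstract with just these two members, it is a plausible shape for such an extension point; the open question raised above is how to reject the default implementations and specify the contract a custom pool must satisfy.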

AnthonyLloyd commented Feb 16, 2024

DOTNET_SYSTEM_BUFFERS_SHAREDARRAYPOOL_MAXARRAYSPERPARTITION can be used to set the number of arrays cached.

I think there should also be a way to configure RMS to use ArrayPool<byte>.Shared.
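For reference, the knob mentioned above is an environment variable read by the runtime at startup (.NET 6 and later); the value and app name below are placeholders, not recommendations:

```shell
# Hypothetical deployment tweak: raise the shared pool's per-partition array cap
# before launching the application.
export DOTNET_SYSTEM_BUFFERS_SHAREDARRAYPOOL_MAXARRAYSPERPARTITION=128
dotnet MyApp.dll
```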
