When handling two-level hierarchies, it's often useful to limit parallelism.
Here's what I often do, in pseudocode:
level1 = ids.map(|id| ctx.read(id)).collect() // Load the first level in parallel.
for idx in level1 {
    level2 = idx.map(|id| ctx.read(id)).collect() // Load the second level in parallel.
    ...
}
Now, I don't need the entirety of the first level loaded when I'm handling the second level. In fact, loading the whole first level up front might be seen as inefficient in terms of memory use and ordering of the AIO operations. What I need is to prefetch some of the first level (maybe just a single entry) in parallel while focusing on the second level.
I wonder what the proper/best API for this might look like.
P.S. There seem to be some explicit parallelism adapters already. On the one hand, join_all, which runs all futures in parallel; on the other hand, buffer_unordered, which runs a fixed chunk of the futures in parallel. We need something in between: an adapter that eventually runs all the futures, but no more than a couple of them at a time.
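Something like the following might already get close for the two-level case, using buffered to keep a bounded number of first-level reads in flight while the second level is processed. The Ctx and Entry types here are just hypothetical stand-ins for illustration, not a real API:

use futures::stream::{self, StreamExt};

// Hypothetical stand-ins; `ctx.read` is assumed to be an async (AIO) operation.
struct Ctx;
struct Entry { children: Vec<u64> }

impl Ctx {
    async fn read(&self, _id: u64) -> Entry {
        Entry { children: Vec::new() } // placeholder for the real read
    }
}

async fn walk(ctx: &Ctx, ids: Vec<u64>) {
    // Keep at most 2 first-level reads in flight: the entry being processed
    // plus one prefetched in the background.
    let level1 = stream::iter(ids).map(|id| ctx.read(id)).buffered(2);
    futures::pin_mut!(level1);

    while let Some(entry) = level1.next().await {
        // Load this entry's children with bounded parallelism: all of them
        // are eventually loaded, but no more than 8 are in flight at a time.
        let level2: Vec<Entry> = stream::iter(entry.children)
            .map(|id| ctx.read(id))
            .buffer_unordered(8)
            .collect()
            .await;
        // ... handle `level2` ...
    }
}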
P.S. Another use case, in pseudocode:
let oids = [...];
let stats = parallelize(99, oids.map(|oid| ctx.stat(oid)));
for bytes in parallelize(9, stats.map(|stat| ctx.read(stat.oid, stat.size))) {...}
We often have a collection that we want to generate futures from, but we don't want to generate all the futures immediately (that makes the app pause and eats RAM), so there needs to be some kind of buffer that applies a map in advance, but only for a limited number of entries. This seems to be entirely generic: we take a simple Iterator and produce a different Iterator that buffers a fixed number of entries, that's all.
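In stream terms, the second example might look roughly like this. stream::iter is lazy, so the map closures only produce as many futures as the buffering adapter asks for; again, Ctx, stat and read are hypothetical stand-ins matching the pseudocode above:

use futures::stream::{self, StreamExt};

// Hypothetical stand-ins for the `stat`/`read` operations in the pseudocode.
struct Ctx;
struct Stat { oid: u64, size: u64 }

impl Ctx {
    async fn stat(&self, oid: u64) -> Stat { Stat { oid, size: 0 } }
    async fn read(&self, _oid: u64, _size: u64) -> Vec<u8> { Vec::new() }
}

async fn pipeline(ctx: &Ctx, oids: Vec<u64>) {
    // The closures run lazily: at most 99 `stat` futures and 9 `read`
    // futures exist at any time, instead of one future per entry up front.
    let stats = stream::iter(oids).map(|oid| ctx.stat(oid)).buffered(99);
    let reads = stats.map(|stat| ctx.read(stat.oid, stat.size)).buffered(9);
    futures::pin_mut!(reads);

    while let Some(_bytes) = reads.next().await {
        // ... handle the bytes ...
    }
}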