I've always thought in terms of "more memory is always better, because lower FPR". That's true as far as FPR goes, but there's evidence, at least for some computations, that runtime scales with memory consumption. So there's an optimization to be made here: what is the smallest amount of memory I can request that won't lead to big problems with FPR?
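The "smallest memory for an acceptable FPR" question can be sketched with the standard Bloom-filter approximations. This is just illustrative math, not the project's actual sizing code; the item count and FPR budget below are made-up numbers:

```python
import math

def min_bits_for_fpr(n_items: int, target_fpr: float) -> int:
    """Smallest filter size in bits keeping the false-positive rate at or
    below target_fpr, via m = -n * ln(p) / (ln 2)^2 (optimal hash count)."""
    return math.ceil(-n_items * math.log(target_fpr) / (math.log(2) ** 2))

def fpr(m_bits: int, n_items: int, k_hashes: int) -> float:
    """Approximate false-positive rate: (1 - e^(-k*n/m))^k."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# Hypothetical workload: 1e9 distinct k-mers, willing to tolerate 1% FPR.
n = 1_000_000_000
m = min_bits_for_fpr(n, 0.01)
print(f"{m / 8 / 2**30:.2f} GiB")  # the memory actually needed, rather than "as much as fits"
```

The point is that the required memory grows only with `-ln(p)`, so relaxing the FPR budget a little can shrink the table (and, per the observation above, the runtime) substantially.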
If this observation holds, and if @ctb and I aren't completely off our rockers, we should actually get a speedup when we parallelize (as in #15). If we run all the parallel processes at once, the total memory consumption will still be the same, but we would have the option to split it up.