Add option for specifying data precision #168
Comments
Just to add, this was discussed briefly with @adam2392 in #163 (comment).
I'm in favor of supporting this. @larsoner do you have any opinions about how to expose this option? Some ideas off the cuff:
I'm sure there are other possibilities too...
Just to give an idea of something I used for a package I wrote: I had a class that was initialised by default with double precision. The initialised object would be imported in any class involving computations (example) and used to specify the data type of the arrays.
If you wanted to change the precision, you could call a setter method on that object.
Not saying it's the best method, but it worked for me in the past.
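A minimal sketch of that pattern, assuming a module-level `Precision` object (the names here are illustrative, not an existing mne-connectivity API):

```python
import numpy as np

class Precision:
    """Holds the dtypes used for computations (illustrative only)."""

    def __init__(self):
        # double precision by default
        self.real = np.float64
        self.complex = np.complex128

    def set(self, precision):
        if precision == "double":
            self.real, self.complex = np.float64, np.complex128
        elif precision == "single":
            self.real, self.complex = np.float32, np.complex64
        else:
            raise ValueError(f"Unknown precision: {precision!r}")

# single shared instance, imported wherever arrays are allocated
precision = Precision()

def make_csd_buffer(n_signals, n_freqs):
    # work arrays are created in the currently configured precision
    return np.zeros((n_signals, n_signals, n_freqs), dtype=precision.complex)
```

Switching precision is then a single call, e.g. `precision.set("single")`, made once before running the analysis.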
Global states usually end up being a pain. I would start small -- concretely, where is this needed/useful? For example, if it's only needed in one function, we could just add an option there. Then, if this ends up being useful in a lot of places, we can find nicer ways of handling it.
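For illustration, a "start small" version could simply thread a dtype argument through the one memory-heavy function, avoiding global state entirely (the function name and signature below are hypothetical):

```python
import numpy as np

def connectivity_over_freqs(data, dtype=np.complex128):
    """Hypothetical helper with a per-call dtype; not an existing function."""
    data = np.asarray(data)
    n_signals = data.shape[0]
    # the large intermediate array is allocated in the requested precision
    csd = np.zeros((n_signals, n_signals), dtype=dtype)
    for i in range(n_signals):
        for j in range(n_signals):
            csd[i, j] = np.vdot(data[i], data[j])
    return csd

# opt in to single precision only for the memory-heavy call
csd = connectivity_over_freqs(np.random.randn(4, 1000), dtype=np.complex64)
```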
Describe the problem
When performing analyses at high sampling frequencies with both time- and frequency-resolved connectivity (the wavelet method), I frequently run into memory (and time) limits, even when running on high-performance clusters (with up to 200 GB RAM).
This is of course also related to the fact that I work with high sampling frequencies and bootstrapping methods, and certainly to the fact that I use multivariate methods like MVGC, where large matrices have to be created for the computation.
I have found that one part of the solution, next to reducing the number of sampled frequencies and running analyses in sequence, was to explicitly use lower-precision data types in the source code (e.g. np.complex64 instead of np.complex128 and np.float32 instead of np.float64).
The results did not change significantly for my analyses, but memory usage was often almost cut in half and computation time was also reduced.
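For context on those memory figures: halving the dtype width halves the array footprint, which is easy to verify without allocating anything (the shape below is just an assumed example):

```python
import numpy as np

# assumed shape, for illustration: (signals, signals, freqs, times)
shape = (64, 64, 200, 5000)
n_elem = np.prod(shape)

for dtype in (np.complex128, np.complex64):
    gb = n_elem * np.dtype(dtype).itemsize / 1e9
    print(f"{np.dtype(dtype).name}: {gb:.1f} GB")
# complex128: 65.5 GB
# complex64:  32.8 GB  (half the footprint)
```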
Describe your solution
It would be awesome to have the option to reduce the precision of the calculations if desired (default would obviously remain the highest possible precision).
This could, for example, be implemented in a function like
mne_connectivity.set_precision("full") # "half"
or alternatively be more specific, e.g.
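As a rough sketch of what both variants could look like (none of these names exist in mne-connectivity yet; the dtype-based setter is just one guess at what "more specific" might mean):

```python
import numpy as np

# named precision levels mapped to (real, complex) dtypes
_PRECISIONS = {
    "full": (np.float64, np.complex128),
    "half": (np.float32, np.complex64),
}
_current_dtypes = _PRECISIONS["full"]

def set_precision(precision):
    """Coarse option: pick a named precision level ("full" or "half")."""
    global _current_dtypes
    _current_dtypes = _PRECISIONS[precision]

def set_dtypes(real_dtype=np.float64, complex_dtype=np.complex128):
    """More specific option: pass the desired NumPy dtypes directly."""
    global _current_dtypes
    _current_dtypes = (np.dtype(real_dtype), np.dtype(complex_dtype))
```

Internally, computation functions would read `_current_dtypes` when allocating arrays; a per-call `dtype` argument (as in the sketch further up) would achieve the same without the global state that was flagged as a concern above.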
I hope this is something that could be considered! Maybe @tsbinns you have some thoughts on this, or on potential implementation details?