Commit

fix some wording
samuelstjean committed Oct 22, 2017
1 parent e7c5ba1 commit 976d540
Showing 2 changed files with 5 additions and 5 deletions.
CHANGELOG.md (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@
 - PIESNO will now warn if less than 1% of noisy voxels were identified, which might indicate that something has gone wrong during the noise estimation.
 - On python >= 3.4, --mp_method [a_valid_start_method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) can now be used to control behavior in the multiprocessing loop.
 - A new option --split_b0s can be specified to split the b0s equally amongst the training data.
-- A new (kind of experimental) option --use_f32 can be specified to use the float32 mode of spams.
+- A new (kind of experimental) option --use_f32 can be specified to use the float32 mode of spams and reduce ram usage.
 - A new option --use_threading can be specified to disable python multiprocessing and solely rely on threading capabilities of the linear algebra libs during denoising.
 - Fixed crash in option --noise_est local_std when --cores 1 was also supplied.
 - setup.py and requirements.txt will now fetch spams v2.6, with patches for numpy 1.12 support.
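For context, the --mp_method flag from the changelog entry above exposes Python's multiprocessing start methods ('fork', 'spawn' or 'forkserver', depending on the platform). The following minimal sketch only illustrates what such a start method controls in plain Python; it does not use nlsam itself and the worker function is made up for the example.

```python
import multiprocessing

def square(x):
    # Trivial worker, only here to have something to map over.
    return x * x

if __name__ == '__main__':
    # Pick a start method explicitly instead of relying on the platform default.
    # 'spawn', 'fork' and 'forkserver' are the methods documented in the
    # multiprocessing module; availability depends on the operating system.
    ctx = multiprocessing.get_context('spawn')
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, range(4)))  # [0, 1, 4, 9]
```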
nlsam/denoiser.py (8 changes: 4 additions & 4 deletions)
@@ -71,13 +71,13 @@ def nlsam_denoise(data, sigma, bvals, bvecs, block_size,
 b0_threshold : int, default 10
 A b-value below b0_threshold will be considered as a b0 image.
 dtype : np.float32 or np.float64, default np.float64
-Precision to use for inner computation. Note that np.float32 should only be used for
-very, very large datasets (that is, you ram starts swappping) as it can lead to numerical precision errors.
+Precision to use for inner computations. Note that np.float32 should only be used for
+very, very large datasets (that is, your ram starts swappping) as it can lead to numerical precision errors.
 use_threading : bool, default False
 Do not use multiprocessing, but rather rely on the multithreading capabilities of your numerical solvers.
-While this mode is more memory friendly, it is undoubtedly slower than using the multiprocessing mode (the default).
+While this mode is more memory friendly, it is also slower than using the multiprocessing mode (the default).
 Moreover, it also assumes that your blas/lapack/spams library are built with multithreading, so be sure to check
-the resources usage of your computer to make sure it is the case or the algorithm will just take much longer to complete.
+that your computer is using multiple cores or the algorithm will just take much longer to complete.
 verbose : bool, default False
 print useful messages.
 mp_method : string
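To see how the documented keyword arguments fit together, here is a hedged sketch of a call to nlsam_denoise. The positional arguments and parameter names come from the signature and docstring shown in the hunk above; the array shapes, block size and sigma values are made up for illustration and are not taken from the nlsam documentation.

```python
import numpy as np
from nlsam.denoiser import nlsam_denoise  # module path taken from the diff header

# Toy inputs, only meant to exercise the parameter names documented above.
data = np.random.rand(20, 20, 20, 7)           # small 4D diffusion volume
sigma = np.full(data.shape[:3], 25.0)          # one noise estimate per voxel, illustrative
bvals = np.array([0, 1000, 1000, 1000, 1000, 1000, 1000])
bvecs = np.random.rand(7, 3)                   # gradient directions, not normalized, illustrative

denoised = nlsam_denoise(data, sigma, bvals, bvecs,
                         block_size=(3, 3, 3, 5),  # spatial + angular patch size, illustrative
                         b0_threshold=10,          # b-values below this count as b0 images
                         dtype=np.float64,         # np.float32 only for very large datasets
                         use_threading=False,      # keep the default multiprocessing mode
                         verbose=True)
```

On python >= 3.4, mp_method could also be passed a start method name such as 'spawn', mirroring the --mp_method flag mentioned in the CHANGELOG entry.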
