Commit 65f7f43 (tag v0.7)

samuelstjean committed May 20, 2023
1 parent e2b1264

Showing 6 changed files with 14 additions and 16 deletions.
CHANGELOG.md: 4 changes (2 additions, 2 deletions)

```diff
@@ -1,6 +1,6 @@
 # Changelog

-## [0.7] Development version
+## [0.7] - 2023-05-20

 - **Breaking changes in the command line parser**
   - The previously required options __N__ and __angular_block_size__ are now optional.
@@ -24,7 +24,7 @@
 - New online documentation available at http://nlsam.readthedocs.io/ for the current (and future) versions.
 - The dictionary learning part of the algorithm now respects **--cores** instead of ignoring it and always using all available processors.
 - joblib is now used for parallel processing.
-  - For now it means we lose the frozen executable until they fix it.
+  - The frozen executable is now using dask and performs a bit slower than the normal version until joblib.loky is fixed to work with pyinstaller.
 - Binary wheels are now available for all platforms instead.
 - A new option to estimate automatically the noise distribution (sigma and N) is now available by passing **auto** to both N and **--noise_est**.
   - This option is also the new default now.
```
docs/conf.py: 6 changes (3 additions, 3 deletions)

```diff
@@ -51,17 +51,17 @@

 # General information about the project.
 project = 'NLSAM'
-copyright = '2020, Samuel St-Jean'
+copyright = '2023, Samuel St-Jean'
 author = 'Samuel St-Jean'

 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
 # The short X.Y version.
-version = '0.6.1'
+version = '0.7'
 # The full version, including alpha/beta/rc tags.
-release = '0.6.1'
+release = '0.7'

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
```
example/README.md: 6 changes (3 additions, 3 deletions)

```diff
@@ -47,16 +47,16 @@ There are 4 required command line inputs (their order is important) and a switch

 + The input dataset (dwi.nii.gz)
 + The output dataset (dwi_nlsam.nii.gz)
-+ The b-values file for our input dataset (bvals)
-+ The b-vectors file for our input dataset (bvecs)
++ The bvalues file for our input dataset (bvals)
++ The bvectors file for our input dataset (bvecs)
 + -m or --mask mask.nii.gz for the brain mask

 The bvals/bvecs files are needed for identifying the angular neighbors and we need to choose how many we want to denoise at once (the default is now 5).

 Using a larger number could mean more blurring if we mix q-space points which are too far apart.

 For a multishell acquisition, only the direction (as opposed to the norm)
-of the b-vector is taken into account, so you can freely mix dwi from different
+of the bvector is taken into account, so you can freely mix dwi from different
 shells to favor picking radial decay in the denoising.

 There is also a new option called **--split_shell** to process each shell by itself separately and **--split_b0s** to process the b0s separately in each block.
```
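Putting the four positional inputs and the mask switch together, a typical invocation might look like the following. This is a sketch based only on the argument order described in the README excerpt above, using its example filenames:

```shell
# Positional arguments in order: input, output, bvals, bvecs,
# followed by the required brain mask via -m/--mask.
nlsam_denoising dwi.nii.gz dwi_nlsam.nii.gz bvals bvecs -m mask.nii.gz
```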
nlsam/smoothing.py: 4 changes (2 additions, 2 deletions)

```diff
@@ -23,7 +23,7 @@ def sh_smooth(data, bvals, bvecs, sh_order=4, b0_threshold=1.0, similarity_thres
     sh_order : int, default 8
         Order of the spherical harmonics to fit.
     similarity_threshold : int, default 50
-        All b-values such that |b_1 - b_2| < similarity_threshold
+        All bvalues such that |b_1 - b_2| < similarity_threshold
         will be considered as identical for smoothing purpose.
         Must be lower than 200.
     regul : float, default 0.006
@@ -56,7 +56,7 @@ def sh_smooth(data, bvals, bvecs, sh_order=4, b0_threshold=1.0, similarity_thres
         idx = np.abs(unique_bval - bvals) < similarity_threshold
         rounded_bvals[idx] = unique_bval

-    # process each b-value separately
+    # process each bvalue separately
     for unique_bval in np.unique(rounded_bvals):
         idx = rounded_bvals == unique_bval
```
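The grouping step visible in this hunk can be illustrated in isolation. The sketch below uses made-up b-values and mirrors only the thresholding logic shown in the excerpt, not NLSAM's full `sh_smooth` loop:

```python
import numpy as np

# Sketch of the b-value grouping in sh_smooth: b-values closer than
# similarity_threshold are collapsed onto a single representative value,
# so e.g. b=990, b=1000 and b=1010 end up treated as one shell.
bvals = np.array([0, 990, 1000, 1010, 2000])
similarity_threshold = 50

rounded_bvals = bvals.copy()
for unique_bval in np.unique(bvals):
    idx = np.abs(unique_bval - bvals) < similarity_threshold
    rounded_bvals[idx] = unique_bval

# The three middle b-values now share one rounded value,
# leaving three distinct shells overall.
print(np.unique(rounded_bvals))
```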
pyproject.toml: 6 changes (2 additions, 4 deletions)

```diff
@@ -9,10 +9,8 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "nlsam"
-version = '0.6.1'
-authors = [
-    {name = "Samuel St-Jean"},
-]
+version = '0.7'
+authors = [{name = "Samuel St-Jean"}]
 description='Implementation of "Non Local Spatial and Angular Matching : Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising"'
 readme = "README.md"
 requires-python = ">=3.7"
```
scripts/nlsam_denoising: 4 changes (2 additions, 2 deletions)

```diff
@@ -86,14 +86,14 @@ def buildArgsParser():
                            'This is now a required input to prevent sampling (and reconstructing) background noise instead of the data.')

     optionals.add_argument('--b0_threshold', metavar='int', default=10, type=int,
-                           help='Lowest b-value to be considered as a b0. Default 10')
+                           help='Lowest bvalue to be considered as a b0. Default 10')

     optionals.add_argument('--split_b0s', action='store_true',
                            help='If set and multiple b0s are present, they are split amongst the '
                                 'training data.')

     optionals.add_argument('--split_shell', action='store_true',
-                           help='If set, each shell/b-value is processed separately by itself.')
+                           help='If set, each shell/bvalue is processed separately by itself.')

     optionals.add_argument('--bval_threshold', metavar='int', default=25, type=int,
                            help='Any bvalue within += bval_threshold of each others will be considered on the same shell (e.g. b=990 and b=1000 are on the same shell). Default 25')
```
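The optionals touched by this diff can be reproduced in a standalone parser to see how they behave. This is a minimal sketch covering only the four arguments visible in the excerpt, not the full nlsam_denoising parser:

```python
import argparse

# Minimal reconstruction of the optional arguments from the excerpt above.
parser = argparse.ArgumentParser()
parser.add_argument('--b0_threshold', metavar='int', default=10, type=int,
                    help='Lowest bvalue to be considered as a b0. Default 10')
parser.add_argument('--split_b0s', action='store_true',
                    help='Split multiple b0s amongst the training data.')
parser.add_argument('--split_shell', action='store_true',
                    help='Process each shell/bvalue separately by itself.')
parser.add_argument('--bval_threshold', metavar='int', default=25, type=int,
                    help='bvalues within bval_threshold of each other share a shell. Default 25')

# Flags not passed on the command line keep their defaults.
args = parser.parse_args(['--split_shell', '--b0_threshold', '20'])
print(args.b0_threshold, args.split_shell, args.bval_threshold)  # 20 True 25
```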
