
Commit 6e6cb22

Code Base Update (#421)

* rename ndmg to m2g
* strip down requirements
* update to run on python3 not on specific version
* add lecture
* reorg tutorials
* update readme
* Update docs
* add disc data
* Figure 3
* Update
* black
* figure 4
* Figure 4 data
* Header for figure 4
* runner
* add description
* Change depth
* Change title
* finish diffusion
* update pipelines
* Update
* finish pipeline
* remove files
* add
* Try new dockerfile
* Remove unused imports
* update
* update
* update
* update
* update
* Add setup.cfg for update compatibility
* Remove dockerfile

1 parent 59f9519 · commit 6e6cb22


60 files changed: +46796 −1042 lines

README.md

+23-380
Large diffs are not rendered by default.

docs/Makefile

-19
This file was deleted.

docs/README.md

Whitespace-only changes.

docs/_static/diff_mapped_atlas.png (94.4 KB)

docs/_static/diff_skullstip.png (214 KB)

docs/_static/func_motion_plot.png (52.9 KB)

docs/_static/func_skullstrip.png (219 KB)

docs/_static/m2g_pipeline.png (1.39 MB)

docs/_static/qa-d/connectome.png (39.9 KB)

docs/_static/qa-d/skullstrip.png (214 KB)

docs/_static/qa-d/tractography.png (129 KB)

docs/_static/qa-f/func_skullstrip.png (219 KB)

docs/conf.py (+3)

@@ -53,6 +53,9 @@
     # "toctree_filter",
 ]
 
+# nbsphinx
+nbsphinx_execute = "never"
+
 # -- numpydoc
 # Below is needed to prevent errors
 numpydoc_show_class_members = False
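The added setting above tells nbsphinx to ship notebooks as-is rather than executing them at build time. For context, the relevant part of a Sphinx ``conf.py`` might look like the sketch below; the extension entries other than ``nbsphinx`` are illustrative, not taken from this commit:

```python
# docs/conf.py (sketch) -- context for the nbsphinx setting added above
extensions = [
    "sphinx.ext.autodoc",  # illustrative; the real extension list is longer
    "nbsphinx",            # renders Jupyter notebooks as documentation pages
]

# Never execute notebooks during the Sphinx build; render stored outputs only.
# Useful when notebooks need heavy data or tools unavailable on the docs builder.
nbsphinx_execute = "never"
```

Valid values for ``nbsphinx_execute`` are ``"always"``, ``"auto"`` (the default, which executes only notebooks without stored outputs), and ``"never"``.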

docs/diffusion.rst

+132-27
Large diffs are not rendered by default.

docs/docker.rst (+39, new file)

.. include:: links.rst


Docker
------

You can learn more about Docker and how to install it in the `Docker installation`_ documentation.
Please make sure you follow the Docker installation instructions. You can check your Docker installation by running their ``hello-world`` image:

.. code-block:: bash

    $ docker run --rm hello-world

If your installation works, you should see the following output:

.. code-block::

    Hello from Docker!
    This message shows that your installation appears to be working correctly.

    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
     4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.

    To try something more ambitious, you can run an Ubuntu container with:
     $ docker run -it ubuntu bash

    Share images, automate workflows, and more with a free Docker ID:
     https://hub.docker.com/

    For more examples and ideas, visit:
     https://docs.docker.com/get-started/

After confirming that your Docker Engine can run Docker images, you are ready to pull the ``m2g`` container image.
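For scripted environment checks (for example in CI), the installation test above can be approximated from Python. This is a sketch, not part of m2g; ``docker_available`` and ``pull_command`` are hypothetical helpers, and ``docker_available`` only checks for the binary, not a running daemon:

```python
import shutil

def docker_available() -> bool:
    """True if a `docker` executable is on PATH (does not verify the daemon is running)."""
    return shutil.which("docker") is not None

def pull_command(image: str = "neurodata/m2g:latest") -> list:
    """Build the pull command shown in the docs as an argv list for subprocess.run."""
    return ["docker", "pull", image]

if __name__ == "__main__":
    if docker_available():
        print("docker found; try:", " ".join(pull_command()))
    else:
        print("docker not on PATH; see the Docker installation docs")
```

Passing an argv list (rather than a shell string) to ``subprocess.run`` avoids shell-quoting issues when the image name is parameterized.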

docs/functional.rst

+226-1
Large diffs are not rendered by default.

docs/index.rst (+4 −3)

@@ -20,17 +20,18 @@ Join us on `GitHub <https://github.com/neurodata/m2g>`_.
 
 .. toctree::
    :caption: Documentation
-   :maxdepth: 3
+   :maxdepth: 1
 
    install
+   usage
    diffusion
    functional
+   paper/index
 
 .. toctree::
    :maxdepth: 1
    :caption: Useful Links
 
    m2g @ GitHub <http://www.github.com/neurodata/m2g/>
    m2g @ PyPI <https://pypi.org/project/m2g/>
-   Issue Tracker <https://github.com/neurodata/m2g/issues>
-   NeuroData Lab <http://neurodata.io/>
+   m2g lecture <https://neurodata.io/talks/ndmg.pdf>

docs/install.rst (+82 −150)

@@ -1,177 +1,109 @@

Removed:

-******************
-Install
-******************
-
-Summary
-===================
-
-Pull docker container::
-
-    docker pull neurodata/m2g
-
-Run dmri participant pipeline::
-
-    docker run -ti -v /path/to/local/data:/data neurodata/m2g /data/ /data/outputs
-
-System Requirements
-====================
-.. TODO: update package versions
-
-The m2g pipeline was developed and tested primarily on Mac OSX, Ubuntu (16, 18, 20, 22), and CentOS (5, 6);
-
-Made to work on Python 3.8;
-
-Is wrapped in a Docker container;
-
-Has install instructions via a Dockerfile;
-
-Requires no non-standard hardware to run;
-
-Has key features built upon FSL, Dipy, Nibabel, Nilearn, Networkx, Numpy, Scipy, Scikit-Learn, and others;
-
-Takes approximately 1-core, 8-GB of RAM, and 1 hour to run for most datasets.
-
-While m2g is quite robust to Python package versions (with only few exceptions, mentioned in the installation guide), an example of possible versions (taken from the m2g Docker Image with version v0.3.0) is shown below. Note: this list excludes many libraries which are standard with a Python distribution, and a complete list with all packages and versions can be produced by running pip freeze within the Docker container mentioned above. ::
-
-    awscli==1.16.210 , boto3==1.9.200 , botocore==1.12.200 , colorama==0.3.9 , configparser>=3.7.4 ,
-    Cython==0.29.13 , dipy==0.16.0 , duecredit==0.7.0 , fury==0.3.0 , graspy==0.0.3 , ipython==7.7.0 ,
-    matplotlib==3.1.1 , networkx==2.3 , nibabel==2.5.0 , nilearn==0.5.2 , numpy==1.17.0 , pandas==0.25.0,
-    Pillow==6.1.0 , plotly==1.12.9, pybids==0.6.4 , python-dateutil==2.8.0 , PyVTK==0.5.18 ,
-    requests==2.22.0 , s3transfer==0.2.1 , setuptools>=40.0 scikit-image==0.13.0 , scikit-learn==0.21.3 ,
-    scipy==1.3.0 , sklearn==8.0 , vtk==8.1.2
-
-Installation Guide
-==================
-.. TODO: add links to external packages
-
-Currently, the Docker image is recommended.
-
-``pip`` and Github installations are also available.
-
-Docker
---------------
-.. _Dockerhub : https://hub.docker.com/r/neurodata/m2g/
-.. _documentation : https://docs.docker.com/
-
-The neurodata/m3r-release Docker container enables users to run end-to-end connectome estimation on structural MRI or functional MRI right from container launch. The pipeline requires that data be organized in accordance with the BIDS spec. If the data you wish to process is available on S3 you simply need to provide your s3 credentials at build time and the pipeline will auto-retrieve your data for processing.
-
-If you have never used Docker before, it is useful to run through the Docker documentation_.
-
-**Getting Docker container**::
-
-    $ docker pull neurodata/m2g
-
-*(A) I do not wish to use S3:*
-
-You are good to go!
-
-*(B) I wish to use S3:*
-
-Add your secret key/access id to a file called credentials.csv in this directory on your local machine. A dummy file has been provided to make the format we expect clear. (This is how AWS provides credentials)
-
-**Processing Data**
-
-Below is the help output generated by running **m2g** with the ``-h`` command. All parameters are explained in this output. ::
-
-    $ docker run -ti neurodata/m2g -h
-
-    usage: m2g_bids [-h]
-                    [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
-                    [--session_label SESSION_LABEL [SESSION_LABEL ...]]
-                    [--run_label RUN_LABEL [RUN_LABEL ...]] [--bucket BUCKET]
-                    [--remote_path REMOTE_PATH] [--push_data] [--dataset DATASET]
-                    [--atlas ATLAS] [--debug] [--sked] [--skreg] [--vox VOX] [-c]
-                    [--mod MOD] [--tt TT] [--mf MF] [--sp SP] [--seeds SEEDS]
-                    [--modif MODIF]
-                    bids_dir output_dir
-
-    This is an end-to-end connectome estimation pipeline from M3r Images.
-
-    positional arguments:
-      bids_dir              The directory with the input dataset formatted
-                            according to the BIDS standard.
-      output_dir            The directory where the output files should be stored.
-                            If you are running group level analysis this folder
-                            should be prepopulated with the results of the
-                            participant level analysis.
-
-    optional arguments:
-      -h, --help            show this help message and exit
-      --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
-                            The label(s) of the participant(s) that should be
-                            analyzed. The label corresponds to
-                            sub-<participant_label> from the BIDS spec (so it does
-                            not include "sub-"). If this parameter is not provided
-                            all subjects should be analyzed. Multiple participants
-                            can be specified with a space separated list.
-      --session_label SESSION_LABEL [SESSION_LABEL ...]
-                            The label(s) of the session that should be analyzed.
-                            The label corresponds to ses-<participant_label> from
-                            the BIDS spec (so it does not include "ses-"). If this
-                            parameter is not provided all sessions should be
-                            analyzed. Multiple sessions can be specified with a
-                            space separated list.
-      --run_label RUN_LABEL [RUN_LABEL ...]
-                            The label(s) of the run that should be analyzed. The
-                            label corresponds to run-<run_label> from the BIDS
-                            spec (so it does not include "task-"). If this
-                            parameter is not provided all runs should be analyzed.
-                            Multiple runs can be specified with a space separated
-                            list.
-      --bucket BUCKET       The name of an S3 bucket which holds BIDS organized
-                            data. You must have built your bucket with credentials
-                            to the S3 bucket you wish to access.
-      --remote_path REMOTE_PATH
-                            The path to the data on your S3 bucket. The data will
-                            be downloaded to the provided bids_dir on your
-                            machine.
-      --push_data           flag to push derivatives back up to S3.
-      --dataset DATASET     The name of the dataset you are performing QC on.
-      --atlas ATLAS         The atlas being analyzed in QC (if you only want one).
-      --debug               If False, remove any old files in the output
-                            directory.
-      --sked                Whether to skip eddy correction if it has already been
-                            run.
-      --skreg               whether or not to skip registration
-      --vox VOX             Voxel size to use for template registrations (e.g.
-                            default is '2mm')
-      -c, --clean           Whether or not to delete intermediates
-      --mod MOD             Deterministic (det) or probabilistic (prob) tracking.
-                            Default is det.
-      --tt TT               Tracking approach: local or particle. Default is
-                            local.
-      --mf MF               Diffusion model: csd or csa. Default is csd.
-      --sp SP               Space for tractography. Default is native.
-      --seeds SEEDS         Seeding density for tractography. Default is 20.
-      --modif MODIF         Name of folder on s3 to push to. If empty, push to a
-                            folder with m2g's version number.
-
-In order to share data between our container and the rest of our machine in Docker, we need to mount a volume. Docker does this with the -v flag. Docker expects its input formatted as: ``-v path/to/local/data:/path/in/container``. We'll do this when we launch our container, as well as give it a helpful name so we can locate it later on.
-
-**To run m2g on data** ::
-
-    docker run -ti -v /path/to/local/data:/data neurodata/m2g /data/ /data/outputs
-
-Pip
--------------
-
-m2g relies on FSL, Dipy, networkx, nibabel, numpy, scipy, scikit-learn, scikit-image, and nilearn. You should install FSL through the instructions on their website, then install the other Python dependencies with the following::
-
-    pip install m2g
-
-The only known packages which require a specific version are plotly and networkx, due to backwards-compatibility breaking changes.
-
-Installation shouldn't take more than a few minutes, but depends on your internet connection.
-
-Github
------------
-
-To install directly from Github, run::
-
-    git clone https://github.com/neurodata/m2g
-    cd m2g
-    python setup.py install

Added:

+.. include:: links.rst
+
+------------
+Installation
+------------
+
+.. contents:: Table of Contents
+
+There are two ways to install **m2g**:
+
+* using container technologies - `Docker Container`_ - (RECOMMENDED) or
+* within a manually prepared environment (for the ``m2g-d`` pipeline only).
+
+However, the manually prepared environment is not recommended, as it is more complex and error-prone.
+
+
+Docker Container
+================
+
+**m2g** is distributed as a Docker image, which is the recommended way to run the pipeline. See :doc:`Docker <./docker>`.
+
+The Docker image contains all the necessary software dependencies, and the pipeline is executed in a containerized environment.
+This ensures that the pipeline runs in a consistent environment, regardless of the host system.
+
+The most recent docker image can be pulled using::
+
+    $ docker pull neurodata/m2g:latest
+
+The image can then be used to create and run a container directly with the following command (plus any additional Docker options you may require, such as volume mounting)::
+
+    $ docker run -ti --entrypoint /bin/bash neurodata/m2g:latest
+
+**m2g** docker containers can also be built from m2g's Dockerfile::
+
+    $ git clone https://github.com/neurodata/m2g.git
+    $ cd m2g
+    $ docker build -t <imagename:uniquelabel> .
+
+Here "uniquelabel" can be whatever you wish to call this Docker image (for example, `m2g:latest`).
+Additional information about building Docker images can be found `here <https://docs.docker.com/engine/reference/commandline/image_build/>`_.
+Creating the Docker image should take several minutes if this is the first time you have used this Dockerfile.
+To create a docker container from the image and access it, use the following command to both create and enter the container::
+
+    $ docker run -it --entrypoint /bin/bash m2g:uniquelabel
+
+
+Manually Prepared Environment (for the ``m2g-d`` pipeline)
+==========================================================
+
+.. warning::
+
+    This method is not recommended! Please consider using containers.
+
+.. warning::
+
+    Without Docker, you can only run the ``m2g-d`` portion of the pipeline. ``m2g-f`` requires CPAC, which
+    also runs in a Docker container.
+
+Make sure all of **m2g**'s `External Dependencies`_ are installed.
+These tools must be installed and their binaries available in the
+system's ``$PATH``.
+A readable description of how such an environment can be set up
+is found in the `Dockerfile <https://github.com/neurodata/m2g/blob/deploy/Dockerfile>`_.
+
+In a working Python 3.8 (or newer) environment with ``pip`` installed,
+**m2g** can be installed with the usual command ::
+
+    $ python -m pip install m2g
+
+Check your installation with the ``--version`` argument ::
+
+    $ m2g --version
+
+
+External Dependencies
+---------------------
+
+**m2g** requires other neuroimaging software that is not handled by Python's packaging system (PyPI):
+
+- FSL_ (version 6.0.6.5)
+- ANTs_ (version 2.4.3)
+- AFNI_ (version 23.3.09)
+- `C3D <https://sourceforge.net/projects/c3d/>`_ (version 1.3.0)
+
+
+Requirements
+============
+
+Hardware Requirements
+---------------------
+
+The pipeline requires only 1 core and 16 GB of RAM, and takes approximately 1 hour to run for most datasets.
+
+Python Requirements
+-------------------
+
+The m2g pipeline was developed and tested for Python 3.8 through 3.10 on Linux and macOS systems, such as Ubuntu and CentOS.
+With `Docker execution`_, **m2g** can run on almost all systems that support Docker.
+
+While m2g is quite robust to Python package versions (with only a few exceptions, mentioned in the installation guide), an example of possible versions (taken from the m2g Docker Image with version v0.3.0) is shown below. ::
+
+    boto3==1.28.4
+    configparser>=3.7.4
+    dipy==0.16.0
+    graspologic>=3.3.0
+    networkx==2.3
+    nibabel==2.5.0
+    nilearn==0.5.2
+    numpy==1.17.0
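An installed environment can be compared against pins like those above with the standard-library ``importlib.metadata`` module. The sketch below is illustrative and not part of m2g; the ``PINS`` subset and the ``check_pins`` helper are hypothetical:

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical subset of the pinned requirements listed above; extend as needed.
PINS = {
    "networkx": "2.3",
    "numpy": "1.17.0",
}

def check_pins(pins):
    """Return {name: (installed_version_or_None, pinned_version)} for each pin."""
    report = {}
    for name, pinned in pins.items():
        try:
            report[name] = (version(name), pinned)
        except PackageNotFoundError:
            # Package is not installed in this environment.
            report[name] = (None, pinned)
    return report

if __name__ == "__main__":
    for name, (got, want) in check_pins(PINS).items():
        status = "OK" if got == want else "MISMATCH"
        print(f"{name}: installed={got} pinned={want} [{status}]")
```

This only checks exact ``==`` pins; range specifiers such as ``configparser>=3.7.4`` would need a real comparison, e.g. via the ``packaging`` library's ``SpecifierSet``.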

docs/links.rst (+23, new file)

.. _Nipype: https://nipype.readthedocs.io/en/latest/
.. _BIDS: https://bids.neuroimaging.io/
.. _`BIDS Derivatives`: https://bids-specification.readthedocs.io/en/stable/05-derivatives/01-introduction.html
.. _Installation: installation.html
.. _workflows: workflows.html
.. _FSL: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/
.. _ANTs: https://stnava.github.io/ANTs/
.. _FreeSurfer: https://surfer.nmr.mgh.harvard.edu/
.. _`submillimeter reconstruction`: https://surfer.nmr.mgh.harvard.edu/fswiki/SubmillimeterRecon
.. _`mri_robust_template`: https://surfer.nmr.mgh.harvard.edu/fswiki/mri_robust_template
.. _AFNI: https://afni.nimh.nih.gov/
.. _GIFTI: https://www.nitrc.org/projects/gifti/
.. _`Connectome Workbench`: https://www.humanconnectome.org/software/connectome-workbench.html
.. _`HCP Pipelines`: https://humanconnectome.org/software/hcp-mr-pipelines/
.. _`Docker Engine`: https://www.docker.com/products/container-runtime
.. _`Docker installation`: https://docs.docker.com/install/
.. _`Docker Hub`: https://hub.docker.com/r/nipreps/fmriprep/tags
.. _Singularity: https://github.com/singularityware/singularity
.. _SPM: https://www.fil.ion.ucl.ac.uk/spm/software/spm12/
.. _TACC: https://www.tacc.utexas.edu/
.. _tedana: https://github.com/me-ica/tedana
.. _`T2* workflow`: https://tedana.readthedocs.io/en/latest/generated/tedana.workflows.t2smap_workflow.html#tedana.workflows.t2smap_workflow  # noqa
.. _`citation boilerplate`: https://www.nipreps.org/intro/transparency/#citation-boilerplates
