VINNA4neonates is a network for the segmentation of human newborn brain MRI data.
The preferred way of installing and running VINNA4neonates is via Singularity or Docker containers. We provide pre-built images on Dockerhub.
We also provide information on a native install on some operating systems, but since dependencies may vary, this can produce results different from our testing environment and we may not be able to support you if things don't work. Our testing is performed on Ubuntu 20.04 via our provided Docker images.
Recommended System Spec: 8 GB system memory, NVIDIA GPU with 8 GB graphics memory.
Minimum System Spec: 8 GB system memory (this requires running VINNA4neonates on the CPU only, which is much slower).
Assuming you have Singularity installed already (by a system admin), you can easily build an image from our Dockerhub images. Run this command from a directory where you want to store Singularity images:
singularity build vinna4neonates-gpu.sif docker://deepmi/vinna4neonates:latest
Additionally, the Singularity README contains detailed directions for building your own Singularity images from Docker.
Our README explains how to run VINNA4neonates and you can find details on how to build your own images here: Docker and Singularity.
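As a hedged sketch (the README remains the reference for the exact call), a GPU run with the Singularity image could look roughly like this; the data/output paths, the subject ID, and the script location inside the image (/vinna4neonates) are assumptions:
singularity exec --nv \
  -B /home/user/my_mri_data:/data \
  -B /home/user/my_vinna4neonates_analysis:/output \
  ./vinna4neonates-gpu.sif \
  /vinna4neonates/run_vinna4neonates.sh \
  --t1 /data/subjectX/orig.mgz \
  --sid subjectX --sd /output
Here --nv makes the host NVIDIA driver and GPU available inside the container, and -B binds host folders into the container, analogous to Docker's -v.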
This is very similar to Singularity. Assuming you have Docker installed (by a system admin), you just need to pull one of our pre-built Docker images from Dockerhub:
docker pull deepmi/vinna4neonates:latest
Our README explains how to run VINNA4neonates and you can find details on how to build your own image.
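As a hedged sketch of what such a call can look like (mirroring the Windows examples further below; the mounted paths and subject ID are placeholders), a GPU run on Linux might be:
docker run --gpus all \
  -v /home/user/my_mri_data:/data \
  -v /home/user/my_vinna4neonates_analysis:/output \
  -v /home/user/my_fs_license_dir:/fs_license \
  --rm --user $(id -u):$(id -g) deepmi/vinna4neonates:latest \
  --fs_license /fs_license/license.txt \
  --t1 /data/subjectX/orig.mgz \
  --sid subjectX --sd /output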
In a native install you need to install all dependencies (distro packages, python dependencies, FastSurfer repo) yourself. Here we will walk you through what you need.
You will need a few additional packages that may be missing on your system (for this you need sudo access or ask a system admin):
sudo apt-get update && sudo apt-get install -y --no-install-recommends \
wget \
git \
ca-certificates \
file
If you are using Ubuntu 20.04, you will need to upgrade to a newer version of libstdc++, as some 'newer' python packages need GLIBCXX 3.4.29, which is not distributed with Ubuntu 20.04 by default.
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt install -y g++-11
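To check whether your libstdc++ already provides the required symbol version, you can, for example, list its exported GLIBCXX versions (the library path below is the usual one on 64-bit Ubuntu and may differ on your system):
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX_3.4.29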
You also need to have bash-4.0 or higher (check with bash --version).
You also need a working version of python3 (we recommend python 3.9 -- we do not support other versions). These packages should be sufficient to install python dependencies and then run the VINNA4neonates neural network segmentation.
If you are using pip, make sure pip is updated as older versions will fail.
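For example, to upgrade pip inside your active environment:
python3 -m pip install --upgrade pip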
We recommend installing conda as your python environment. If you don't have conda on your system, an admin needs to install it:
wget --no-check-certificate -qO ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py38_4.11.0-Linux-x86_64.sh
chmod +x ~/miniconda.sh
sudo ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh
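After the install, conda still needs to be made available in each user's shell; one common way, assuming the /opt/conda prefix used above, is to source the conda profile script (for example from your ~/.bashrc):
source /opt/conda/etc/profile.d/conda.sh
conda --version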
Get FastSurfer from GitHub into your specified directory $BASEDIR. Here you can decide if you want to install the current experimental "dev" version (which can be broken), the "stable" branch (latest version that has been tested thoroughly), or a specific tag (which we use here --> tag v2.2.0):
cd $BASEDIR
git clone --depth 1 --branch v2.2.0 https://github.com/Deep-MI/FastSurfer.git
Get VINNA4neonates from GitHub. Here you can decide if you want to install the current experimental "dev" version (which can be broken) or the "stable" branch (that has been tested thoroughly):
git clone --branch stable https://github.com/Deep-MI/VINNA4neonates.git
cd VINNA4neonates
Create a new environment and install VINNA4neonates dependencies:
conda env create -f ./env/vinna4neonates.yml
conda activate vinna4neonates
Next, add the VINNA4neonates and FastSurfer directory to the python path (make sure you have changed into VINNA4neonates already):
export PYTHONPATH="${PYTHONPATH}:$PWD:$BASEDIR/FastSurfer"
This will need to be done every time you want to run VINNA4neonates, or you can add this line to your ~/.bashrc if you are using bash, for example:
echo "export PYTHONPATH=\"\${PYTHONPATH}:$PWD:$BASEDIR/FastSurfer\"" >> ~/.bashrc
You can also download all network checkpoint files (this should be done if you are installing for multiple users):
python3 VINNA4neonatesCNN/download_checkpoints.py --vinna
Once all dependencies are installed, you are ready to run VINNA4neonates by calling ./run_vinna4neonates.sh ....; see the README for command line flags.
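As a hedged example of such a call (using the flags shown in the Docker examples further below; the paths, subject ID, and device choice are placeholders, and the README lists the full set of flags):
./run_vinna4neonates.sh \
  --t1 /data/subjectX/orig.mgz \
  --sid subjectX --sd /output \
  --device cpu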
Docker can be used on Intel Macs, as it should be about as fast as a native install there, and it allows you to run the full pipeline.
First, install Docker Desktop for Mac. Start it and set Memory to 15 GB under Preferences -> Resources (or the largest amount you have; with less than 15 GB it may fail).
Second, pull one of our Docker containers. Open a terminal window and run:
docker pull deepmi/vinna4neonates:latest
Continue with the example in our README.
On modern Macs with the Apple Silicon M1 or M2 ARM-based chips, we recommend a native installation as it runs much faster than Docker in our tests. The experimental support for the built-in AI Accelerator is also only available on native installations. Native installation also supports older Intel chips.
If you do not have git and a recent bash (version > 4.0 required!) installed, install them via the package manager, e.g. brew. This installs brew and then bash:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install bash
Make sure you use this bash and not the older one provided with MacOS!
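You can check which bash is picked up and its version, for example with:
which -a bash   # the brew-installed bash (usually /opt/homebrew/bin/bash or /usr/local/bin/bash) should come first
bash --version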
Create a python environment, activate it, and upgrade pip. Here we use pip, but you should also be able to use conda for python:
python3 -m venv $HOME/python-envs/vinna4neonates
source $HOME/python-envs/vinna4neonates/bin/activate
python3 -m pip install --upgrade pip
Get FastSurfer from GitHub into your specified directory $BASEDIR. Here you can decide if you want to install the current experimental "dev" version (which can be broken), the "stable" branch (latest version that has been tested thoroughly), or a specific tag (which we use here --> tag v2.2.0):
cd $BASEDIR
git clone --depth 1 --branch v2.2.0 https://github.com/Deep-MI/FastSurfer.git
cd FastSurfer
export PYTHONPATH="${PYTHONPATH}:$PWD"
Clone VINNA4neonates:
git clone --branch stable https://github.com/Deep-MI/VINNA4neonates.git
cd VINNA4neonates
export PYTHONPATH="${PYTHONPATH}:$PWD"
Install the VINNA4neonates requirements:
python3 -m pip install -r requirements.mac.txt
If this step fails, you may need to edit requirements.mac.txt and adjust version numbers to what is available.
On newer M1 Macs, we also had issues with the h5py package, which could be solved by installing hdf5 with brew first (this may no longer be necessary):
brew install hdf5
export HDF5_DIR="$(brew --prefix hdf5)"
pip3 install --no-binary=h5py h5py
You can also download all network checkpoint files (this should be done if you are installing for multiple users):
python3 VINNA4neonatesCNN/download_checkpoints.py --all
Once all dependencies are installed, run VINNA4neonates by calling bash ./run_vinna4neonates.sh .... with the appropriate command line flags; see the README.
Note: You may need to prepend the command with bash (i.e. bash run_vinna4neonates.sh <...>) to ensure that bash 4.0 is used instead of the system default.
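If you want to try the experimental accelerator support mentioned above, you can first check whether your PyTorch build exposes Apple's MPS backend, which is typically how the Apple accelerator is made available to PyTorch (how to select the device in VINNA4neonates is not covered here; see the README):
python3 -c "import torch; print(torch.backends.mps.is_available())"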
In order to run VINNA4neonates on your Windows system using Docker, make sure that you have WSL2 and Docker Desktop installed and running.
After everything is installed, start Windows PowerShell and run the following command to pull the CPU Docker image (check on Dockerhub which version tag is the most recent for the CPU):
docker pull deepmi/vinna4neonates:latest
Now you can run VINNA4neonates the same way as described in our README for the CPU build, for example:
docker run -v C:/Users/user/my_mri_data:/data \
-v C:/Users/user/my_vinna4neonates_analysis:/output \
-v C:/Users/user/my_fs_license_dir:/fs_license \
--rm --user $(id -u):$(id -g) deepmi/vinna4neonates:latest \
--fs_license /fs_license/license.txt \
--t1 /data/subjectX/orig.mgz \
--device cpu \
--sid subjectX --sd /output \
--parallel
Note the system requirement of at least 8 GB of RAM for the CPU version. If the process fails, check whether your WSL2 distribution has enough memory reserved.
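One common way to reserve more memory for WSL2 (a general WSL2 setting, not specific to VINNA4neonates; adjust the value to your machine) is a .wslconfig file in your Windows user profile folder (C:\Users\<your user>\.wslconfig), for example:
[wsl2]
memory=8GB
After changing it, restart WSL (e.g. wsl --shutdown in PowerShell) before retrying.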
This was tested using Windows 10 Pro version 21H1 and the WSL Ubuntu 20.04 distribution.
In addition to the requirements from the CPU version, you also need to make sure that you have:
- Windows 11 or Windows 10 21H2 or greater,
- the latest WSL Kernel or at least 4.19.121+ (5.10.16.3 or later for better performance and functional fixes),
- an NVIDIA GPU and the latest NVIDIA CUDA driver,
- CUDA toolkit installed on WSL, see: CUDA Support for WSL 2
Follow Enable NVIDIA CUDA on WSL to install the correct drivers and software.
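To confirm that the GPU is actually visible from inside your WSL distribution before running the container, you can, for example, open a WSL shell and run:
nvidia-smi
If this lists your GPU, the driver and WSL integration are set up correctly.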
After everything is installed, start Windows PowerShell and run the following command to pull the GPU Docker image:
docker pull deepmi/vinna4neonates:latest
Now you can run VINNA4neonates the same way as described in our README, for example:
docker run --gpus all \
-v C:/Users/user/my_mri_data:/data \
-v C:/Users/user/my_vinna4neonates_analysis:/output \
-v C:/Users/user/my_fs_license_dir:/fs_license \
--rm --user $(id -u):$(id -g) deepmi/vinna4neonates:latest \
--fs_license /fs_license/license.txt \
--t1 /data/subjectX/orig.mgz \
--sid subjectX --sd /output \
--parallel
Note the system requirements of at least 8 GB system memory and 2 GB graphics memory for the GPU version. If the process fails, check if your WSL2 distribution has enough memory reserved.