
Medical Meta Learner

(Please note: badges are not yet finalized and are currently being prepared for the GitHub transition)



About

mml is a research-oriented Python package that aims to provide an easy and scalable way of performing deep learning on multiple image tasks (see Meta-Learning).

It features:

  • a clear methodology to store, load, reference, modify and combine RGB image datasets across task types (classification, segmentation, ...)
  • a highly configurable CLI for the full deep learning pipeline
  • a dedicated file management system, capable of continuing aborted experiments, reusing previous results and parallelizing runs
  • an API for interactive pre- and post-experiment exploration
  • smooth integration of the latest deep learning libraries (lightning, hydra, optuna, ...)
  • easy expandability via plugins or directly hooking into runtime objects via scripts or notebooks
  • good documentation, broad testing and ambitious goals

Please read the official documentation page for more details.

Main author: Patrick Godau, Deutsches Krebsforschungszentrum (DKFZ) Heidelberg, Division of Intelligent Medical Systems

Contact: patrick.godau@dkfz-heidelberg.de

Setup

Create a virtual environment (e.g. using conda) as follows:

conda create -n mml python=3.10
conda activate mml

Now install the core of mml via

pip install mml-core
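
If you want to double-check which version was installed, pip itself can tell you (a generic pip command, not specific to mml):

pip show mml-core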

Plugins

Plugins extend mml functionality. See here for a list of available plugins. They are installed exactly like mml-core above; simply replace mml-core in the pip command with the plugin you want to install. Note that some plugins require additional setup steps; check the specific plugin's README for details.
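
For instance, a plugin install would look like the following, where mml-some-plugin is only a placeholder name; substitute the actual plugin package from the list linked above:

pip install mml-some-plugin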

Local environment variables

mml relies on an mml.env file for relevant environment variables. There are multiple options for where to place this file:

  • within your project folder (e.g. for separation of mml installations),
  • within your home folder or similar (e.g. for shared mml configs across installations)

Run mml-env-setup from the command line at the location where you want to place your mml.env file:

cd /path/to/your/config/location
mml-env-setup

Now you only need to point mml to your mml.env file. This is done via the environment variable MML_ENV_PATH, which needs to be present in the environment before starting mml. If you use conda, this simplifies to

conda env config vars set MML_ENV_PATH=/path/to/your/config/location/mml.env
# if your file is located at the current working directory, you may instead use
# pwd | conda env config vars set MML_ENV_PATH=$(</dev/stdin)/mml.env
# either way this requires re-activation of environment
conda activate mml
# test if the path is set
echo $MML_ENV_PATH
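
If you do not use conda, exporting the variable in your shell (e.g. in your shell profile) works as well; this is a generic shell sketch, not an mml-specific command:

export MML_ENV_PATH=/path/to/your/config/location/mml.env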

You should see your path printed. If so, continue by providing the actual variables:

  • open mml.env in your preferred editor (a minimal example is sketched after this list)
  • set MML_DATA_PATH to the path where downloaded or generated datasets should be stored later on
  • set MML_RESULTS_PATH to the location where your experiments should be saved later on (plots, trained network parameters, calculated distances, etc.)
  • set MML_LOCAL_WORKERS to the number of usable (virtual) CPU cores
  • all other variables are optional for now
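
As an illustration, an edited mml.env could contain entries like the following (paths and worker count are placeholders; the exact set of variables is given by the template that mml-env-setup generated):

MML_DATA_PATH=/path/to/your/data/storage
MML_RESULTS_PATH=/path/to/your/results/storage
MML_LOCAL_WORKERS=8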

Confirm installation

You can confirm that mml was installed successfully by running mml in the terminal, which should display the MML logo.
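
For example, with the environment active:

conda activate mml
# a bare invocation should print the MML logo
mml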

License

This library is licensed under the permissive MIT license, which is fully compatible with both academic and commercial applications.

Copyright German Cancer Research Center (DKFZ) and contributors. Please make sure that your usage of this code is in compliance with its license.

This project is/was supported by: [supporter logos]

If you use this code in a research paper, please cite:

@InProceedings{Godau2021TaskF,
    author="Godau, Patrick and Maier-Hein, Lena",
    editor="de Bruijne, Marleen and Cattin, Philippe C. and Cotin, St{\'e}phane and Padoy, Nicolas and Speidel, Stefanie and Zheng, Yefeng and Essert, Caroline",
    title="Task Fingerprinting for Meta Learning inBiomedical Image Analysis",
    booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2021",
    year="2021",
    publisher="Springer International Publishing",
    pages="436--446"
}
