Code for training semantic segmentation models on the PanNuke and Lizard datasets.
- Python 3.8+
- `pip install -r requirements.txt`
- Download all three folds of the PanNuke dataset from here.
- Extract all archives into one directory (e.g. `pannuke/`).
- To convert the `.npy` files to images, run `python scripts/process_pannuke.py -i path/to/pannuke`.
- Note: the original `masks.npy` and `types.npy` files for each fold are used for evaluation, i.e. `--test_target_path fold1/masks/fold1/masks.npy --test_types_path fold1/images/types.npy`.
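For orientation, a PanNuke `masks.npy` array stores one channel per nucleus class. A minimal sketch of collapsing those channels into a single per-pixel label map (the `(N, 256, 256, 6)` layout and the stand-in array are assumptions for illustration, not this repo's exact processing):

```python
import numpy as np

# Stand-in for np.load("fold1/masks/fold1/masks.npy"); the real array is
# assumed to be (N, 256, 256, 6): one channel per class.
masks = np.zeros((4, 256, 256, 6))
masks[:, 100:120, 100:120, 2] = 1.0  # fake some class-2 nuclei pixels

# Collapse the per-class channels into one semantic label map by taking
# the channel with the largest value at each pixel.
semantic = masks.argmax(axis=-1).astype(np.uint8)  # (N, 256, 256)
```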
- Download the patch-level Lizard dataset from here (registration required) and the label files from here.
- Extract all archives into one directory (e.g. `lizard/`).
- To convert the `.npy` files to images, run `python scripts/process_lizard.py -i path/to/lizard`.
- To generate target files, run `python scripts/create_targets_lizard.py -i path/to/processed/lizard`.
- These `.npy` files are used for evaluation (i.e. `--test_target_path targets-fold1.npy`).
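The target files pair each pixel with a class id. As an illustration of the idea (the exact format written by `create_targets_lizard.py` and the label field names are assumptions), a semantic target can be built from an instance map plus per-instance class labels:

```python
import numpy as np

# Hypothetical inputs: an instance map (0 = background, k = instance id)
# and each instance's class label, as Lizard-style label files provide.
inst_map = np.zeros((64, 64), dtype=np.int32)
inst_map[10:20, 10:20] = 1
inst_map[30:40, 30:40] = 2
inst_classes = {1: 3, 2: 5}  # instance id -> class id

# Paint each instance's pixels with its class id; background stays 0.
target = np.zeros_like(inst_map)
for inst_id, cls in inst_classes.items():
    target[inst_map == inst_id] = cls
```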
- To train from scratch on PanNuke, run:

```
python train.py --gpus 1 --precision 16 --name pannuke --model.n_classes 6 --max_epoch 100 \
    --data.train_fold "['data/pannuke/fold1/', 'data/pannuke/fold2']" \
    --data.test_fold data/pannuke/fold3 --test pannuke \
    --test_targets_path eval/targets/pannuke-f3.npy --test_types_path eval/types/pannuke-f3.npy
```
- `configs/` contains example configuration files, which can be run with `python train.py --config path/to/config/file`.
- Run `python train.py --help` for information on all options.
- This codebase uses the Segmentation Models PyTorch library, so `--model.arch` can be any of the architectures from here and `--model.encoder` can be any of the encoders from here.
- To generate pseudo-label masks for a dataset using a trained segmentation model, run:

```
python scripts/generate_masks.py -w path/to/segmentation/checkpoint -i path/to/image/directory \
    -o path/to/output/masks/directory --n_classes <number-of-classes>
```
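At its core, pseudo-label generation amounts to a per-pixel argmax over the model's class logits; a minimal numpy sketch (random stand-in logits, since the script's internals are an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.standard_normal((6, 256, 256))  # n_classes x H x W stand-in

# Per-pixel class id; one such mask would be saved per input image.
mask = logits.argmax(axis=0).astype(np.uint8)
```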
Evaluation code is taken from the official PanNuke evaluation repo.