This repo contains the code for ROI-FCN, an ROI-convolution-based FCN described in the following paper.
If you find it useful in your research, please consider citing:
```
@inproceedings{zhang2018end,
  title={End-to-end detection-segmentation network with ROI convolution},
  author={Zhang, Zichen and Tang, Min and Cobzas, Dana and Zonoobi, Dornoosh and Jagersand, Martin and Jaremko, Jacob L},
  booktitle={2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)},
  pages={1509--1512},
  year={2018},
  organization={IEEE}
}
```
It is largely based on the Faster R-CNN code. The key difference is that we add the ROI convolution layer in Caffe.
Code is provided as-is, no updates expected.
- Clone this repository:

  ```shell
  # Make sure to clone with --recursive
  git clone --recursive https://github.com/vincentzhang/roi-fcn.git
  ```

  If you didn't clone with the `--recursive` flag, you'll need to manually clone the `caffe-roi` submodule:

  ```shell
  git submodule update --init --recursive
  ```
- Build Caffe and pycaffe

  Note: Caffe must be built with support for Python layers!

  ```shell
  # ROOT refers to the directory that you cloned this repo into.
  cd $ROOT/caffe-roi
  # Now follow the Caffe installation instructions here:
  #   http://caffe.berkeleyvision.org/installation.html
  # In your Makefile.config, make sure to have this line uncommented:
  WITH_PYTHON_LAYER := 1
  # Unrelatedly, it's also recommended that you use cuDNN:
  USE_CUDNN := 1
  # Compile
  make -j8 && make pycaffe
  ```

  You can download my Makefile.config for reference.
- Build the Cython modules:

  ```shell
  cd $ROOT/lib
  make
  ```
- Download the ImageNet pre-trained VGG16 weights (adapted to be fully convolutional):

  ```shell
  cd $ROOT/data/scripts
  ./fetch_vgg16_fcn.sh
  ```

  This will populate the `$ROOT/data/imagenet_models` folder with `VGG16.v2.fcn-surgery-all.caffemodel`.
To run the demo, first download the pretrained weights:

```shell
cd $ROOT/data/scripts
./fetch_socket_models.sh
```

Then run the demo script:

```shell
cd $ROOT
python ./tools/demo.py
```
The demo runs the segmentation network trained on the acetabulum data used in the paper.
To show the generalization of the algorithm, the input images stored in $ROOT/data/samples
are anonymized clinical images that are not in the training or testing dataset.
We are not allowed to share the dataset due to privacy restrictions, but we provide the workflow for training on your own dataset and list the key files that need to be modified:
- Entry point: a bash script in the `experiments` directory that specifies some hyperparameters. Example:

  ```shell
  ./experiments/scripts/socket_scratch_n_1e-4_fg150_roils_end2end.sh 0 VGG16 socket
  ```
- Most of the config files that specify the Caffe solver and network would not need many changes, but you would need to write your own data loader, following `lib/datasets/socket.py` as an example. The function `gt_roidb()` generates or loads a NumPy file of the ground-truth bounding boxes, which you would need to create offline beforehand.
- Create symlinks for your dataset:

  ```shell
  cd $ROOT/data
  ln -s SOURCE_PATH_TO_YOUR_DATA TARGET_PATH
  ```
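As a starting point for the offline ground-truth file mentioned above, here is a minimal sketch of how such boxes could be serialized with NumPy. The field names (`boxes`, `gt_classes`) follow the common py-faster-rcnn roidb convention and are an assumption; check `lib/datasets/socket.py` for the exact format `gt_roidb()` expects.

```python
# Hypothetical sketch: build and save ground-truth bounding boxes offline.
# Field names ('boxes', 'gt_classes') are assumptions borrowed from the
# py-faster-rcnn roidb convention; adapt to what socket.py actually reads.
import numpy as np

def make_gt_entry(boxes_xyxy, class_ids):
    """One roidb-style entry for a single image."""
    boxes = np.asarray(boxes_xyxy, dtype=np.uint16)     # (N, 4): x1, y1, x2, y2
    gt_classes = np.asarray(class_ids, dtype=np.int32)  # (N,) class index per box
    assert boxes.shape[0] == gt_classes.shape[0]
    return {'boxes': boxes, 'gt_classes': gt_classes}

# Example: two images, one foreground box each (dummy coordinates).
roidb = [
    make_gt_entry([[10, 20, 100, 120]], [1]),
    make_gt_entry([[30, 40, 90, 110]], [1]),
]

# Save so a gt_roidb()-style loader can read it back later.
np.save('gt_roidb.npy', np.array(roidb, dtype=object), allow_pickle=True)
loaded = np.load('gt_roidb.npy', allow_pickle=True)
print(len(loaded))  # 2
```

The entries are kept as plain dicts of arrays so the file stays easy to inspect and regenerate when annotations change.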
The following command runs the trained models on the entire test dataset:

```shell
./experiments/scripts/test_socket_scratch_n_1e-4_fg150_roils.sh test all 4586 1 16 1
```
For more information, please see the inline documentation in the code.