
Commit 6bc69b2

Update README.md
1 parent 55bfb29

File tree: 1 file changed, +12 -11 lines


README.md

Lines changed: 12 additions & 11 deletions
````diff
@@ -1,17 +1,18 @@
-# MaskGAN for Unpaired MR-to-CT Translation
+# MaskGAN for Unpaired MR-to-CT Translation [MICCAI2023]
 
 [![arXiv](https://img.shields.io/badge/arXiv-2311.12437-blue)](https://arxiv.org/pdf/2307.16143) [![cite](https://img.shields.io/badge/cite-BibTex-yellow)](cite.bib)
 
-## Updates!
+## 📢 Updates!
 
 * New publication is coming out! **Mixed-view Refinement MaskGAN: Anatomical Preservation for Unpaired MRI-to-CT Synthesis**. Stay tuned!
 * **Refinement MaskGAN version**: An extended model, which refines images with a simple yet effective multi-stage, multi-plane approach, is developed to improve the volumetric definition of synthetic images.
 * **Model enhancements**: We include selection strategies to choose similar MRI/CT matches based on the position of slices.
 
-## MaskGAN Framework
+## 🏆 MaskGAN Framework
 
-A novel unsupervised MR-to-CT synthesis method that preserves the anatomy under the explicit supervision of coarse masks without using costly manual annotations. MaskGAN bypasses the need for precise annotations, replacing them with standard (unsupervised) image processing techniques, which can produce coarse anatomical masks.
-Such masks, although imperfect, provide sufficient cues for MaskGAN to capture anatomical outlines and produce structurally consistent images.
+A novel unsupervised MR-to-CT synthesis method that
+- preserves the anatomy under the explicit supervision of coarse masks without costly manual annotations. MaskGAN bypasses the need for precise annotations, replacing them with standard (unsupervised) image processing techniques, which can produce coarse anatomical masks;
+- uses a shape-consistency loss to preserve the overall structure of images after a cycle of translation.
 
 ![Framework](./imgs/maskgan_v2.svg)
 
````
````diff
@@ -22,7 +23,7 @@ Such masks, although imperfect, provide sufficient cues for MaskGAN to capture a
 The repository offers the official implementation of our paper in PyTorch.
 
 
-## Installation
+## 🛠️ Installation
 ### Option 1: Directly use our Docker image
 - We have created a public docker image `stevephan46/maskgan:d20b79d4731210c9d287a370e37b423006fd1425`.
 - Script to pull docker image and run docker container for environment setup:
````
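The setup script referenced above might look like the following. Only the image tag and the visible part of the run command are taken from this page; the container mount target is truncated in the diff (`:/d...`), so the value used here is an explicit assumption.

```shell
# Pull the public image (tag taken from the README above).
docker pull stevephan46/maskgan:d20b79d4731210c9d287a370e37b423006fd1425

# Run a GPU container with the dataset mounted. The flags mirror the run
# command shown in the diff; the mount target after the colon is truncated
# there, so "/data" below is an assumption.
docker run --name maskgan --gpus all --shm-size=16g -it \
  -v /path/to/data/root:/data \
  stevephan46/maskgan:d20b79d4731210c9d287a370e37b423006fd1425
```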
````diff
@@ -42,10 +43,10 @@ docker run --name maskgan --gpus all --shm-size=16g -it -v /path/to/data/root:/d
 pip install -r requirements.txt
 ```
 
-## Dataset Preparation and Mask Generations
+## 📚 Dataset Preparation and Mask Generations
 Refer to [preprocess/README.md](./preprocess/README.md) file.
 
-## MaskGAN Training and Testing
+## 🚀 MaskGAN Training and Testing
 - Sampled training script is provided in train.sh
 - Modify image augmentations as needed `--load_size` (resize one dimension to be a fixed size), `--pad_size` (pad both dimensions to an equal size), `--crop_size` (crop both dimensions to an equal size).
 - Train a model:
````
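The training call wrapped by train.sh might look roughly like the sketch below. Only the augmentation flags (`--load_size`, `--pad_size`, `--crop_size`) and the experiment-name flag `--name` are documented on this page; the entry point `train.py`, and all the values, are assumptions patterned on the CycleGAN-style codebase this repository builds on.

```shell
# Hypothetical training call; see train.sh for the repository's actual script.
# --load_size resizes one dimension to a fixed size, --pad_size pads both
# dimensions to an equal size, --crop_size crops both to an equal size.
python train.py --name exp_name --load_size 288 --pad_size 300 --crop_size 256
```

The value passed to `--name` is reused later by test.py and eval.sh to locate results.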
````diff
@@ -69,10 +70,10 @@ sh test.sh
 ```
 - The results will be saved at `./results/exp_name`. Use `--results_dir {directory_path_to_save_result}` to specify the results directory. There will be four folders `fake_A`, `fake_B`, `real_A`, `real_B` created in `results`.
 
-## Evaluate results
+## 🔍 Evaluate results
 - The script `eval.sh` is provided as an example. Modify the variable `exp_name` to match your experiment name specified by parameter `--name` when running test.py.
 
-## Citation
+## 📜 Citation
 If you use this code for your research, please cite our papers.
 
 ```
````
````diff
@@ -84,6 +85,6 @@ If you use this code for your research, please cite our papers.
 }
 ```
 
-## Acknowledgments
+## 🙏 Acknowledgments
 This source code is inspired by [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix) and [AttentionGAN](https://github.com/Ha0Tang/AttentionGAN).
 
````