Commit 70149cf

docs: up readme add changelog
1 parent 5b67ac7 commit 70149cf

2 files changed: +114 −16 lines


CHANGELOG.md

+11 −0
@@ -1,4 +1,15 @@
 
+<a id='changelog-0.1.23'></a>
+# 0.1.23 — 2023-09-21
+
+## Docs
+
+- Add `CPP-Net` training example with the Pannuke dataset.
+
+## Features
+
+- Add `CPP-Net`. https://arxiv.org/abs/2102.06867
+
 <a id='changelog-0.1.23'></a>
 # 0.1.23 — 2023-09-19
 

README.md

+103 −16
@@ -27,7 +27,7 @@
 ## Features
 
 - High level API to define cell/nuclei instance segmentation models.
-- 5 cell/nuclei instance segmentation models and more to come.
+- 6 cell/nuclei instance segmentation models and more to come.
 - Open source datasets for training and benchmarking.
 - Pre-trained backbones/encoders from the [timm](https://github.com/huggingface/pytorch-image-models) library.
 - Pre-trained transformer backbones like [DinoV2](https://arxiv.org/abs/2304.07193) and [SAM](https://ai.facebook.com/research/publications/segment-anything/).
@@ -64,24 +64,107 @@ pip install cellseg-models-pytorch[all]
 | [[3](#Citation)] Omnipose | https://www.biorxiv.org/content/10.1101/2021.11.03.467199v2 |
 | [[4](#Citation)] Stardist | https://arxiv.org/abs/1806.03535 |
 | [[5](#Citation)] CellVit-SAM | https://arxiv.org/abs/2306.15350 |
+| [[6](#Citation)] CPP-Net | https://arxiv.org/abs/2102.06867 |
 
 ## Datasets
 
 | Dataset | Paper |
 | ----------------------------- | ------------------------------------------------------------------------------------------------ |
-| [[6, 7](#References)] Pannuke | https://arxiv.org/abs/2003.10778 , https://link.springer.com/chapter/10.1007/978-3-030-23937-4_2 |
-| [[8](#References)] Lizard | http://arxiv.org/abs/2108.11195 |
+| [[7, 8](#References)] Pannuke | https://arxiv.org/abs/2003.10778 , https://link.springer.com/chapter/10.1007/978-3-030-23937-4_2 |
+| [[9](#References)] Lizard | http://arxiv.org/abs/2108.11195 |
 
 ## Notebook examples
 
-- [Training Hover-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_hovernet.ipynb). Here we train the Hover-Net nuclei segmentation model with an `imagenet` pretrained `resnet50` backbone from the `timm` library. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/) (with checkpointing).
-- [Training Stardist with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist.ipynb). Here we train the Stardist multi-class nuclei segmentation model with an `imagenet` pretrained `efficientnetv2_s` backbone from the `timm` library. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
-- [Training CellPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose.ipynb). Here we train the CellPose multi-class nuclei segmentation model with an `imagenet` pretrained `convnext_small` backbone from the `timm` library. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by hugginface.
+<details>
+<summary style="margin-left: 25px;">Training Hover-Net with Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Training Hover-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_hovernet.ipynb). Here we train the `Hover-Net` nuclei segmentation model with an `imagenet`-pretrained `resnet50` backbone from the `timm` library. The Pannuke dataset (folds 1 & 2) is used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Training Stardist with Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Training Stardist with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_stardist.ipynb). Here we train the `Stardist` multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. The Pannuke dataset (folds 1 & 2) is used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Training CellPose with Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Training CellPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose.ipynb). Here we train the `CellPose` multi-class nuclei segmentation model with an `imagenet`-pretrained `convnext_small` backbone from the `timm` library. The Pannuke dataset (folds 1 & 2) is used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Training OmniPose with Pannuke</summary>
+<div style="margin-left: 25px;">
+
 - [Training OmniPose with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_omnipose.ipynb). Here we train the OmniPose multi-class nuclei segmentation model with an `imagenet` pretrained `focalnet_small_lrf` backbone from the `timm` library. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by hugginface.
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Training CPP-Net with Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Training CPP-Net with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cppnet.ipynb). Here we train the `CPP-Net` multi-class nuclei segmentation model with an `imagenet`-pretrained `efficientnetv2_s` backbone from the `timm` library. The Pannuke dataset (folds 1 & 2) is used as training data and fold 3 as validation data. The model is trained using [lightning](https://lightning.ai/docs/pytorch/latest/).
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Finetuning CellPose with DINOv2 backbone</summary>
+<div style="margin-left: 25px;">
+
 - [Finetuning CellPose with DINOv2 backbone for Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellpose_dinov2.ipynb). Here we finetune the CellPose multi-class nuclei segmentation model with a `LVD-142M` pretrained `DINOv2` backbone. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained (with checkpointing) by utilizing [lightning](https://lightning.ai/docs/pytorch/latest/).
-- [Finetuning CellVit-SAM with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellvit.ipynb). Here we finetune the CellVit-SAM multi-class nuclei segmentation model with a `SA-1B` pretrained SAM-image-encoder backbone (checkout [`SAM`](https://github.com/facebookresearch/segment-anything)). The encoder is transformer based `VitDet`-model. The Pannuke dataset (fold 1 & fold 2) are used for training data and the fold 3 is used as validation data. The model is trained (with checkpointing) by utilizing [accelerate](https://huggingface.co/docs/accelerate/index) by hugginface.
-- [Training CellPose with Lizard](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/lizard_nuclei_segmentation_cellpose.ipynb). Train the Cellpose model with Lizard dataset that is composed of varying sized images.
-- [Benchmarking Cellpose Trained on Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_cellpose_benchmark.ipynb). Benchmark Cellpose trained on Pannuke. Both the model performance and latency.
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Finetuning CellVit-SAM with Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Finetuning CellVit-SAM with Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_nuclei_segmentation_cellvit.ipynb). Here we finetune the CellVit-SAM multi-class nuclei segmentation model with a `SA-1B` pretrained `SAM` image-encoder backbone (check out [`SAM`](https://github.com/facebookresearch/segment-anything)). The encoder is a transformer-based `VitDet` model. The Pannuke dataset (folds 1 & 2) is used as training data and fold 3 as validation data. The model is trained (with checkpointing) using [accelerate](https://huggingface.co/docs/accelerate/index) by Hugging Face.
+
+</div>
+</details>
+
+
+<details>
+<summary style="margin-left: 25px;">Benchmarking Cellpose Trained on Pannuke</summary>
+<div style="margin-left: 25px;">
+
+- [Benchmarking Cellpose Trained on Pannuke](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/pannuke_cellpose_benchmark.ipynb). Here we benchmark the `Cellpose` model trained on Pannuke, covering both model performance and latency.
+
+</div>
+</details>
+
+<details>
+<summary style="margin-left: 25px;">Training CellPose with Lizard</summary>
+<div style="margin-left: 25px;">
+
+- [Training CellPose with Lizard](https://github.com/okunator/cellseg_models.pytorch/blob/main/examples/lizard_nuclei_segmentation_cellpose.ipynb). Here we train the `Cellpose` model with the Lizard dataset, which is composed of images of varying sizes. This example is old and might not be up to date.
+
+</div>
+</details>
+
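All of the notebooks above follow the same recipe: build a segmentation model with a `timm` backbone, train on Pannuke folds 1 & 2, and validate on fold 3. As a rough, self-contained sketch of that loop (synthetic tensors and a toy per-pixel classifier stand in for the real dataset and encoder-decoder model, which are not reproduced here):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-ins for the Pannuke folds: the notebooks use folds 1 & 2
# as training data and fold 3 as validation data.
train_ds = TensorDataset(torch.rand(8, 3, 32, 32), torch.randint(0, 6, (8, 32, 32)))
val_ds = TensorDataset(torch.rand(4, 3, 32, 32), torch.randint(0, 6, (4, 32, 32)))
train_loader = DataLoader(train_ds, batch_size=4, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=4)

# Toy per-pixel classifier (6 Pannuke type classes) in place of the real model.
model = nn.Conv2d(3, 6, kernel_size=1)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for imgs, targets in train_loader:  # one epoch over folds 1 & 2
    optim.zero_grad()
    loss = loss_fn(model(imgs), targets)
    loss.backward()
    optim.step()

model.eval()
with torch.no_grad():  # validate on fold 3
    val_loss = sum(loss_fn(model(i), t).item() for i, t in val_loader) / len(val_loader)
```

In the actual notebooks, [lightning](https://lightning.ai/docs/pytorch/latest/) or Hugging Face [accelerate](https://huggingface.co/docs/accelerate/index) wraps this loop and adds checkpointing.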
 
 ## Code Examples
 
@@ -260,10 +343,13 @@ With the function API, you can build models with low effort by calling the below
 | `csmp.models.stardist_base` | `"stardist"`, `"dist"` | **binary instance segmentation** |
 | `csmp.models.stardist_base_multiclass` | `"stardist"`, `"dist"`, `"type"` | **instance segmentation** |
 | `csmp.models.stardist_plus` | `"stardist"`, `"dist"`, `"type"`, `"sem"` | **panoptic segmentation** |
-| `csmp.models.cellvit_sam_base` | `"type"`, `"inst"`, `"hovernet"` | **instance segmentation** |
-| `csmp.models.cellvit_sam_plus` | `"type"`, `"inst"`, `"hovernet"`, `"sem"` | **panoptic segmentation** |
-| `csmp.models.cellvit_sam_small` | `"type"`,`"hovernet"` | **instance segmentation** |
-| `csmp.models.cellvit_sam_small_plus` | `"type"`, `"hovernet"`, `"sem"` | **panoptic segmentation** |
+| `csmp.models.cppnet_base` | `"stardist_refined"`, `"dist"` | **binary instance segmentation** |
+| `csmp.models.cppnet_base_multiclass` | `"stardist_refined"`, `"dist"`, `"type"` | **instance segmentation** |
+| `csmp.models.cppnet_plus` | `"stardist_refined"`, `"dist"`, `"type"`, `"sem"` | **panoptic segmentation** |
+| `csmp.models.cellvit_sam_base` | `"type"`, `"inst"`, `"hovernet"` | **instance segmentation** |
+| `csmp.models.cellvit_sam_plus` | `"type"`, `"inst"`, `"hovernet"`, `"sem"` | **panoptic segmentation** |
+| `csmp.models.cellvit_sam_small` | `"type"`,`"hovernet"` | **instance segmentation** |
+| `csmp.models.cellvit_sam_small_plus` | `"type"`, `"hovernet"`, `"sem"` | **panoptic segmentation** |
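The new `cppnet_*` builders in the table expose a `"stardist_refined"` output head. CPP-Net's core idea, refining StarDist-style radial distance maps by aggregating the distances predicted at points sampled along each ray, can be sketched self-containedly as follows. This is a simplified, unweighted illustration (the function name and sampling scheme are my own), not the library's actual implementation:

```python
import torch
import torch.nn.functional as F

def refine_rays(dist: torch.Tensor, n_samples: int = 3) -> torch.Tensor:
    """Refine radial distances `dist` of shape (B, K, H, W), where K rays leave
    each pixel at angles 2*pi*k/K, by averaging each pixel's own prediction
    with the distances predicted at points sampled along each ray."""
    B, K, H, W = dist.shape
    dev = dist.device
    angles = torch.arange(K, device=dev, dtype=torch.float32) * (2 * torch.pi / K)
    ys, xs = torch.meshgrid(
        torch.arange(H, device=dev, dtype=torch.float32),
        torch.arange(W, device=dev, dtype=torch.float32),
        indexing="ij",
    )
    refined = dist.clone()  # sample 0: the pixel's own prediction
    for m in range(1, n_samples + 1):
        frac = m / (n_samples + 1)  # fraction of the predicted ray length walked
        step = frac * dist          # (B, K, H, W) pixels travelled per ray
        sx = xs[None, None] + step * torch.cos(angles)[None, :, None, None]
        sy = ys[None, None] + step * torch.sin(angles)[None, :, None, None]
        gx = sx / (W - 1) * 2 - 1   # normalise coordinates for grid_sample
        gy = sy / (H - 1) * 2 - 1
        for k in range(K):
            grid = torch.stack([gx[:, k], gy[:, k]], dim=-1)  # (B, H, W, 2)
            sampled = F.grid_sample(
                dist[:, k:k + 1], grid,
                align_corners=True, padding_mode="border",
            )[:, 0]
            # distance seen from the sampled point plus the distance walked
            refined[:, k] += sampled + step[:, k]
    return refined / (n_samples + 1)
```

In the paper the aggregation weights are learned; a plain average stands in for that here.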
 
 ## References
 
@@ -272,9 +358,10 @@ With the function API, you can build models with low effort by calling the below
 - [3] Cutler, K. J., Stringer, C., Wiggins, P. A., & Mougous, J. D. (2022). Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. bioRxiv. doi:10.1101/2021.11.03.467199
 - [4] Uwe Schmidt, Martin Weigert, Coleman Broaddus, & Gene Myers (2018). Cell Detection with Star-Convex Polygons. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 - 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II (pp. 265–273).
 - [5] Hörst, F., Rempe, M., Heine, L., Seibold, C., Keyl, J., Baldini, G., Ugurel, S., Siveke, J., Grünwald, B., Egger, J., & Kleesiek, J. (2023). CellViT: Vision Transformers for Precise Cell Segmentation and Classification (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2306.15350.
-- [6] Gamper, J., Koohbanani, N., Benet, K., Khuram, A., & Rajpoot, N. (2019) PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification. In European Congress on Digital Pathology (pp. 11-19).
-- [7] Gamper, J., Koohbanani, N., Graham, S., Jahanifar, M., Khurram, S., Azam, A.,Hewitt, K., & Rajpoot, N. (2020). PanNuke Dataset Extension, Insights and Baselines. arXiv preprint arXiv:2003.10778.
-- [8] Graham, S., Jahanifar, M., Azam, A., Nimir, M., Tsang, Y.W., Dodd, K., Hero, E., Sahota, H., Tank, A., Benes, K., & others (2021). Lizard: A Large-Scale Dataset for Colonic Nuclear Instance Segmentation and Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 684-693).
+- [6] Chen, S., Ding, C., Liu, M., Cheng, J., & Tao, D. (2023). CPP-Net: Context-Aware Polygon Proposal Network for Nucleus Segmentation. IEEE Transactions on Image Processing, 32, 980–994. https://doi.org/10.1109/tip.2023.3237013
+- [7] Gamper, J., Koohbanani, N., Benet, K., Khuram, A., & Rajpoot, N. (2019). PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification. In European Congress on Digital Pathology (pp. 11-19).
+- [8] Gamper, J., Koohbanani, N., Graham, S., Jahanifar, M., Khurram, S., Azam, A., Hewitt, K., & Rajpoot, N. (2020). PanNuke Dataset Extension, Insights and Baselines. arXiv preprint arXiv:2003.10778.
+- [9] Graham, S., Jahanifar, M., Azam, A., Nimir, M., Tsang, Y.W., Dodd, K., Hero, E., Sahota, H., Tank, A., Benes, K., & others (2021). Lizard: A Large-Scale Dataset for Colonic Nuclear Instance Segmentation and Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 684-693).
 
 ## Citation
 