High-throughput bright-field and fluorescence microscopy analysis for bacteria
BacFluoMap is a Python-based tool designed for the high-throughput analysis of phase-contrast and fluorescence microscopy images of bacteria. It integrates deep learning models for instance segmentation and object classification, enabling researchers to extract meaningful insights from microscopy data under various experimental conditions.
- Instance Segmentation: Utilizes a 3-class U-Net model for segmentation of bacterial cells.
- Object Classification: Employs a YOLO model to classify bacterial growth stages (e.g., dividing, rod-shaped, intermediate).
- Fluorescence Intensity Profiling: Extracts fluorescence intensity profiles along the long axis of bacterial cells.
- Customizable Analysis: Supports multiple algorithms for axis determination, including skeletonization, PCA, and image moments (see the sketch after this list).
- High-Throughput: Optimized for processing large datasets with parallelization support.
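For illustration, the snippet below sketches how PCA can determine a cell's long axis and how a max-reduced intensity profile can be read along it. This is a minimal, standalone NumPy example, not BacFluoMap's internal implementation; the function name and arguments are hypothetical.

```python
import numpy as np

def pca_long_axis_profile(mask, fluorescence, n_bins=50, reduction=np.max):
    """Illustrative sketch: project a single cell's fluorescence onto its PCA long axis."""
    ys, xs = np.nonzero(mask)                        # pixel coordinates of the segmented cell
    coords = np.column_stack([ys, xs]).astype(float)
    centered = coords - coords.mean(axis=0)          # center on the cell centroid
    # PCA: the eigenvector with the largest eigenvalue is the cell's long axis
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    long_axis = eigvecs[:, np.argmax(eigvals)]
    positions = centered @ long_axis                 # position of each pixel along the long axis
    values = fluorescence[ys, xs]
    # Bin positions along the axis and reduce each bin (e.g. max or mean) to a 1-D profile
    edges = np.linspace(positions.min(), positions.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(positions, edges) - 1, 0, n_bins - 1)
    profile = np.full(n_bins, np.nan)
    for i in range(n_bins):
        in_bin = bin_idx == i
        if in_bin.any():
            profile[i] = reduction(values[in_bin])
    return profile
```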
- It is recommended to install Python 3.12.6 (download: Windows, Mac).
- Optional: Create a virtual environment to keep your libraries under control:
  - Windows:
    python -m venv .venv
    .venv\Scripts\activate
  - Mac/Linux:
    python3.12 -m venv .venv
    source .venv/bin/activate

  The console prompt should then start with (.venv). Don't forget to activate this environment each time you restart your console.
- Clone this repository into a folder:
  git clone https://github.com/CURTLab/BacFluoMap.git
- Change directory to the root of the cloned repository in the terminal:
  cd BacFluoMap
- Install BacFluoMap as an editable library:
  - Windows:
    python -m pip install .
  - Mac/Linux:
    python3.12 -m pip install .
- (Optional) For PyTorch with GPU support on Windows or Linux with an NVIDIA GPU, reinstall PyTorch 2.6.0 (a quick verification snippet follows these commands):
  - CUDA 11.8:
    pip install --force-reinstall torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
  - CUDA 12.4:
    pip install --force-reinstall torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
  - CUDA 12.6:
    pip install --force-reinstall torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
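After reinstalling, you can verify that PyTorch actually sees the GPU with a short check:

```python
import torch

print(torch.__version__)            # should report 2.6.0 with the matching CUDA suffix
print(torch.cuda.is_available())    # True if the CUDA build and your driver are compatible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```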
- Pretrained Models: Pretrained YOLO and U-Net models required for analysis can be found in the initial release. These models should be copied to the models/ directory.
- Images for Figures: All images used for generating figures in the manuscript are included in the initial release. These images should be copied to the data/ directory.
Prepare your microscopy images in the supported format. BacFluoMap supports multi-channel images (e.g., phase-contrast and fluorescence channels).
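If both channels are stored in a single multi-page TIFF rather than as separate files, you can split them before building the dataset. The file name and channel order below are assumptions for illustration:

```python
from skimage.io import imread

# Hypothetical multi-channel stack; adjust the file name and channel order to your data
stack = imread("multichannel.tif")
print(stack.shape)        # e.g. (2, 1024, 1024) for a channel-first stack
phase_contrast = stack[0]
fluorescence = stack[1]
```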
Use the provided Python scripts to process your dataset. For example:
from bacfluomap import fluomap, dataset
from skimage.io import imread
phase_contrast = imread("phase_contrast.tif")
fluorescence = imread("fluorescence.tif")
# Create your dataset
ds = dataset.Dataset("simulation",
{"Phase contrast": [phase_contrast], "Fluorescence": [fluorescence]},
"../models/bacteria_long_cpu_v3.pt")
# Run fluorescence mapping
results = fluomap.calculate_fluomaps(
ds,
# Path to the YOLO model for classification of bacteria states
model_path="models/bac_YOLOv11m_v2.pt",
# Path directory to save the results
results_dir="output/",
# Classes of the YOLO model to use in the analysis (available: 'Microcolony', 'Dividing', 'Rod', 'Defocused', 'Intermediate'),
# but analysis on 'Microcolony' & 'Defocused' will fail!
selected_classes=['Dividing', 'Rod'],
# Reduction method for the fluorescence intensity profile
profile_reduction="max",
# Use the maximum number of workers available
max_workers=-1,
# Save single cell results sorted by class and segmented phase contrast images
save_figs=True
)
print(results)

BacFluoMap generates various outputs, including fluorescence intensity profiles, segmentation masks, and classification results. These can be visualized using the provided plotting utilities or external tools.
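As a quick visualization example, a single 1-D profile can be plotted with matplotlib. The profile array below is a random placeholder, since the exact structure of the returned results is not shown here:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder 1-D fluorescence profile; substitute a profile taken from your results
profile = np.random.rand(50)
positions = np.linspace(0, 1, len(profile))   # normalized position along the cell's long axis

plt.plot(positions, profile)
plt.xlabel("Normalized position along the long axis")
plt.ylabel("Fluorescence intensity (a.u.)")
plt.title("Single-cell fluorescence profile")
plt.show()
```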
The notebooks/ directory contains Jupyter notebooks for generating all fluorescence intensity profiles presented in the manuscript. These notebooks serve as a guide for reproducing the results and adapting the analysis to your own data.
To run these notebooks you can use:
- Visual Studio Code with the Jupyter/Python plugin. On Windows, Visual Studio Code can be started from the console using the command code . (the dot opens the current folder). Select the kernel (the Python version where this package is installed, or the created venv).
- JupyterLab/Jupyter Notebook: Follow these instructions and run jupyter lab from the root of the BacFluoMap folder in the console. This should open a browser window. Select the notebooks/ folder in the left sidebar, and then a notebook (file extension *.ipynb) of your choice to run.
BacFluoMap supports training of custom models for both instance segmentation and object classification. For training both the YOLO and U-Net models, a computer with at least 32 GB of RAM and an NVIDIA CUDA-capable GPU with at least 6 GB of VRAM is recommended.
Follow these steps to train on your own data:
- Annotate your images for object classification using a tool like COCO Annotator.
- Ensure your dataset is in the correct format (COCO JSON format for YOLO training). Run the Python script training/convert_coco.py to convert the COCO JSON format to the Ultralytics YAML structure. Ensure that your images are at the correct relative path for the COCO JSON file.
BacFluoMap/
│── training/
│   ├── Bacteria-20.json
│   ├── datasets/
│   │   ├── Bacteria/
│   │   │   ├── 0000.png
│   │   │   ├── 0001.png
│   │   │   ├── ...
Run the converter script and adapt the path to your data using the json_file argument:
python ./training/convert_coco.py --json_file=./training/Bacteria-20.json --save_dir=./training/yolo_dataset

Train the YOLO model for object classification using the provided script:
python training/train_yolo.py --data_path=./training/yolo_dataset/dataset.yaml

- Annotate your images for instance segmentation (U-Net) using a tool like Napari.
- Copy your annotated and phase-contrast images (phase-contrast images in png format and annotated images in tif format) into a train and a val folder (a 10% validation split is recommended; a minimal split script is sketched after the directory tree below).
BacFluoMap/
│── training/
│   ├── unet_dataset/
│   │   ├── train/
│   │   │   ├── HADA + MAC 8 min_000.png # phase-contrast
│   │   │   ├── HADA + MAC 8 min_000.tif # annotated
│   │   │   ├── ...
│   │   ├── val/
│   │   │   ├── HADA + MAC 8 min_023.png # phase-contrast
│   │   │   ├── HADA + MAC 8 min_023.tif # annotated
│   │   │   ├── ...
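A minimal sketch of how such a split could be produced from a folder containing all annotated pairs; the all/ source folder name and the copy-based approach are assumptions, only the paired .png/.tif layout and the 10% fraction follow the instructions above:

```python
import random
import shutil
from pathlib import Path

random.seed(0)
src = Path("training/unet_dataset/all")            # assumed folder holding every .png/.tif pair
train_dir = Path("training/unet_dataset/train")
val_dir = Path("training/unet_dataset/val")
train_dir.mkdir(parents=True, exist_ok=True)
val_dir.mkdir(parents=True, exist_ok=True)

pairs = sorted(src.glob("*.png"))                  # each phase-contrast .png has a matching .tif
random.shuffle(pairs)
n_val = max(1, int(0.1 * len(pairs)))              # roughly 10% validation split

for i, png in enumerate(pairs):
    dest = val_dir if i < n_val else train_dir
    shutil.copy(png, dest / png.name)              # copy the phase-contrast image
    tif = png.with_suffix(".tif")
    if tif.exists():
        shutil.copy(tif, dest / tif.name)          # copy the matching annotation
```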
Use the provided training script to train a 3-class U-Net model for instance segmentation:
python ./training/train_unet.py --train_path=./training/unet_dataset/train --val_path=./training/unet_dataset/val

Once trained, copy your custom models to the models/ directory and update the model_path parameter in your analysis scripts.
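For example, reusing the analysis call from the usage section above with custom weights; the file names below are placeholders for your own trained models:

```python
from bacfluomap import fluomap, dataset
from skimage.io import imread

phase_contrast = imread("phase_contrast.tif")
fluorescence = imread("fluorescence.tif")

# Point the dataset at the custom U-Net weights and the analysis at the custom YOLO weights
ds = dataset.Dataset("my_experiment",
                     {"Phase contrast": [phase_contrast], "Fluorescence": [fluorescence]},
                     "models/my_custom_unet.pt")      # placeholder for your trained U-Net
results = fluomap.calculate_fluomaps(ds,
                                     model_path="models/my_custom_yolo.pt",  # placeholder YOLO weights
                                     results_dir="output/")
```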
- https://github.com/qubvel-org/segmentation_models.pytorch (MIT license)
- https://github.com/wolny/pytorch-3dunet (MIT license)
If you use BacFluoMap in your research, please cite our publication: van ‘t Wout, M.F.L., Hauser, F., Holzapfel, P.I.P. et al. Bactericidal membrane attack complex formation initiates at the new pole of E. coli. EMBO Rep (2025). https://doi.org/10.1038/s44319-025-00669-1
This project is licensed under the AGPL-3.0 license.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.