This is the implementation of the PI-DDPM network, using a basic UNet architecture as the backbone.
Follow these instructions to install the project:

- Clone the repository:

```bash
git clone https://github.com/casus/pi-ddpm.git
```

- Navigate to the project directory:

```bash
cd pi-ddpm
```

- Install the dependencies in your environment with pip and the provided requirements.txt file:

```bash
pip install -r requirements.txt
```
To run the project demo with simulated data, follow these instructions:
- Generate synthetic samples using the `generate_synthetic_sample` function with your desired parameters. Use the provided `metaballs` mode for simple figure generation without the need to download additional data for demo purposes.
- Store the generated PSFs and ground-truth images into `.npz` files (see the sketch below).
- If you want to use your own data, you only need to store the generated PSFs and the desired ground-truth data; the code will convolve the PSFs with your data to generate the simulated widefield/confocal images, as illustrated below.
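To illustrate what that simulation step amounts to, here is a self-contained numpy/scipy sketch of convolving a PSF with a ground-truth image. It shows the principle only, not the repository's actual code:

```python
# Each PSF is convolved with a ground-truth image to produce a simulated
# widefield or confocal image. Stand-in random arrays are used here.
import numpy as np
from scipy.signal import fftconvolve

gt = np.random.rand(128, 128)   # stand-in for one ground-truth image
psf = np.random.rand(17, 17)    # stand-in for one generated PSF
psf /= psf.sum()                # normalize the PSF to preserve total intensity

simulated = fftconvolve(gt, psf, mode='same')  # simulated microscope image
```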
- Run

```bash
python train_ddpm.py
```

or

```bash
python train_unet.py
```

with the paths to your generated datasets.
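Before training, it can help to sanity-check the generated datasets. A small sketch, assuming the npz key names from the generation example above:

```python
# Quick sanity check of the generated datasets before training.
# The npz key names ('psfs', 'gt') follow the earlier sketch and are assumptions.
import numpy as np

psfs = np.load('train_psfs.npz')['psfs']
gt = np.load('train_gt.npz')['gt']
print(psfs.shape, gt.shape)           # inspect array shapes
assert psfs.shape[0] == gt.shape[0]   # both should cover the same number of samples
```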
- After training the model, run the `test_diffusion` script:

```bash
python test_diffusion.py
```
- You can change the regularization type and strength through the function's parameters (see the hypothetical example below).
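As a purely hypothetical illustration of such a call; the parameter names and values below are assumptions, so check `test_diffusion.py` for the actual ones:

```python
# Hypothetical parameter names; the repository's actual API may differ.
reconstructions = test_diffusion(
    regularization_type='tv',      # assumed name for the regularization type
    regularization_strength=0.01,  # assumed name; tune the strength for your data
)
```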
- The provided teaser file contains both widefield and confocal samples; you can run the model on both to see the differences.
- The output images will be saved in `./imgs_output/testing/reconstructions_confocal.npz` for the confocal teaser and in `./imgs_output/testing/reconstructions_widefield.npz` for the widefield images.
- Inference should take about 23.07 s per modality on a computer with an i9-7900X CPU and an RTX 3090 Ti GPU.
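To inspect the saved reconstructions, a short sketch; the array key inside the npz file is an assumption, so list `data.files` to find the actual one:

```python
# Load and display the saved reconstructions.
import numpy as np
import matplotlib.pyplot as plt

data = np.load('./imgs_output/testing/reconstructions_confocal.npz')
print(data.files)                  # shows the stored array names
recon = data[data.files[0]]        # take the first stored array

plt.imshow(recon[0], cmap='gray')  # display the first reconstructed image
plt.axis('off')
plt.show()
```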
After training the model, you should see the following reconstructed images.