Multiplex image processing for challenging datasets, with a focus on user interaction rather than full automation. The pipeline includes 2D/3D GPU/CPU illumination correction, stitching, deconvolution, extended depth of focus (EDoF), registration, autofluorescence removal, segmentation, clustering, and spatial analysis.
Download and install conda-based environment management software. If you already have it installed, skip to step 2.
Download Miniconda: https://www.anaconda.com/download/success#miniconda.
Follow the installation instructions.
Open a miniconda "Anaconda Prompt" terminal. You will be in the default "base" environment. Update the base environment and set the default dependency solver to libmamba by copying and entering the following:
conda update -n base conda
conda install -n base conda-libmamba-solver
conda config --set solver libmamba
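To confirm the change, you can optionally check which solver is configured; it should report libmamba:
conda config --show solver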
Change directory to your user folder:
cd C:\Users\[your user name]
Install “git” to the base environment:
conda install git -y
To download the code and associated files, enter:
git clone https://github.com/smith6jt-cop/KINTSUGI.git
Change directory into the folder you just downloaded:
cd KINTSUGI
Create the environment by entering:
conda env create -f env.yml
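To verify that the environment was created, you can optionally list your conda environments; KINTSUGI should appear in the output:
conda env list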
Close the Anaconda PowerShell Prompt.
Some necessary files are too large to host on GitHub. Download the extra dependency zip file from https://drive.google.com/file/d/1_CoQ2o4iqc1HT9AlrlmrawKEt5A9dWJx/view?usp=sharing and extract its contents, then extract each zipped file it contains to the KINTSUGI folder. Alternatively, you may download, install, and configure Maven 3.9.9, Java JDK 21, MATLAB Runtime 2024a, PyVips-dev-8.16, and FIJI with the CLIJ2 plugin yourself.
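If you prefer to extract from the Anaconda PowerShell Prompt, one way to do so (the zip file name below is a placeholder for each downloaded file) is:
Expand-Archive -Path .\[downloaded file].zip -DestinationPath C:\Users\[your user name]\KINTSUGI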
Create a folder in the KINTSUGI folder called “data”.
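For example, from inside the KINTSUGI folder you can create it at the prompt:
mkdir data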
To download the test data, use Globus with your institution or a Google/ORCID/GitHub account: https://app.globus.org/file-manager?origin_id=dce1f3d9-f067-11ef-a905-0207be7ee3a1&origin_path=%2F
Alternatively, download it from the KINTSUGI Zenodo community: https://zenodo.org/communities/kintsugi/records?q=&l=list&p=2&s=10&sort=newest
Results of processing the test dataset can be found at: https://app.globus.org/file-manager?origin_id=10f408d9-f5ee-11ef-bf21-0affeb6b961d&origin_path=%2F
Move all image data to [your user folder]\KINTSUGI\data.
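For example, from the Anaconda PowerShell Prompt (the source path below is a placeholder for wherever you downloaded the data):
Move-Item -Path [download location]\* -Destination C:\Users\[your user name]\KINTSUGI\data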
It is recommended to use VS Code to run the notebooks. Download and install VS Code from https://code.visualstudio.com/.
Create a GitHub account if you do not have one. This is necessary for using the AI assistant GitHub Copilot in VS Code.
Launch VS Code and sign in to GitHub.
Close VS Code.
Launch the Anaconda PowerShell Prompt.
Change the directory:
cd C:\Users\[your user name]\KINTSUGI
Activate the environment by entering:
conda activate KINTSUGI
Launch VS Code by entering “code .” (the word ‘code’ followed by a space and a period) from your conda terminal in the activated KINTSUGI environment.
Launching VS Code from the activated environment in the Anaconda Prompt terminal is the only way KINTSUGI should be started; this ensures that all package functions are available.
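For reference, the full launch sequence from a fresh Anaconda PowerShell Prompt is:
cd C:\Users\[your user name]\KINTSUGI
conda activate KINTSUGI
code .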
If prompted to trust the authors of the files in the folder, check the box and click “Yes, I trust the authors.”
If prompted to download or configure "Git", click "Do Not Show Again".
If prompted to "install the recommended extensions ...", click "Install".
The workflow is organized into four notebooks:
1. Parameter tuning/testing: for testing illumination correction, stitching, deconvolution, and EDoF.
2. Batch processing: for batch processing of illumination correction, stitching, deconvolution, EDoF, and registration.
3. Signal Isolation: for autofluorescence subtraction, filtering, and final processing to isolate signal.
4. Segmentation: for InstanSeg segmentation, feature extraction, and spatial analysis.