Merged
2 changes: 1 addition & 1 deletion .github/workflows/test_and_deploy.yml
@@ -30,7 +30,7 @@ jobs:
fail-fast: false
matrix:
platform: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.10", "3.11", "3.12", "3.13"]
python-version: ["3.11", "3.12", "3.13"]

steps:
- uses: actions/checkout@v6
4 changes: 4 additions & 0 deletions .pre-commit-config.yaml
@@ -7,6 +7,10 @@ repos:
- id: trailing-whitespace
exclude: ^\.napari-hub/.*
- id: check-yaml # checks for correct yaml syntax for github actions ex.
exclude: |
(?x)(
^tests/resources/Workflow/workflows/.*\.yaml$
)
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
hooks:
158 changes: 126 additions & 32 deletions README.md
@@ -9,23 +9,26 @@
[![npe2](https://img.shields.io/badge/plugin-npe2-blue?link=https://napari.org/stable/plugins/index.html)](https://napari.org/stable/plugins/index.html)
[![Copier](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/copier-org/copier/master/img/badge/badge-grayscale-inverted-border-purple.json)](https://github.com/copier-org/copier)

reproducible processing workflows with napari
**Reproducible processing workflows with napari**

----------------------------------
A re-implementation of [napari-workflows](https://github.com/haesleinhuepf/napari-workflows) with backwards compatibility.

This [napari] plugin was generated with [copier] using the [napari-plugin-template] (main).
---

<!--
Don't miss the full getting started guide to set up your new package:
https://github.com/napari/napari-plugin-template#getting-started
This [napari] plugin was generated with [copier] using the [napari-plugin-template] (2.0.1).

and review the napari docs for plugin developers:
https://napari.org/stable/plugins/index.html
-->
## What is ndev-workflows?

## Installation
`ndev-workflows` is the workflow backend for napari image processing pipelines. It's a **drop-in replacement** for [napari-workflows](https://github.com/haesleinhuepf/napari-workflows) by Robert Haase, with these key improvements:

- **Safe YAML loading** — Uses `yaml.safe_load()` (no arbitrary code execution)
- **Backwards compatible** — Automatically loads and migrates legacy napari-workflows files, and detects missing dependencies
- **Same API** — Most code works without changes
- **Future-ready** — Designed for upcoming npe2 workflow contributions (WIP), without relying on npe1, napari-time-slicer, and napari-tools-menu for interactivity

---

You can install `ndev-workflows` via [pip]:
## Installation

```bash
pip install ndev-workflows
```

@@ -37,41 +37,132 @@

If napari is not already installed, you can install `ndev-workflows` with napari:

```bash
pip install "ndev-workflows[all]"
```

---

## Quick Start

```python
from ndev_workflows import Workflow, save_workflow, load_workflow
from skimage.filters import gaussian

# Create workflow
workflow = Workflow()
workflow.set("blurred", gaussian, "input_image", sigma=2.0)
workflow.set("input_image", my_image)

# Execute
result = workflow.get("blurred")

# Save
save_workflow("pipeline.yaml", workflow, name="My Pipeline")

# Load and reuse
loaded = load_workflow("pipeline.yaml")
loaded.set("input_image", new_image)
result = loaded.get("blurred")
```
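
The `set`/`get` pattern above works because a workflow is a lazy task graph: `set` records either raw data or a function plus the names of its inputs, and `get` resolves dependencies recursively. A minimal self-contained sketch of the idea (illustrative only, not the actual `ndev_workflows` implementation):

```python
class MiniWorkflow:
    """Toy task graph illustrating the set/get pattern."""

    def __init__(self):
        # name -> raw data, or (func, source names, kwargs)
        self._tasks = {}

    def set(self, name, func_or_data, *sources, **kwargs):
        if callable(func_or_data):
            self._tasks[name] = (func_or_data, sources, kwargs)
        else:
            self._tasks[name] = func_or_data

    def get(self, name):
        task = self._tasks[name]
        if isinstance(task, tuple) and callable(task[0]):
            func, sources, kwargs = task
            # Resolve each named source recursively before calling.
            args = [
                self.get(s) if isinstance(s, str) and s in self._tasks else s
                for s in sources
            ]
            return func(*args, **kwargs)
        return task


w = MiniWorkflow()
w.set("doubled", lambda x: [v * 2 for v in x], "input")
w.set("input", [1, 2, 3])
print(w.get("doubled"))  # [2, 4, 6]
```

Note that, as in the Quick Start, tasks can be declared before their inputs exist; nothing runs until `get` is called.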

---

## YAML Format

Saved workflows use a safe, human-readable format:

```yaml
name: Nucleus Segmentation
description: Gaussian blur and thresholding
modified: '2025-12-22'

inputs:
- raw_image

outputs:
- labels

tasks:
blurred:
function: skimage.filters.gaussian
params:
arg0: raw_image
sigma: 2.0

labels:
function: skimage.measure.label
params:
arg0: blurred
```

**Key features:**

- No `!python/object` tags (safe to share)
- Functions imported by module path
- Params use `arg0`, `arg1`, etc. for positional args and keyword names for kwargs
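
Resolving a function from its stored dotted module path boils down to `importlib`. A hedged sketch of the mechanism (the helper name is illustrative, not the package's actual loader; the test uses a stdlib path since scikit-image may not be installed):

```python
import importlib


def resolve_function(path: str):
    """Resolve a dotted path like 'skimage.filters.gaussian' to a callable."""
    module_path, _, func_name = path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, func_name)


# Stdlib example of the same mechanism:
sqrt = resolve_function("math.sqrt")
print(sqrt(9.0))  # 3.0
```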

To install latest development version:
**Legacy format**: Old napari-workflows YAML files are automatically detected and migrated when loaded.

---

## Important Notes

### Function Dependencies

⚠️ Workflows **don't bundle functions** — they only store module paths. Recipients need the same packages installed.

If loading fails with `WorkflowNotRunnableError`, install the missing package:

```bash
pip install git+https://github.com/ndev-kit/ndev-workflows.git
pip install scikit-image # for skimage functions
pip install napari-segment-blobs-and-things-with-membranes # for that plugin
```
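
Because a workflow only stores module paths, missing packages can be detected before anything is executed, e.g. with `importlib.util.find_spec`. A minimal sketch of that check (function name is illustrative, not necessarily the package's API):

```python
import importlib.util


def find_missing_modules(function_paths):
    """Return the top-level modules from dotted paths that are not installed."""
    missing = set()
    for path in function_paths:
        top_level = path.split(".")[0]
        if importlib.util.find_spec(top_level) is None:
            missing.add(top_level)
    return sorted(missing)


paths = ["math.sqrt", "definitely_not_installed_pkg.foo"]
print(find_missing_modules(paths))  # ['definitely_not_installed_pkg']
```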

### Lazy Loading

Inspect workflows without importing functions:

```python
workflow = load_workflow("untrusted.yaml", lazy=True)
print(workflow.tasks) # Safe - doesn't execute
```

---

## Integration

### Front-end plugins for interactive workflow building:

- [napari-assistant](https://github.com/haesleinhuepf/napari-assistant)
- [napari-workflow-optimizer](https://github.com/haesleinhuepf/napari-workflow-optimizer)
- [napari-workflow-inspector](https://github.com/haesleinhuepf/napari-workflow-inspector)

### Works with processing plugins:

- [napari-segment-blobs-and-things-with-membranes](https://www.napari-hub.org/plugins/napari-segment-blobs-and-things-with-membranes)
- [pyclesperanto](https://github.com/clesperanto/napari_pyclesperanto_assistant)
- And more!

---

## Contributing

Contributions are very welcome. Tests can be run with [tox]; please ensure
the coverage at least stays the same before you submit a pull request.

```bash
git clone https://github.com/ndev-kit/ndev-workflows.git
cd ndev-workflows
uv venv
.venv\Scripts\activate  # Windows; on macOS/Linux: source .venv/bin/activate
uv pip install -e . --group dev
pytest
```

---

## License

Distributed under the terms of the [BSD-3] license,
"ndev-workflows" is free and open source software.
Fork of [napari-workflows](https://github.com/haesleinhuepf/napari-workflows) by Robert Haase.

## Issues

If you encounter any problems, please [file an issue] along with a detailed description.
---

[napari]: https://github.com/napari/napari
[copier]: https://copier.readthedocs.io/en/stable/
[MIT]: http://opensource.org/licenses/MIT
[BSD-3]: http://opensource.org/licenses/BSD-3-Clause
[GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt
[GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt
[Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0
[Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt
[napari-plugin-template]: https://github.com/napari/napari-plugin-template

[file an issue]: https://github.com/ndev-kit/ndev-workflows/issues
## Issues

[tox]: https://tox.readthedocs.io/en/latest/
[pip]: https://pypi.org/project/pip/
[PyPI]: https://pypi.org/
[File an issue](https://github.com/ndev-kit/ndev-workflows/issues) with your environment details, YAML file (if applicable), and error messages.
13 changes: 11 additions & 2 deletions pyproject.toml
@@ -17,19 +17,25 @@ classifiers = [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Image Processing",
]
requires-python = ">=3.10"
requires-python = ">=3.11" # ndevio requires 3.11+
# napari can be included in dependencies if napari imports are required.
# However, you should not include napari[all], napari[qt],
# or any other Qt bindings directly (e.g. PyQt5, PySide2).
# See best practices: https://napari.org/stable/plugins/building_a_plugin/best_practices.html
dependencies = [
"napari",
"nbatch>=0.0.4",
"ndevio>=0.6.0",
"magicgui",
"magic-class",
"numpy",
"dask",
"pyyaml",
]

[project.optional-dependencies]
@@ -42,6 +48,9 @@ dev = [
"tox-uv",
"pytest", # https://docs.pytest.org/en/latest/contents.html
"pytest-cov", # https://pytest-cov.readthedocs.io/en/latest/
"pytest-qt",
"napari[pyqt6]",
"napari-segment-blobs-and-things-with-membranes", # TODO: adds 76 transient dependencies, yuck. currently for legacy/current sample test workflows. will try to remove in future
]

[project.entry-points."napari.manifest"]
Expand Down
30 changes: 29 additions & 1 deletion src/ndev_workflows/__init__.py
@@ -1,7 +1,35 @@
"""ndev-workflows: Reproducible processing workflows with napari.

This package provides workflow management and batch processing for napari.
It is a fork of napari-workflows by Robert Haase (BSD-3-Clause license),
enhanced with:
- Safe YAML loading (no arbitrary code execution)
- Human-readable workflow format
- Integration with ndev-settings and nbatch
- npe2-native plugin architecture

Example
-------
>>> from ndev_workflows import Workflow, save_workflow, load_workflow
>>> w = Workflow()
>>> w.set("blurred", gaussian, "input", sigma=2.0)
>>> save_workflow("my_workflow.yaml", w, name="My Pipeline")
>>>
>>> loaded = load_workflow("my_workflow.yaml")
>>> loaded.set("input", image_data)
>>> result = loaded.get("blurred")
"""

try:
from ._version import version as __version__
except ImportError:
__version__ = 'unknown'

from ._io import load_workflow, save_workflow
from ._workflow import Workflow

__all__ = ()
__all__ = [
'Workflow',
'load_workflow',
'save_workflow',
]
105 changes: 105 additions & 0 deletions src/ndev_workflows/_batch.py
@@ -0,0 +1,105 @@
"""Batch-processing helpers for ndev-workflows."""

from __future__ import annotations

from pathlib import Path

from nbatch import batch


@batch(on_error='continue')
def process_workflow_file(
image_file: Path,
result_dir: Path,
workflow_file: Path,
root_index_list: list[int],
task_names: list[str],
keep_original_images: bool,
root_list: list[str],
squeezed_img_dims: str,
) -> Path:
"""Process a single image file through a workflow.

Loads a fresh workflow instance per file for thread safety.

Parameters
----------
image_file : Path
Path to the image file to process.
result_dir : Path
Directory to save results.
workflow_file : Path
Path to the workflow YAML file.
root_index_list : list[int]
Indices of channels to use as workflow roots.
task_names : list[str]
Names of workflow tasks to execute.
keep_original_images : bool
Whether to concatenate original images with results.
root_list : list[str]
Names of root channels (for output naming).
squeezed_img_dims : str
Squeezed dimension order of the image.

Returns
-------
Path
Path to the saved output file.
"""
import dask.array as da
import numpy as np
from bioio.writers import OmeTiffWriter
from bioio_base import transforms
from ndevio import nImage

from ._io import load_workflow
from ._spec import ensure_runnable

workflow = load_workflow(workflow_file, lazy=True)
workflow = ensure_runnable(workflow)

img = nImage(image_file)

# Capture roots before modifying workflow (stable list of graph inputs)
root_names = workflow.roots()

root_stack = []
for idx, root_index in enumerate(root_index_list):
if 'S' in img.dims.order:
root_img = img.get_image_data('TSZYX', S=root_index)
else:
root_img = img.get_image_data('TCZYX', C=root_index)

root_stack.append(root_img)
workflow.set(name=root_names[idx], func_or_data=np.squeeze(root_img))

result = workflow.get(name=task_names)

result_stack = np.asarray(result)
result_stack = transforms.reshape_data(
data=result_stack,
given_dims='C' + squeezed_img_dims,
return_dims='TCZYX',
)

if result_stack.dtype == np.int64:
result_stack = result_stack.astype(np.int32)

if keep_original_images:
dask_images = da.concatenate(root_stack, axis=1) # along "C"
result_stack = da.concatenate([dask_images, result_stack], axis=1)
result_names = root_list + task_names
else:
result_names = task_names

output_path = result_dir / (image_file.stem + '.tiff')
OmeTiffWriter.save(
data=result_stack,
uri=output_path,
dim_order='TCZYX',
channel_names=result_names,
image_name=image_file.stem,
physical_pixel_sizes=img.physical_pixel_sizes,
)

return output_path
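
The `@batch(on_error='continue')` decorator above comes from nbatch; conceptually it turns a per-file function into one that maps over many inputs and keeps going when a single item fails. A pure-Python sketch of that pattern (illustrative only, not nbatch's actual API):

```python
import functools


def batch_continue(func):
    """Apply func to each item; collect results and per-item errors."""
    @functools.wraps(func)
    def wrapper(items, *args, **kwargs):
        results, errors = [], []
        for item in items:
            try:
                results.append(func(item, *args, **kwargs))
            except Exception as exc:  # record the failure, keep processing
                errors.append((item, exc))
        return results, errors
    return wrapper


@batch_continue
def shout(name):
    if not name:
        raise ValueError("empty name")
    return name.upper()


results, errors = shout(["a", "", "b"])
print(results)  # ['A', 'B']
```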