
Tiling and TilesPatcher #89

Merged
merged 5 commits into from
Oct 2, 2024
Changes from 3 commits
1 change: 1 addition & 0 deletions .github/CODEOWNERS
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
* @tersekmatija @jkbmrz @kkeroo @klemen1999
33 changes: 33 additions & 0 deletions .github/labeler.yaml
@@ -0,0 +1,33 @@
DevOps:
  - changed-files:
      - any-glob-to-any-file: ".github/*"
documentation:
  - changed-files:
      - any-glob-to-any-file: "docs/*"
examples:
  - changed-files:
      - any-glob-to-any-file: "examples/*"
messages:
  - changed-files:
      - any-glob-to-any-file: "depthai_nodes/ml/messages/*"
parsers:
  - changed-files:
      - any-glob-to-any-file: "depthai_nodes/ml/parsers/*"
tests:
  - changed-files:
      - any-glob-to-any-file: "tests/*"
enhancement:
  - head-branch:
      - 'feature/*'
      - 'feat/*'
      - 'enhancement/*'
fix:
  - head-branch:
      - 'fix/*'
      - 'bug/*'
      - 'hotfix/*'
      - 'issue/*'
      - 'bugfix/*'
      - 'patch/*'
release:
  - base-branch: 'main'
20 changes: 17 additions & 3 deletions .github/workflows/ci.yaml
@@ -6,8 +6,9 @@ on:
paths:
- 'depthai_nodes/**'
- 'tests/**'
- .github/workflows/ci.yaml
- 'examples/**'
- 'docs/**'
- .github/workflows/ci.yaml

permissions:
pull-requests: write
@@ -21,6 +22,19 @@ jobs:
- name: Auto-assign
uses: toshimaru/auto-author-assign@v2.1.1

labeler:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ github.head_ref }}

- name: Labeler
uses: actions/labeler@v5
with:
configuration-path: .github/labeler.yaml

pre-commit:
runs-on: ubuntu-latest
steps:
@@ -109,8 +123,8 @@ jobs:
run: |
git config --global user.name 'GitHub Actions'
git config --global user.email 'actions@github.com'
git diff --quiet media/coverage_badge.svg || {
git add media/coverage_badge.svg
git add media/coverage_badge.svg
git diff --quiet --cached media/coverage_badge.svg || {
git commit -m "[Automated] Updated coverage badge"
}

38 changes: 38 additions & 0 deletions .github/workflows/python-publish.yaml
@@ -0,0 +1,38 @@
name: Upload Python Package

on:
  workflow_dispatch:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: '3.8'
          cache: pip

      - name: Install dependencies
        run: pip install build pydoctor

      - name: Generate docs
        run: |
          curl -L "https://raw.githubusercontent.com/luxonis/python-api-analyzer-to-json/main/gen-docs.py" -o "gen-docs.py"
          python gen-docs.py depthai_nodes

      - name: Build package
        run: python -m build

      - name: Publish package
        uses: pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}
6 changes: 0 additions & 6 deletions .pre-commit-config.yaml
@@ -16,12 +16,6 @@ repos:
additional_dependencies: [tomli]
args: [--in-place, --black, --style=epytext]

- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: no-commit-to-branch
args: ['--branch', 'main', '--branch', 'dev']

- repo: https://github.com/executablebooks/mdformat
rev: 0.7.10
hooks:
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -12,7 +12,7 @@ It outlines our workflow and standards for contributing to this project.

## Developing parser

Parser should be developed so that it is consistent with other parsers. Check out other parsers to see the required structure. Additionally, pay attention to the naming of the parser's attributes. Check out [NN Archive Parameters](docs/nn_archive_parameters.md).
Parser should be developed so that it is consistent with other parsers. Check out other parsers to see the required structure. Additionally, pay attention to the naming of the parser's attributes. Check out [Developer guide](docs/developer_guide.md).

## Pre-commit Hooks

13 changes: 10 additions & 3 deletions README.md
@@ -2,6 +2,9 @@

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

![CI](https://github.com/luxonis/depthai-nodes/actions/workflows/ci.yaml/badge.svg)
![Coverage](https://github.com/luxonis/depthai-nodes/blob/dev/media/coverage_badge.svg)

[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![Docformatter](https://img.shields.io/badge/%20formatter-docformatter-fedcba.svg)](https://github.com/PyCQA/docformatter)
[![Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
@@ -33,18 +36,22 @@ To install the package, run:
pip install depthai-nodes
```

Before the official release on PyPI you can install the package from the GitHub repository:
### Manual installation

If you want to manually install the package from the GitHub repository, you can run:

```bash
git clone git@github.com:luxonis/depthai-nodes.git
```

and then install the requirements:
and then inside the directory run:

```bash
pip install -r requirements.txt
pip install .
```

Note: You'll still need to manually install `depthai v3`.

## Contributing

If you want to contribute to this project, read the instructions in [CONTRIBUTING.md](./CONTRIBUTING.md)
2 changes: 2 additions & 0 deletions depthai_nodes/__init__.py
@@ -1 +1,3 @@
from .ml.parsers import *

__version__ = "0.0.1"
59 changes: 40 additions & 19 deletions depthai_nodes/ml/messages/creators/depth.py
@@ -7,7 +7,9 @@


def create_depth_message(
depth_map: np.ndarray, depth_type: Literal["relative", "metric"]
depth_map: np.ndarray,
depth_type: Literal["relative", "metric"],
depth_limit: float = 0.0,
) -> dai.ImgFrame:
"""Create a DepthAI message for a depth map.

@@ -17,23 +19,28 @@ def create_depth_message(
@param depth_type: A string indicating the type of depth map. It can either be
'relative' or 'metric'.
@type depth_type: Literal['relative', 'metric']
@param depth_limit: The maximum depth value (in meters) to be used in the depth map.
The default value is 0, which means no limit.
@type depth_limit: float
@return: An ImgFrame object containing the depth information.
@rtype: dai.ImgFrame
@raise ValueError: If the depth map is not a NumPy array.
@raise ValueError: If the depth map is not 2D or 3D.
@raise ValueError: If the depth map shape is not NHW or HWN.
@raise ValueError: If the depth type is not 'relative' or 'metric'.
@raise NotImplementedError: If the depth type is 'metric'.
@raise ValueError: If the depth limit is not 0 and the depth type is 'relative'.
@raise ValueError: If the depth limit is 0 and the depth type is 'metric'.
@raise ValueError: If the depth limit is negative.
"""

if not isinstance(depth_map, np.ndarray):
raise ValueError(f"Expected numpy array, got {type(depth_map)}.")

if len(depth_map.shape) == 3:
if depth_map.shape[0] == 1:
depth_map = depth_map[0, :, :] # CHW to HW
depth_map = depth_map[0, :, :] # NHW to HW
elif depth_map.shape[2] == 1:
depth_map = depth_map[:, :, 0] # HWC to HW
depth_map = depth_map[:, :, 0] # HWN to HW
else:
raise ValueError(
f"Unexpected image shape. Expected NHW or HWN, got {depth_map.shape}."
@@ -42,27 +49,41 @@
if len(depth_map.shape) != 2:
raise ValueError(f"Expected 2D or 3D input, got {len(depth_map.shape)}D input.")

if depth_type == "relative":
data_type = dai.ImgFrame.Type.RAW16
if not (depth_type == "relative" or depth_type == "metric"):
raise ValueError(
f"Invalid depth type: {depth_type}. Only 'relative' and 'metric' are supported."
)

# normalize depth map to the range [0, 65535]
min_val = depth_map.min()
max_val = depth_map.max()
if min_val == max_val: # avoid division by zero
depth_map = np.zeros_like(depth_map)
else:
depth_map = (depth_map - min_val) / (max_val - min_val) * UINT16_MAX_VALUE
depth_map = depth_map.astype(np.uint16)
if depth_type == "relative" and depth_limit != 0:
raise ValueError(
f"Invalid depth limit: {depth_limit}. For relative depth, depth limit must be equal to 0."
)

elif depth_type == "metric":
raise NotImplementedError(
"The message for 'metric' depth type is not yet implemented."
if depth_type == "metric" and depth_limit == 0:
raise ValueError(
f"Invalid depth limit: {depth_limit}. For metric depth, depth limit must be bigger than 0."
)
else:

if depth_limit < 0:
raise ValueError(
f"Invalid depth type: {depth_type}. Only 'relative' and 'metric' are supported."
f"Invalid depth limit: {depth_limit}. Depth limit must be bigger than 0."
)

data_type = dai.ImgFrame.Type.RAW16

min_val = depth_map.min() if depth_type == "relative" else 0
max_val = depth_map.max() if depth_type == "relative" else depth_limit

# clip values bigger than max_val
depth_map = np.clip(depth_map, a_min=None, a_max=max_val)

# normalize depth map to UINT16 range [0, UINT16_MAX_VALUE]
if min_val == max_val: # avoid division by zero
depth_map = np.zeros_like(depth_map)
else:
depth_map = (depth_map - min_val) / (max_val - min_val) * UINT16_MAX_VALUE
depth_map = depth_map.astype(np.uint16)

imgFrame = dai.ImgFrame()
imgFrame.setFrame(depth_map)
imgFrame.setWidth(depth_map.shape[1])
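The normalization introduced in this hunk can be sketched as a standalone function (a simplified illustration, not the node itself; `UINT16_MAX_VALUE` is assumed to be the 65535 constant used by `depthai_nodes`): relative depth scales to the map's own min/max, while metric depth clips at `depth_limit` and scales to `[0, depth_limit]`.

```python
import numpy as np

UINT16_MAX_VALUE = 65535  # assumed to match the constant imported by depth.py


def normalize_depth(depth_map, depth_type="relative", depth_limit=0.0):
    # Mirror of the logic above: pick the normalization range per depth type.
    min_val = depth_map.min() if depth_type == "relative" else 0
    max_val = depth_map.max() if depth_type == "relative" else depth_limit
    # Clip values bigger than max_val (a no-op for relative depth).
    depth_map = np.clip(depth_map, a_min=None, a_max=max_val)
    if min_val == max_val:  # avoid division by zero
        return np.zeros_like(depth_map, dtype=np.uint16)
    scaled = (depth_map - min_val) / (max_val - min_val) * UINT16_MAX_VALUE
    return scaled.astype(np.uint16)


relative = normalize_depth(np.array([[0.0, 4.0]]), "relative")
metric = normalize_depth(np.array([[0.5, 3.0]]), "metric", depth_limit=2.0)
```

Note that metric values above the limit saturate at `UINT16_MAX_VALUE` rather than wrapping, which is the point of the added `np.clip` call.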
51 changes: 39 additions & 12 deletions depthai_nodes/ml/parsers/mediapipe_palm_detection.py
@@ -23,6 +23,8 @@ class MPPalmDetectionParser(dai.node.ThreadedHostNode):
Non-maximum suppression threshold.
max_det : int
Maximum number of detections to keep.
scale : int
Scale of the input image.

Output Message/s
-------
@@ -36,7 +38,7 @@ class MPPalmDetectionParser(dai.node.ThreadedHostNode):
https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker
"""

def __init__(self, conf_threshold=0.5, iou_threshold=0.5, max_det=100):
def __init__(self, conf_threshold=0.5, iou_threshold=0.5, max_det=100, scale=192):
"""Initializes the MPPalmDetectionParser node.

@param conf_threshold: Confidence score threshold for detected hands.
@@ -53,6 +55,7 @@ def __init__(self, conf_threshold=0.5, iou_threshold=0.5, max_det=100):
self.conf_threshold = conf_threshold
self.iou_threshold = iou_threshold
self.max_det = max_det
self.scale = scale

def setConfidenceThreshold(self, threshold):
"""Sets the confidence score threshold for detected hands.
@@ -78,26 +81,48 @@ def setMaxDetections(self, max_det):
"""
self.max_det = max_det

def setScale(self, scale):
"""Sets the scale of the input image.

@param scale: Scale of the input image.
@type scale: int
"""
self.scale = scale

def run(self):
while self.isRunning():
try:
output: dai.NNData = self.input.get()
except dai.MessageQueue.QueueException:
break # Pipeline was stopped

bboxes = (
output.getTensor("Identity", dequantize=True)
.reshape(2016, 18)
.astype(np.float32)
)
scores = (
output.getTensor("Identity_1", dequantize=True)
.reshape(2016)
.astype(np.float32)
)
all_tensors = output.getAllLayerNames()

bboxes = None
scores = None

for tensor_name in all_tensors:
tensor = output.getTensor(tensor_name, dequantize=True).astype(
np.float32
)
if bboxes is None:
bboxes = tensor
scores = tensor
else:
bboxes = bboxes if tensor.shape[-1] < bboxes.shape[-1] else tensor
scores = tensor if tensor.shape[-1] < scores.shape[-1] else scores

bboxes = bboxes.reshape(-1, 18)
scores = scores.reshape(-1)

if bboxes is None or scores is None:
raise ValueError("No valid output tensors found.")

decoded_bboxes = generate_anchors_and_decode(
bboxes=bboxes, scores=scores, threshold=self.conf_threshold, scale=192
bboxes=bboxes,
scores=scores,
threshold=self.conf_threshold,
scale=self.scale,
)

bboxes = []
@@ -123,6 +148,8 @@ def run(self):
bboxes = np.array(bboxes)[indices]
scores = np.array(scores)[indices]

bboxes = bboxes.astype(np.float32) / self.scale

detections_msg = create_detection_message(bboxes, scores, labels=None)
detections_msg.setTimestamp(output.getTimestamp())
self.out.send(detections_msg)
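The tensor-selection heuristic added in `run` (take the output with the larger last dimension as boxes, the smaller as scores, instead of hard-coding the `Identity`/`Identity_1` names) can be sketched with synthetic arrays standing in for the model outputs:

```python
import numpy as np


def split_outputs(tensors):
    # Mirror of the loop above: after seeding with the first tensor, keep the
    # tensor with the larger last dimension as bboxes and the one with the
    # smaller last dimension as scores.
    bboxes = scores = None
    for tensor in tensors:
        if bboxes is None:
            bboxes = scores = tensor
        else:
            bboxes = bboxes if tensor.shape[-1] < bboxes.shape[-1] else tensor
            scores = tensor if tensor.shape[-1] < scores.shape[-1] else scores
    if bboxes is None or scores is None:
        raise ValueError("No valid output tensors found.")
    return bboxes.reshape(-1, 18), scores.reshape(-1)


# Synthetic stand-ins: 2016 anchors with 18 box values and 1 score each.
boxes_in = np.zeros((2016, 18), dtype=np.float32)
scores_in = np.zeros((2016, 1), dtype=np.float32)
b, s = split_outputs([scores_in, boxes_in])
```

The heuristic is order-independent, which is what makes the parser robust to exporters that rename or reorder the output layers.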