Merge branch 'rvc3_merge' into rvc3_side_channel
Matevz Morato committed Dec 14, 2023
2 parents 4ff8c83 + dca4106 commit 9049204
Showing 56 changed files with 1,038 additions and 194 deletions.
15 changes: 10 additions & 5 deletions .github/workflows/main.yml
@@ -153,8 +153,13 @@ jobs:
run: echo "BUILD_COMMIT_HASH=${{github.sha}}" >> $GITHUB_ENV
- name: Building wheel
run: python3 -m pip wheel . -w ./wheelhouse/ --verbose
- name: Auditing wheel
run: for whl in wheelhouse/*.whl; do auditwheel repair "$whl" --plat linux_armv7l -w wheelhouse/audited/; done
- name: Auditing wheels and adding armv6l tag (Running on RPi, binaries compiled as armv6l)
run: |
python3 -m pip install -U wheel auditwheel
for whl in wheelhouse/*.whl; do auditwheel repair "$whl" --plat linux_armv7l -w wheelhouse/preaudited/; done
for whl in wheelhouse/preaudited/*.whl; do python3 -m wheel tags --platform-tag +linux_armv6l "$whl"; done
mkdir -p wheelhouse/audited/
for whl in wheelhouse/preaudited/*linux_armv6l*.whl; do cp "$whl" wheelhouse/audited/$(basename $whl); done
- name: Archive wheel artifacts
uses: actions/upload-artifact@v3
with:
@@ -559,13 +564,13 @@ jobs:
uses: codex-/return-dispatch@v1
id: return_dispatch
with:
token: ${{ secrets.HIL_CORE_DISPATCH_TOKEN }} # Note this is NOT GITHUB_TOKEN but a PAT
token: ${{ secrets.HIL_CORE_DISPATCH_TOKEN }} # Note this is NOT GITHUB_TOKEN but a PAT
ref: main # or refs/heads/target_branch
repo: depthai-core-hil-tests
owner: luxonis
workflow: regression_test.yml
workflow_inputs: '{"commit": "${{ github.ref }}", "sha": "${{ github.sha }}", "parent_url": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"}'
workflow_timeout_seconds: 120 # Default: 300
workflow_inputs: '{"commit": "${{ github.ref }}", "parent_url": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"}'
workflow_timeout_seconds: 300 # was 120 Default: 300

- name: Release
run: echo "https://github.com/luxonis/depthai-core-hil-tests/actions/runs/${{steps.return_dispatch.outputs.run_id}}" >> $GITHUB_STEP_SUMMARY
5 changes: 3 additions & 2 deletions .readthedocs.yml
@@ -6,7 +6,9 @@
version: 2

build:
image: latest
os: ubuntu-20.04
tools:
python: "3.8"

# Submodules
submodules:
@@ -29,6 +31,5 @@ formats: []

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.8
install:
- requirements: docs/readthedocs/requirements.txt
3 changes: 2 additions & 1 deletion CMakeLists.txt
@@ -172,8 +172,9 @@ if(WIN32)
set(depthai_dll_libraries "$<TARGET_RUNTIME_DLLS:${TARGET_NAME}>")
endif()
add_custom_command(TARGET ${TARGET_NAME} POST_BUILD COMMAND
${CMAKE_COMMAND} -E copy ${depthai_dll_libraries} $<TARGET_FILE_DIR:${TARGET_NAME}>
"$<$<BOOL:${depthai_dll_libraries}>:${CMAKE_COMMAND};-E;copy_if_different;${depthai_dll_libraries};$<TARGET_FILE_DIR:${TARGET_NAME}>>"
COMMAND_EXPAND_LISTS
VERBATIM
)

# Disable "d" postfix, so python can import the library as is
2 changes: 1 addition & 1 deletion depthai-core
Submodule depthai-core updated 56 files
+12 −4 CMakeLists.txt
+1 −1 cmake/Depthai/DepthaiDeviceKbConfig.cmake
+5 −3 cmake/Hunter/config.cmake
+23 −0 cmake/config.hpp.in
+3 −1 examples/CMakeLists.txt
+1 −1 examples/ColorCamera/rgb_preview.cpp
+43 −0 examples/Script/script_read_calibration.cpp
+2 −2 examples/Warp/warp_mesh.cpp
+1 −0 include/depthai/common/CameraFeatures.hpp
+31 −0 include/depthai/device/CalibrationHandler.hpp
+9 −6 include/depthai/device/DeviceBase.hpp
+0 −16 include/depthai/pipeline/datatype/AprilTags.hpp
+32 −0 include/depthai/pipeline/datatype/Buffer.hpp
+9 −0 include/depthai/pipeline/datatype/ImageManipConfig.hpp
+0 −16 include/depthai/pipeline/datatype/ImgDetections.hpp
+2 −17 include/depthai/pipeline/datatype/ImgFrame.hpp
+0 −16 include/depthai/pipeline/datatype/NNData.hpp
+0 −16 include/depthai/pipeline/datatype/SpatialImgDetections.hpp
+0 −16 include/depthai/pipeline/datatype/SpatialLocationCalculatorData.hpp
+4 −0 include/depthai/pipeline/datatype/ToFConfig.hpp
+0 −16 include/depthai/pipeline/datatype/TrackedFeatures.hpp
+0 −16 include/depthai/pipeline/datatype/Tracklets.hpp
+1 −1 include/depthai/pipeline/node/Camera.hpp
+3 −1 include/depthai/pipeline/node/ColorCamera.hpp
+2 −2 include/depthai/pipeline/node/Sync.hpp
+2 −2 include/depthai/pipeline/node/Warp.hpp
+4 −1 include/depthai/xlink/XLinkStream.hpp
+1 −1 shared/depthai-shared
+39 −1 src/device/CalibrationHandler.cpp
+1 −1 src/device/Device.cpp
+55 −65 src/device/DeviceBase.cpp
+1 −1 src/openvino/BlobReader.cpp
+3 −25 src/pipeline/datatype/AprilTags.cpp
+35 −0 src/pipeline/datatype/Buffer.cpp
+10 −0 src/pipeline/datatype/ImageManipConfig.cpp
+3 −25 src/pipeline/datatype/ImgDetections.cpp
+3 −23 src/pipeline/datatype/ImgFrame.cpp
+3 −25 src/pipeline/datatype/NNData.cpp
+3 −25 src/pipeline/datatype/SpatialImgDetections.cpp
+3 −25 src/pipeline/datatype/SpatialLocationCalculatorData.cpp
+4 −11 src/pipeline/datatype/StreamMessageParser.cpp
+5 −0 src/pipeline/datatype/ToFConfig.cpp
+3 −25 src/pipeline/datatype/TrackedFeatures.cpp
+3 −25 src/pipeline/datatype/Tracklets.cpp
+1 −1 src/pipeline/node/Camera.cpp
+22 −0 src/pipeline/node/ColorCamera.cpp
+2 −2 src/pipeline/node/Warp.cpp
+85 −0 src/utility/EepromDataParser.cpp
+29 −0 src/utility/EepromDataParser.hpp
+11 −3 src/utility/Initialization.cpp
+2 −2 src/utility/Path.cpp
+1 −1 src/xlink/XLinkConnection.cpp
+19 −1 src/xlink/XLinkStream.cpp
+5 −1 tests/CMakeLists.txt
+33 −0 tests/src/device_usbspeed_test.cpp
+113 −0 tests/src/naming_test.cpp
2 changes: 1 addition & 1 deletion docs/requirements_mkdoc.txt
@@ -1,3 +1,3 @@
git+https://github.com/luxonis/pybind11_mkdoc.git@da6c64251a0ebbc3ffc007477a0b9c9f20cac165
libclang==16.0.6
numpy # Needed because of xtensor-python
numpy # Needed because of xtensor-python
10 changes: 5 additions & 5 deletions docs/source/_static/install_depthai.sh
@@ -53,7 +53,7 @@ do
if [[ "$python_version" == "" ]]; then
echo "No python version found."
echo "Input path for python binary, version 3.8 or higher, or leave empty and python 3.10 will be installed for you."
echo "Press any key to continue"
echo "Press ENTER key to continue"
read -e python_binary_path < /dev/tty
# python not found and user wants to install python 3.10
if [ "$python_binary_path" = "" ]; then
@@ -66,16 +66,16 @@ do
nr_2=$(echo "${python_version:9:2}" | tr -d -c 0-9)
echo "Python version: $python_version found."
if [ "$nr_1" -gt 2 ] && [ "$nr_2" -gt 7 ]; then # first two digits of python version greater then 3.7 -> python version 3.8 or greater is allowed.
echo "If you want to use it for installation, press ANY key, otherwise input path to python binary."
echo "Press any key to continue"
echo "If you want to use it for installation, press ENTER key, otherwise input path to python binary."
echo "Press ENTER key to continue"
read -e python_binary_path < /dev/tty
# user wants to use already installed python whose version is high enough
if [ "$python_binary_path" = "" ]; then
python_chosen="true"
fi
else
echo "This python version is not supported by depthai. Enter path to python binary version et least 3.8, or leave empty and python 3.10 will be installed automatically."
echo "Press any key to continue"
echo "Press ENTER key to continue"
read -e python_binary_path < /dev/tty
# python version is too low and user wants to install python 3.10
if [ "$python_binary_path" = "" ]; then
@@ -241,7 +241,7 @@ fi

echo -e '\n\n:::::::::::::::: INSTALATION COMPLETE ::::::::::::::::\n'
echo -e '\nTo run demo app write <depthai_launcher> in terminal.'
echo "Press ANY KEY to finish and run the demo app..."
echo "Press ENTER KEY to finish and run the demo app..."
read -n1 key < /dev/tty
echo "STARTING DEMO APP."
python "$DEPTHAI_DIR/launcher/launcher.py" -r "$DEPTHAI_DIR"
14 changes: 12 additions & 2 deletions docs/source/components/bootloader.rst
@@ -4,9 +4,9 @@ Bootloader
==========

DepthAI bootloader is a small program which handles the booting process, either by **booting the flashed application** (see :ref:`Standalone mode`),
or by **initializing the OAK PoE camera** so DepthAI API can connect to it. OAK PoE cameras already come with bootloader flashed at the factory.
or by **initializing the OAK PoE camera** so DepthAI API can connect to it. OAK PoE cameras already come with factory bootloader flashed at the factory.

Bootloader is part of the ``depthai`` library, so to eg. flash the newest bootloader, you should use the newest ``depthai`` library.
Bootloader is bundled with the ``depthai`` library, so to flash the newest bootloader you should use the newest ``depthai`` library.
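
A quick way to check which bootloader binary your installed ``depthai`` release carries is to query it from the API (a minimal sketch; ``getEmbeddedBootloaderVersion()`` is a static helper on ``DeviceBootloader`` in recent releases, verify against your version):

.. code-block:: python

   import depthai as dai

   print("depthai version:", dai.__version__)
   # Version of the bootloader binary bundled inside this depthai release
   print("embedded bootloader:", dai.DeviceBootloader.getEmbeddedBootloaderVersion())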

Device Manager
##############
@@ -81,6 +81,16 @@ Device Manager will try to flash the user bootloader first, if flashed (factory)
* **Factory reset** will erase the whole flash content and re-flash it with only the USB or NETWORK bootloader. Flashed application (pipeline, assets) and bootloader configurations will be lost.
* **Boot into USB recovery mode** will force eg. OAK PoE camera to be available through the USB connector, even if its boot pins are set to PoE booting. It is mostly used by our firmware developers.

Factory and User bootloader
###########################

There are two types of bootloaders:

- **Factory bootloader**: bootloader that is flashed in the factory. We don't recommend re-flashing this bootloader, as it is not meant to be edited by end users.
- **User bootloader**: bootloader that can be flashed by the user. If booting it is unsuccessful (eg. it got corrupted during flashing), the device falls back to the factory bootloader.

USB devices don't support a user bootloader. If a device has a user bootloader flashed, it is used by default; if no user bootloader is flashed, the device falls back to the factory bootloader.
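
A minimal sketch of checking which bootloader a connected device is running (treat ``isUserBootloader()`` as an assumption — it is expected in releases that introduced the factory/user split, so verify against your ``depthai`` version):

.. code-block:: python

   import depthai as dai

   found, info = dai.DeviceBootloader.getFirstAvailableDevice()
   if found:
       with dai.DeviceBootloader(info) as bl:
           print("Bootloader version:", bl.getVersion())
           # Assumption: isUserBootloader() reports whether the user slot was booted;
           # False means the device fell back to (or only has) the factory bootloader.
           print("Running user bootloader:", bl.isUserBootloader())
   else:
       print("No device found")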

Boot switches
#############

149 changes: 149 additions & 0 deletions docs/source/components/nodes/camera.rst
@@ -0,0 +1,149 @@
Camera
======

Camera node is a source of :ref:`image frames <ImgFrame>`. You can control it at runtime with the :code:`InputControl` and :code:`InputConfig`.
It aims to unify the :ref:`ColorCamera` and :ref:`MonoCamera` into one node.

Compared to :ref:`ColorCamera` node, Camera node:

- Supports **cam.setSize()**, which replaces both ``cam.setResolution()`` and ``cam.setIspScale()``. Camera node will automatically find the sensor resolution that fits best and apply the correct scaling to achieve the user-selected size
- Supports **cam.setCalibrationAlpha()**, example here: :ref:`Undistort camera stream`
- Supports **cam.loadMeshData()** and **cam.setMeshStep()**, which can be used for custom image warping (undistortion, perspective correction, etc.)

Besides points above, compared to :ref:`MonoCamera` node, Camera node:

- Doesn't have ``out`` output, as it has the same outputs as :ref:`ColorCamera` (``raw``, ``isp``, ``still``, ``preview``, ``video``). This means that ``preview`` will output 3 planes of the same grayscale frame (3x overhead), and ``isp`` / ``video`` / ``still`` will output luma (useful grayscale information) + chroma (all values are 128), which will result in 1.5x bandwidth overhead

How to place it
###############

.. tabs::

.. code-tab:: py

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)

.. code-tab:: c++

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>();


Inputs and Outputs
##################

.. code-block::
Camera node
┌──────────────────────────────┐
│ ┌─────────────┐ │
│ │ Image │ raw │ raw
│ │ Sensor │---┬--------├────────►
│ └────▲────────┘ | │
│ │ ┌--------┘ │
│ ┌─┴───▼─┐ │ isp
inputControl │ │ │-------┬-------├────────►
──────────────►│------│ ISP │ ┌─────▼────┐ │ video
│ │ │ | |--├────────►
│ └───────┘ │ Image │ │ still
inputConfig │ │ Post- │--├────────►
──────────────►│----------------|Processing│ │ preview
│ │ │--├────────►
│ └──────────┘ │
└──────────────────────────────┘
**Message types**

- :code:`inputConfig` - :ref:`ImageManipConfig`
- :code:`inputControl` - :ref:`CameraControl`
- :code:`raw` - :ref:`ImgFrame` - RAW10 bayer data. Demo code for unpacking `here <https://github.com/luxonis/depthai-experiments/blob/3f1b2b2/gen2-color-isp-raw/main.py#L13-L32>`__
- :code:`isp` - :ref:`ImgFrame` - YUV420 planar (same as YU12/IYUV/I420)
- :code:`still` - :ref:`ImgFrame` - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the Camera, so it's like taking a photo
- :code:`preview` - :ref:`ImgFrame` - RGB (or BGR planar/interleaved if configured), mostly suited for small size previews and to feed the image into :ref:`NeuralNetwork`
- :code:`video` - :ref:`ImgFrame` - NV12, suitable for bigger size frames
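
To consume any of these outputs on the host, link it to an :ref:`XLinkOut` node and read from the corresponding output queue; a minimal sketch for the ``isp`` output (stream name and queue sizes are arbitrary):

.. code-block:: python

   import cv2
   import depthai as dai

   pipeline = dai.Pipeline()
   cam = pipeline.create(dai.node.Camera)
   cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)

   xout = pipeline.create(dai.node.XLinkOut)
   xout.setStreamName("isp")
   cam.isp.link(xout.input)  # raw/still/preview/video can be linked the same way

   with dai.Device(pipeline) as device:
       q = device.getOutputQueue(name="isp", maxSize=4, blocking=False)
       while True:
           frame = q.get().getCvFrame()  # YUV420 gets converted to BGR for display
           cv2.imshow("isp", frame)
           if cv2.waitKey(1) == ord('q'):
               break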

**ISP** (image signal processor) is used for bayer transformation, demosaicing, noise reduction, and other image enhancements.
It interacts with the 3A algorithms: **auto-focus**, **auto-exposure**, and **auto-white-balance**, which are handling image sensor
adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime.
Click `here <https://en.wikipedia.org/wiki/Image_processor>`__ for more information.

**Image Post-Processing** converts YUV420 planar frames from the **ISP** into :code:`video`/:code:`preview`/:code:`still` frames.

``still`` (when a capture is triggered) and ``isp`` work at the max camera resolution, while ``video`` and ``preview`` are
limited to max 4K (3840 x 2160) resolution, which is cropped from ``isp``.
For IMX378 (12MP), the **post-processing** works like this:

.. code-block::
┌─────┐ Cropping to ┌─────────┐ Downscaling ┌──────────┐
│ ISP ├────────────────►│ video ├───────────────►│ preview │
└─────┘ max 3840x2160 └─────────┘ and cropping └──────────┘
.. image:: /_static/images/tutorials/isp.jpg

The image above is the ``isp`` output from the Camera (12MP resolution from IMX378). If you aren't downscaling ISP,
the ``video`` output is cropped to 4k (max 3840x2160 due to the limitation of the ``video`` output) as represented by
the blue rectangle. The yellow rectangle represents a cropped ``preview`` output when the preview size is set to a 1:1 aspect
ratio (eg. when using a 300x300 preview size for the MobileNet-SSD NN model) because the ``preview`` output is derived from
the ``video`` output.

Usage
#####

.. tabs::

.. code-tab:: py

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.setPreviewSize(300, 300)
cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
# Instead of setting the resolution, user can specify size, which will set
# sensor resolution to best fit, and also apply scaling
cam.setSize(1280, 720)

.. code-tab:: c++

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>();
cam->setPreviewSize(300, 300);
cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
// Instead of setting the resolution, user can specify size, which will set
// sensor resolution to best fit, and also apply scaling
cam->setSize(1280, 720);

Limitations
###########

Here are known camera limitations for the `RVC2 <https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#rvc2>`__:

- **ISP can process about 600 MP/s**, and about **500 MP/s** when the pipeline is also running NNs and video encoder in parallel
- **3A algorithms** can process about **200..250 FPS overall** (for all camera streams). This is a current limitation of our implementation, and we have plans for a workaround to run 3A algorithms on every Xth frame, no ETA yet
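
As a rough worked example of the pixel-rate budget above (simple arithmetic, not an exact model of the ISP):

.. code-block:: python

   # Sum the pixel rate of all camera streams and compare against the ISP budget:
   # ~600 MP/s idle, ~500 MP/s when NNs and the video encoder also run in parallel.
   streams = {
       "CAM_A 4K@30": 3840 * 2160 * 30,    # ~248.8 MP/s
       "CAM_B 800P@60": 1280 * 800 * 60,   # ~61.4 MP/s
       "CAM_C 800P@60": 1280 * 800 * 60,   # ~61.4 MP/s
   }
   total = sum(streams.values())
   print(f"Total: {total / 1e6:.1f} MP/s ->",
         "fits" if total <= 500e6 else "over budget for a loaded pipeline")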

Examples of functionality
#########################

- :ref:`Undistort camera stream`

Reference
#########

.. tabs::

.. tab:: Python

.. autoclass:: depthai.node.Camera
:members:
:inherited-members:
:noindex:

.. tab:: C++

.. doxygenclass:: dai::node::Camera
:project: depthai-core
:members:
:private-members:
:undoc-members:

.. include:: ../../includes/footer-short.rst
2 changes: 1 addition & 1 deletion docs/source/components/nodes/imu.rst
@@ -1,7 +1,7 @@
IMU
===

IMU (`intertial measurement unit <https://en.wikipedia.org/wiki/Inertial_measurement_unit>`__) node can be used to receive data
IMU (`inertial measurement unit <https://en.wikipedia.org/wiki/Inertial_measurement_unit>`__) node can be used to receive data
from the IMU chip on the device. Our OAK devices use either:

- `BNO085 <https://www.ceva-dsp.com/product/bno080-085/>`__ (`datasheet here <https://www.ceva-dsp.com/wp-content/uploads/2019/10/BNO080_085-Datasheet.pdf>`__) 9-axis sensor, combining accelerometer, gyroscope, and magnetometer. It also does sensor fusion on the (IMU) chip itself. We have efficiently integrated `this driver <https://github.com/hcrest/bno080-driver>`__ into the DepthAI.
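
A minimal sketch of reading accelerometer packets from the IMU node (the sensor and report rate are illustrative; check which sensors your device's IMU supports):

.. code-block:: python

   import depthai as dai

   pipeline = dai.Pipeline()
   imu = pipeline.create(dai.node.IMU)
   imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 400)  # 400 Hz raw accelerometer
   imu.setBatchReportThreshold(1)
   imu.setMaxBatchReports(10)

   xout = pipeline.create(dai.node.XLinkOut)
   xout.setStreamName("imu")
   imu.out.link(xout.input)

   with dai.Device(pipeline) as device:
       q = device.getOutputQueue(name="imu", maxSize=50, blocking=False)
       while True:
           for packet in q.get().packets:
               a = packet.acceleroMeter
               print(f"accel [m/s^2]: x={a.x:.3f} y={a.y:.3f} z={a.z:.3f}")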
6 changes: 2 additions & 4 deletions docs/source/components/nodes/video_encoder.rst
@@ -81,13 +81,11 @@ Limitations
###########

For **H.264 / H.265 encoding**, we have the following limits:
- **248 million pixels/second** limit for the encoder or 3840x2160 pixels at 30FPS. The resolution and frame rate can be divided into multiple streams - but the sum of all the pixels/second needs to be below 248 million.
- **248 million pixels/second** (4K@30) limit for the encoder. The resolution and frame rate can be divided into multiple streams - but the sum of all the pixels/second needs to be below 248 million.
- Due to a HW constraint, video encoding can be done only on frames whose width values are multiples of 32.
- 4096 pixel max width for a frame.
- Maximum of 3 parallel encoding streams.

The **MJPEG encoder** is capable of 16384x8192 resolution at 500Mpixel/second. From our testing, we were able to encode
4K at 30FPS and 2x 800P at 55FPS.
The **MJPEG encoder** is capable of 16384x8192 resolution at 450 MPix/sec. From our testing, we were able to encode 4K at 30FPS and 2x 800P at 55FPS.

Note the processing resources of the encoder **are shared** between H.26x and JPEG.
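
For illustration, a quick budget check against the 248 MP/s H.26x limit (plain arithmetic; the real encoder additionally enforces the width-multiple-of-32, 4096-pixel-width and 3-stream constraints listed above):

.. code-block:: python

   # All H.264/H.265 streams together must stay under ~248.8 MP/s (4K @ 30 FPS)
   LIMIT = 3840 * 2160 * 30
   streams = [(1920, 1080, 60), (1280, 800, 30), (1280, 800, 30)]  # (width, height, fps)
   total = sum(w * h * fps for w, h, fps in streams)
   print(f"{total / 1e6:.1f} MP/s of {LIMIT / 1e6:.1f} MP/s ->",
         "fits" if total <= LIMIT else "too much for the encoder")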

2 changes: 1 addition & 1 deletion docs/source/components/nodes/warp.rst
@@ -65,7 +65,7 @@ Usage
# Warp engines to be used (0,1,2)
warp.setHwIds([1])
# Warp interpolation mode, choose between BILINEAR, BICUBIC, BYPASS
warp.setInterpolation(dai.node.Warp.Properties.Interpolation.BYPASS)
warp.setInterpolation(dai.Interpolation.NEAREST_NEIGHBOR)

.. code-tab:: c++

43 changes: 43 additions & 0 deletions docs/source/samples/Camera/camera_undistort.rst
@@ -0,0 +1,43 @@
Undistort camera stream
=======================

This example shows how you can use the :ref:`Camera` node to undistort a wide FOV camera stream. The :ref:`Camera` node will automatically undistort the ``still``, ``video`` and ``preview`` streams, while the ``isp`` stream is left as-is.
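
A minimal sketch of the idea, showing the distorted ``isp`` stream next to the automatically undistorted ``video`` stream (socket, size and stream names here are illustrative; see the full example below):

.. code-block:: python

   import cv2
   import depthai as dai

   pipeline = dai.Pipeline()
   cam = pipeline.create(dai.node.Camera)
   cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
   cam.setSize(1280, 800)

   xout_isp = pipeline.create(dai.node.XLinkOut)
   xout_isp.setStreamName("isp")      # left: distorted, as captured
   cam.isp.link(xout_isp.input)

   xout_video = pipeline.create(dai.node.XLinkOut)
   xout_video.setStreamName("video")  # right: undistorted by the Camera node
   cam.video.link(xout_video.input)

   with dai.Device(pipeline) as device:
       q_isp = device.getOutputQueue("isp", maxSize=4, blocking=False)
       q_video = device.getOutputQueue("video", maxSize=4, blocking=False)
       while True:
           cv2.imshow("isp (distorted)", q_isp.get().getCvFrame())
           cv2.imshow("video (undistorted)", q_video.get().getCvFrame())
           if cv2.waitKey(1) == ord('q'):
               break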

Demo
####

.. figure:: https://github.com/luxonis/depthai-python/assets/18037362/936b9ad7-179b-42a5-a6cb-25efdbdf73d9

Left: Camera.isp output. Right: Camera.video (undistorted) output

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

.. tabs::

.. tab:: Python

Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/Camera/camera_undistort.py>`__

.. literalinclude:: ../../../../examples/Camera/camera_undistort.py
:language: python
:linenos:

.. tab:: C++

Work in progress.


..
Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/Camera/camera_undistort.cpp>`__
.. literalinclude:: ../../../../depthai-core/examples/Camera/camera_undistort.cpp
:language: cpp
:linenos:

.. include:: /includes/footer-short.rst
6 changes: 4 additions & 2 deletions docs/source/samples/bootloader/bootloader_version.rst
@@ -16,8 +16,10 @@ Example script output
.. code-block:: bash
~/depthai-python/examples$ python3 bootloader_version.py
Found device with name: 14442C10D1789ACD00-ma2480
Version: 0.0.15
Found device with name: 1.1
Version: 0.0.26
USB Bootloader - supports only Flash memory
Memory 'Memory.FLASH' size: 33554432, info: JEDEC ID: 01 02 19
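
For reference, the kind of query that produces output like the above boils down to the following sketch (the shipped ``bootloader_version.py`` example may differ in details such as the memory reporting):

.. code-block:: python

   import depthai as dai

   found, info = dai.DeviceBootloader.getFirstAvailableDevice()
   if found:
       print(f"Found device with name: {info.name}")
       with dai.DeviceBootloader(info) as bl:
           print(f"Version: {bl.getVersion()}")
           # The memory/flash details in the output above come from additional
           # bootloader queries whose exact accessors are version-dependent.
   else:
       print("No devices found")
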
Setup
#####