diff --git a/docs/source/components/nodes/camera.rst b/docs/source/components/nodes/camera.rst
new file mode 100644
index 000000000..32740baa8
--- /dev/null
+++ b/docs/source/components/nodes/camera.rst
@@ -0,0 +1,149 @@
+Camera
+======
+
+The Camera node is a source of :ref:`image frames <ImgFrame>`. You can control it at runtime via the :code:`inputControl` and :code:`inputConfig` inputs.
+It aims to unify the :ref:`ColorCamera` and :ref:`MonoCamera` nodes into a single node.
+
+Compared to the :ref:`ColorCamera` node, the Camera node:
+
+- Supports **cam.setSize()**, which replaces both ``cam.setResolution()`` and ``cam.setIspScale()``. The Camera node automatically selects the sensor resolution that fits best and applies the correct scaling to achieve the user-selected size
+- Supports **cam.setCalibrationAlpha()**; see the example here: :ref:`Undistort camera stream`
+- Supports **cam.loadMeshData()** and **cam.setMeshStep()**, which can be used for custom image warping (undistortion, perspective correction, etc.)
+
+Besides the points above, compared to the :ref:`MonoCamera` node, the Camera node:
+
+- Doesn't have an ``out`` output, as it has the same outputs as :ref:`ColorCamera` (``raw``, ``isp``, ``still``, ``preview``, ``video``). This means that ``preview`` will output 3 planes of the same grayscale frame (3x overhead), and ``isp`` / ``video`` / ``still`` will output luma (the useful grayscale information) + chroma (all values are 128), which results in a 1.5x bandwidth overhead
+
+How to place it
+###############
+
+.. tabs::
+
+  .. code-tab:: py
+
+    pipeline = dai.Pipeline()
+    cam = pipeline.create(dai.node.Camera)
+
+  .. code-tab:: c++
+
+    dai::Pipeline pipeline;
+    auto cam = pipeline.create<dai::node::Camera>();
+
+
+Inputs and Outputs
+##################
+
+.. code-block::
+
+                                Camera node
+                  ┌──────────────────────────────┐
+                  │   ┌─────────────┐            │
+                  │   │    Image    │ raw        │     raw
+                  │   │    Sensor   │---┬--------├────────►
+                  │   └────▲────────┘   |        │
+                  │        │   ┌--------┘        │
+                  │      ┌─┴───▼─┐               │     isp
+    inputControl  │      │       │-------┬-------├────────►
+    ─────────────►│------│  ISP  │  ┌────▼─────┐ │   video
+                  │      │       │  │          │-├────────►
+                  │      └───────┘  │   Image  │ │   still
+    inputConfig   │                 │   Post-  │-├────────►
+    ─────────────►│-----------------│Processing│ │  preview
+                  │                 │          │-├────────►
+                  │                 └──────────┘ │
+                  └──────────────────────────────┘
+
+**Message types**
+
+- :code:`inputConfig` - :ref:`ImageManipConfig`
+- :code:`inputControl` - :ref:`CameraControl`
+- :code:`raw` - :ref:`ImgFrame` - RAW10 bayer data. Demo code for unpacking `here `__
+- :code:`isp` - :ref:`ImgFrame` - YUV420 planar (same as YU12/IYUV/I420)
+- :code:`still` - :ref:`ImgFrame` - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the Camera, so it's like taking a photo
+- :code:`preview` - :ref:`ImgFrame` - RGB (or BGR planar/interleaved if configured), mostly suited for small size previews and for feeding the image into a :ref:`NeuralNetwork`
+- :code:`video` - :ref:`ImgFrame` - NV12, suitable for bigger size frames
+
+The **ISP** (image signal processor) is used for bayer transformation, demosaicing, noise reduction, and other image enhancements.
+It interacts with the 3A algorithms: **auto-focus**, **auto-exposure**, and **auto-white-balance**, which handle image sensor
+adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime.
+Click `here `__ for more information.
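+
+Because the 3A algorithms run on the device, you can override them at runtime (and also trigger a ``still`` capture event) by sending a :ref:`CameraControl` message to the ``inputControl`` input.
+The snippet below is only a minimal, illustrative sketch: the ``control`` stream name and the exposure/focus values are arbitrary, and the ``still``/``video`` outputs would still need to be linked (e.g. to an :ref:`XLinkOut`) to be consumed on the host.
+
+.. code-block:: python
+
+    pipeline = dai.Pipeline()
+    cam = pipeline.create(dai.node.Camera)
+    cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
+
+    # XLinkIn lets the host send CameraControl messages to cam.inputControl
+    controlIn = pipeline.create(dai.node.XLinkIn)
+    controlIn.setStreamName("control")  # arbitrary stream name
+    controlIn.out.link(cam.inputControl)
+
+    with dai.Device(pipeline) as device:
+        controlQueue = device.getInputQueue("control")
+
+        # Override auto-exposure and auto-focus with manual values (example values)
+        ctrl = dai.CameraControl()
+        ctrl.setManualExposure(20000, 400)  # exposure time [us], ISO
+        ctrl.setManualFocus(130)            # lens position 0..255 (motorized lens only)
+        controlQueue.send(ctrl)
+
+        # Trigger a capture event - one frame is emitted on the 'still' output
+        ctrl = dai.CameraControl()
+        ctrl.setCaptureStill(True)
+        controlQueue.send(ctrl)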
+
+**Image Post-Processing** converts the YUV420 planar frames from the **ISP** into :code:`video`/:code:`preview`/:code:`still` frames.
+
+``still`` (when a capture is triggered) and ``isp`` work at the maximum camera resolution, while ``video`` and ``preview`` are
+limited to a maximum of 4K (3840 x 2160) resolution, which is cropped from ``isp``.
+For IMX378 (12MP), the **post-processing** works like this:
+
+.. code-block::
+
+    ┌─────┐   Cropping to     ┌─────────┐  Downscaling   ┌──────────┐
+    │ ISP ├──────────────────►│  video  ├───────────────►│ preview  │
+    └─────┘   max 3840x2160   └─────────┘  and cropping  └──────────┘
+
+.. image:: /_static/images/tutorials/isp.jpg
+
+The image above is the ``isp`` output from the Camera (12MP resolution from IMX378). If you aren't downscaling the ISP output,
+the ``video`` output is cropped to 4K (max 3840x2160, a limitation of the ``video`` output), as represented by
+the blue rectangle. The yellow rectangle represents a cropped ``preview`` output when the preview size is set to a 1:1 aspect
+ratio (e.g. when using a 300x300 preview size for the MobileNet-SSD NN model), because the ``preview`` output is derived from
+the ``video`` output.
+
+Usage
+#####
+
+.. tabs::
+
+  .. code-tab:: py
+
+    pipeline = dai.Pipeline()
+    cam = pipeline.create(dai.node.Camera)
+    cam.setPreviewSize(300, 300)
+    cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
+    # Instead of setting the resolution, the user specifies the desired size; the node
+    # selects the best-fitting sensor resolution and applies scaling
+    cam.setSize(1280, 720)
+
+  .. code-tab:: c++
+
+    dai::Pipeline pipeline;
+    auto cam = pipeline.create<dai::node::Camera>();
+    cam->setPreviewSize(300, 300);
+    cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
+    // Instead of setting the resolution, the user specifies the desired size; the node
+    // selects the best-fitting sensor resolution and applies scaling
+    cam->setSize(1280, 720);
+
+Limitations
+###########
+
+Here are the known camera limitations for the `RVC2 `__:
+
+- The **ISP can process about 600 MP/s**, and about **500 MP/s** when the pipeline is also running NNs and the video encoder in parallel (a rough budget check is sketched below this list)
+- The **3A algorithms** can process about **200..250 FPS overall** (for all camera streams). This is a current limitation of our implementation, and we have plans for a workaround to run the 3A algorithms only on every Xth frame; no ETA yet
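+
+As a rough sanity check, you can compare your camera configuration against this budget. The sketch below is illustrative only; the resolutions and frame rates are example values (roughly a 12MP color + two 1MP mono setup), not a requirement:
+
+.. code-block:: python
+
+    ISP_BUDGET_MP_PER_S = 600  # ~500 MP/s if NNs and the video encoder run in parallel
+
+    # (width, height, fps) per camera stream - example values
+    streams = [
+        (4056, 3040, 30),  # 12MP color camera
+        (1280, 800, 60),   # 1MP mono camera
+        (1280, 800, 60),   # 1MP mono camera
+    ]
+
+    total_mp_per_s = sum(w * h * fps / 1e6 for w, h, fps in streams)
+    total_fps = sum(fps for _, _, fps in streams)
+
+    print(f"ISP load: {total_mp_per_s:.0f} MP/s (budget ~{ISP_BUDGET_MP_PER_S} MP/s)")
+    print(f"Total FPS for 3A: {total_fps} (budget ~200-250 FPS)")
+    # ISP load: 493 MP/s (budget ~600 MP/s)
+    # Total FPS for 3A: 150 (budget ~200-250 FPS)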
+
+Examples of functionality
+#########################
+
+- :ref:`Undistort camera stream`
+
+Reference
+#########
+
+.. tabs::
+
+  .. tab:: Python
+
+    .. autoclass:: depthai.node.Camera
+      :members:
+      :inherited-members:
+      :noindex:
+
+  .. tab:: C++
+
+    .. doxygenclass:: dai::node::Camera
+      :project: depthai-core
+      :members:
+      :private-members:
+      :undoc-members:
+
+.. include:: ../../includes/footer-short.rst
diff --git a/docs/source/samples/Camera/camera_undistort.rst b/docs/source/samples/Camera/camera_undistort.rst
new file mode 100644
index 000000000..eaa43da43
--- /dev/null
+++ b/docs/source/samples/Camera/camera_undistort.rst
@@ -0,0 +1,43 @@
+Undistort camera stream
+=======================
+
+This example shows how you can use the :ref:`Camera` node to undistort a wide-FOV camera stream. The :ref:`Camera` node automatically undistorts the ``still``, ``video`` and ``preview`` streams, while the ``isp`` stream is left as-is.
+
+Demo
+####
+
+.. figure:: https://github.com/luxonis/depthai-python/assets/18037362/936b9ad7-179b-42a5-a6cb-25efdbdf73d9
+
+  Left: Camera.isp output. Right: Camera.video (undistorted) output
+
+Setup
+#####
+
+.. include:: /includes/install_from_pypi.rst
+
+Source code
+###########
+
+.. tabs::
+
+  .. tab:: Python
+
+    Also `available on GitHub `__
+
+    .. literalinclude:: ../../../../examples/Camera/camera_undistort.py
+      :language: python
+      :linenos:
+
+  .. tab:: C++
+
+    Work in progress.
+
+
+..
+  Also `available on GitHub `__
+
+  .. literalinclude:: ../../../../depthai-core/examples/Camera/camera_undistort.cpp
+    :language: cpp
+    :linenos:
+
+.. include:: /includes/footer-short.rst
diff --git a/docs/source/tutorials/code_samples.rst b/docs/source/tutorials/code_samples.rst
index 8d1af2a57..a5d9f3e2f 100644
--- a/docs/source/tutorials/code_samples.rst
+++ b/docs/source/tutorials/code_samples.rst
@@ -7,6 +7,7 @@ Code Samples
 
    ../samples/bootloader/*
    ../samples/calibration/*
+   ../samples/Camera/*
    ../samples/ColorCamera/*
    ../samples/crash_report/*
    ../samples/EdgeDetector/*
@@ -44,6 +45,11 @@ are presented with code.
 
 - :ref:`Calibration Reader` - Reads calibration data stored on device over XLink
 - :ref:`Calibration Load` - Loads and uses calibration data of version 6 (gen2 calibration data) in a pipeline
+
+.. rubric:: Camera
+
+- :ref:`Undistort camera stream` - Showcases how the Camera node undistorts camera streams
+
 .. rubric:: ColorCamera
 
 - :ref:`Auto Exposure on ROI` - Demonstrates how to use auto exposure based on the selected ROI
diff --git a/examples/Camera/camera_undistort.py b/examples/Camera/camera_undistort.py
new file mode 100644
index 000000000..0d78f1d9e
--- /dev/null
+++ b/examples/Camera/camera_undistort.py
@@ -0,0 +1,32 @@
+import depthai as dai
+import cv2
+
+pipeline = dai.Pipeline()
+
+# Define sources and outputs
+camRgb: dai.node.Camera = pipeline.create(dai.node.Camera)
+
+# Properties
+camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
+camRgb.setSize((1280, 800))
+
+# Linking
+videoOut = pipeline.create(dai.node.XLinkOut)
+videoOut.setStreamName("video")
+camRgb.video.link(videoOut.input)
+
+ispOut = pipeline.create(dai.node.XLinkOut)
+ispOut.setStreamName("isp")
+camRgb.isp.link(ispOut.input)
+
+with dai.Device(pipeline) as device:
+    video = device.getOutputQueue(name="video", maxSize=1, blocking=False)
+    isp = device.getOutputQueue(name="isp", maxSize=1, blocking=False)
+
+    while True:
+        if video.has():
+            cv2.imshow("video", video.get().getCvFrame())
+        if isp.has():
+            cv2.imshow("isp", isp.get().getCvFrame())
+        if cv2.waitKey(1) == ord('q'):
+            break