This repository is developed in the Fish-IoT project:
https://www.tequ.fi/en/project-bank/fish-iot/
This repository is a collection of useful Node-RED subflows for working with computer vision and cameras. It also contains example subflows for sending data to the Tequ API, which was created in the Fish-IoT project to receive and archive images, videos and other data. Most of the examples can be used without access to the Tequ API.
Tequ API documentation
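As a rough illustration of how the [API] subflows below hand data to an http request node, an image upload could be prepared in a Node-RED function node along these lines. This is only a minimal sketch: the endpoint URL, request fields and authorization header format are assumptions for illustration, not the documented Tequ API contract.

```javascript
// Hypothetical Node-RED function node feeding an http request node.
// Endpoint URL, field names and auth header are placeholders, not the real Tequ API contract.
const token = flow.get("tequ_api_token");          // e.g. stored earlier by [API] Get Token
msg.method = "POST";
msg.url = "https://tequ-api.example.com/image";    // placeholder endpoint
msg.headers = { "Authorization": "Bearer " + token };
msg.payload = {
    site: "example-site",                          // placeholder metadata
    timestamp: new Date().toISOString(),
    image: msg.payload.toString("base64")          // msg.payload assumed to be a JPEG buffer
};
return msg;
```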
Examples of how to use these subflows as a functional computer vision system:
- Windows 10: https://github.com/Lapland-UAS-Tequ/win10-nodered-tensorflow
- Jetson: https://github.com/Lapland-UAS-Tequ/tequ-jetson-nodered-tensorflow/
- NVIDIA Triton: https://github.com/Lapland-UAS-Tequ/tequ-setup-triton-inference-server
Subflow | Version | Desc | JSON |
---|---|---|---|
[CAM] MJPEG stream | 0.0.1 | Connect to an MJPEG stream using a URL. | json |
[CAM] RPi HQ MJPEG | 0.0.1 | Stream MJPEG from RPi HQ-camera (raspistill, raspivid) | json |
[CAM] RPi libcamera | 0.0.1 | Stream MJPEG from RPi HQ-camera (libcamera) | json |
[AI] Detect-sm | 0.0.1 | Make prediction on image using Tensorflow SavedModel trained with tequ-tf2-ca-training-pipeline | json |
[AI] Detect-Triton | 0.0.1 | Make prediction on image using Tensorflow SavedModel hosted in NVIDIA Triton Inference Server | json |
[AI] Detect-acv | 0.0.1 | Make prediction on image using Tensorflow.js model trained and exported from Microsoft Azure Custom Vision | json |
[AI] Crop & TM | 0.0.1 | Crops results from '[AI] detect subflows' and classify cropped area(s) using Tensorflow.js model trained and exported from Google Teachable Machine. | json |
[IMG] Annotate | 0.0.1 | Annotates prediction results from an [AI] Detect subflow. (uses sharp) | json |
[IMG] Annotate [TF] | 0.0.1 | Annotates prediction results from an [AI] Detect subflow. (uses tfjs-node-gpu) | json |
[IMG] Thumbnails | 0.0.1 | Creates thumbnails of original image and annotated image. | json |
[IMG] Crop detected object(s) | 0.0.1 | Crops detected object(s) from the original image. | json |
[API] Get Token | 0.0.1 | Retrieve token from Tequ-API. | json |
[API] Format data | 0.0.1 | Format data from [IMG] Annotate. | json |
[API] Send image | 0.0.1 | Send image to Tequ-API. Saves image to local filesystem if API is not available. | json |
[API] Add video clip | 0.0.1 | Send video clip to Tequ-API. Saves video clip to local filesystem if API is not available. | json |
[API] Operation | 0.0.1 | N/A | json |
Parse JPEG | 0.0.1 | Parse and pre-process JPEG image or image stream (uses sharp-library) | json |
Parse JPEG [TF] | 0.0.1 | Parse and pre-process JPEG image or image stream (uses tfjs-node-gpu) | json |
Pre-process [TF] | 0.0.1 | Pre-process image for Triton Inference Server using tfjs-node-gpu | json |
Thumbnail [TF] | 0.0.1 | Create thumbnail using tfjs-node-gpu | json |
Pre-process | 0.0.1 | Pre-process image for Triton Inference Server using numjs and piscina | json |
Post-process | 0.0.1 | Post-process response from Triton request | json |
gst jetson | 0.0.1 | Launch GStreamer pipeline to read data from Basler cameras | json |
gst wd | 0.0.1 | Watchdog to supervise that GStreamer pipeline is running | json |
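The subflows above are typically chained as camera → Parse JPEG → [AI] Detect → [IMG] Annotate → [API] Send image. As a rough illustration of where custom logic can sit in that chain, the function node below drops low-confidence detections before annotation; the assumed result shape (an array of objects with class, score and bbox) is a guess for illustration, not the documented output of the detect subflows.

```javascript
// Hypothetical Node-RED function node between an [AI] Detect subflow and [IMG] Annotate:
// keep only confident detections. The { class, score, bbox } shape is an assumption,
// not the subflows' documented output.
const MIN_SCORE = 0.6;
const detections = Array.isArray(msg.payload) ? msg.payload : [];
msg.payload = detections.filter(d => d.score >= MIN_SCORE);
msg.count = msg.payload.length;   // handy for downstream switch/debug nodes
if (msg.count === 0) {
    return null;                  // drop frames with nothing worth annotating or sending
}
return msg;
```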
Flow | Version | Desc | JSON |
---|---|---|---|
crop-ca | 0.0.1 | Process and crop Cloud Annotations project files. Sorts images into folders named by annotation label (see the sketch after this table). | json |
example-ai-detect-v2 | 0.0.1 | Use [AI] Detect-v2 and [IMG] Annotate | json |
example-ai-detect-sm | 0.0.1 | Use [AI] Detect-sm and [IMG] Annotate | json |
example-ai-detect-triton | 0.0.1 | Use [AI] Detect-triton and [IMG] Annotate | json |
example-receive-video-and-send | 0.0.1 | Receive video clips and send them to Tequ-API | json |
example-ai-detect-acv | 0.0.1 | Use [AI] Detect-acv and [IMG] Annotate [TF] | json |
example-ai-detect-custom-vision-docker | 0.0.1 | Use Custom Vision Docker & [IMG] Annotate [TF] | json |
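To illustrate the idea behind the crop-ca flow referenced above, the standalone Node.js sketch below copies images into folders named after their annotation label. The _annotations.json structure used here is an assumption about the Cloud Annotations export format, and the script is a simplified stand-in for the actual flow.

```javascript
// Sketch only: sort annotated images into per-label folders.
// The _annotations.json layout below is an assumed Cloud Annotations export format.
const fs = require("fs");
const path = require("path");

const projectDir = process.argv[2] || ".";
const project = JSON.parse(
    fs.readFileSync(path.join(projectDir, "_annotations.json"), "utf8")
);

for (const [imageFile, boxes] of Object.entries(project.annotations || {})) {
    for (const box of boxes) {
        const labelDir = path.join(projectDir, "sorted", box.label);
        fs.mkdirSync(labelDir, { recursive: true });                 // create label folder if missing
        fs.copyFileSync(path.join(projectDir, imageFile),
                        path.join(labelDir, imageFile));             // copy image under its label
    }
}
```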