Merged
Original file line number Diff line number Diff line change
@@ -150,7 +150,7 @@ packages:

### (Optional) Step 6: Install OpenVINO™ GenAI (only for Ubuntu)

To use [gvagenai element](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer/elements/gvagenai.html)
To use [gvagenai element](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/elements/gvagenai.html)
you need to install the [OpenVINO GenAI archive](https://docs.openvino.ai/2026/get-started/install-openvino/install-openvino-genai.html) package.

<!--hide_directive::::{tab-set}
1 change: 0 additions & 1 deletion docs/source/dev_guide/custom_processing.md
@@ -129,7 +129,6 @@ to the

**Current Support Limitations**


At this time, only **detection** and **classification** tasks are supported:

- **Object Detection** (`GstAnalyticsODMtd`) - works only with the
2 changes: 1 addition & 1 deletion docs/source/dev_guide/gpu_device_selection.md
@@ -9,7 +9,7 @@ This article describes how to select a GPU device on a multi-GPU system.
When video decoding runs on the CPU and inference runs on the GPU, the
`device` property in inference elements lets you select the GPU device
according to the
[OpenVINO™ GPU device naming convention](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention)
[OpenVINO™ GPU device naming convention](https://docs.openvino.ai/2026/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html#device-naming-convention)
, with devices enumerated as **GPU.0**, **GPU.1**, etc., for example:

```bash
4 changes: 2 additions & 2 deletions docs/source/dev_guide/lvms.md
@@ -2,7 +2,7 @@

This article explains how to prepare models based on the [Hugging Face](https://huggingface.co/welcome) [`transformers`](https://github.com/huggingface/transformers) library for integration with the Deep Learning Streamer pipeline.

Many transformer-based models can be converted to OpenVINO™ IR format using [optimum-cli](https://huggingface.co/docs/optimum-intel/en/openvino/export). DL Streamer supports selected Hugging Face architectures for tasks such as image classification, object detection, audio transcription, and more. See the [Supported Models](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer/supported_models.html) table for details.
Many transformer-based models can be converted to OpenVINO™ IR format using [optimum-cli](https://huggingface.co/docs/optimum-intel/en/openvino/export). DL Streamer supports selected Hugging Face architectures for tasks such as image classification, object detection, audio transcription, and more. See the [Supported Models](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/supported_models.html) table for details.

> **NOTE:** The instructions below are comprehensive, but for convenience, we recommend using the
> [download_hf_models.py](https://github.com/open-edge-platform/dlstreamer/blob/main/scripts/download_models/download_hf_models.py)
@@ -12,7 +12,7 @@ Many transformer-based models can be converted to OpenVINO™ IR format using [o

## Optimum-Intel Supported Models

The list available [here](https://huggingface.co/docs/optimum-intel/en/openvino/models) includes models that can be converted to IR format with a single `optimum-cli` command. If a model architecture is [supported by DL Streamer](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer/supported_models.html#supported-architectures), it can typically be prepared as follows:
The list available [here](https://huggingface.co/docs/optimum-intel/en/openvino/models) includes models that can be converted to IR format with a single `optimum-cli` command. If a model architecture is [supported by DL Streamer](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/supported_models.html#supported-architectures), it can typically be prepared as follows:

```bash
optimum-cli export openvino --model provider_id/model_id --weight-format=int8 output_path
4 changes: 2 additions & 2 deletions docs/source/dev_guide/model_preparation.md
@@ -43,7 +43,7 @@ You can either:
[Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo)
(already in the IR format).
2. Use
[OpenVINO™ Toolkit Model Conversion](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html)
[OpenVINO™ Toolkit Model Conversion](https://docs.openvino.ai/2026/openvino-workflow/model-preparation/convert-model-to-ir.html)
method for converting your model from the training framework format
(e.g., TensorFlow) to the IR format.
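The conversion route in option 2 can be sketched with OpenVINO's Python conversion API. This is a minimal sketch, not the documented procedure verbatim: the file names are placeholders, and it assumes `pip install openvino`.

```python
# Hedged sketch: convert a training-framework model to OpenVINO IR.
# File names are placeholders; requires `pip install openvino`.
from pathlib import Path


def ir_companion_bin(xml_path: str) -> str:
    """IR models come as a pair: model.xml (topology) + model.bin (weights)."""
    return str(Path(xml_path).with_suffix(".bin"))


def convert_to_ir(source_model: str, output_xml: str = "model.xml") -> str:
    import openvino as ov  # imported lazily so the sketch parses without the package
    model = ov.convert_model(source_model)  # e.g. an .onnx file or a TF SavedModel dir
    ov.save_model(model, output_xml)        # writes the .xml plus its .bin alongside
    return output_xml
```

Note that `ov.save_model` compresses weights to FP16 by default, which is usually what you want for deployment.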

@@ -52,7 +52,7 @@ When using a pre-trained model from Open Model Zoo, consider using the
tool to facilitate the model downloading process.

When converting a custom model, you can optionally utilize the
[Post-Training Model Optimization and Compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization.html)
[Post-Training Model Optimization and Compression](https://docs.openvino.ai/2026/openvino-workflow/model-optimization.html)
to convert the model into a more performance-efficient and
hardware-friendly representation. For example, you can quantize it from 32-bit
floating-point precision to 8-bit integer precision. This gives a
2 changes: 1 addition & 1 deletion docs/source/dev_guide/openvino_custom_operations.md
@@ -22,7 +22,7 @@ Before using custom operations, you need:
1. **OpenVINO™ Extension Library** - A compiled `.so` file (on Linux) containing the implementation of custom operations
2. **Model with Custom Operations** - An OpenVINO™ IR model that uses the custom operations defined in the extension library

For information on creating OpenVINO™ extension libraries, refer to the [OpenVINO™ Extensibility documentation](https://docs.openvino.ai/2025/documentation/openvino-extensibility.html).
For information on creating OpenVINO™ extension libraries, refer to the [OpenVINO™ Extensibility documentation](https://docs.openvino.ai/2026/documentation/openvino-extensibility.html).

## Usage

2 changes: 1 addition & 1 deletion docs/source/dev_guide/python_bindings.md
@@ -70,7 +70,7 @@ The *gvapython* element is implemented in C as a regular GStreamer
element and invokes Python functions through the C API (Python.h).
See the samples for the *gvapython* element in
[face_detection_and_classification](https://github.com/dlstreamer/dlstreamer/tree/main/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification) folder.
[face_detection_and_classification](https://github.com/open-edge-platform/dlstreamer/tree/main/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification) folder.
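A callback module for *gvapython* can be as small as a single function. The sketch below uses illustrative names (the element's `module`/`function` properties would point at them, and the function is assumed to receive a `gstgva.VideoFrame`); it is not taken from the samples.

```python
# my_callbacks.py -- hedged sketch of a gvapython callback module.
# The gvapython element calls the function below once per buffer;
# returning True keeps the buffer flowing downstream.

def process_frame(frame) -> bool:
    """Count the detection regions that upstream elements attached."""
    count = sum(1 for _ in frame.regions())  # frame is a gstgva.VideoFrame at runtime
    print(f"regions in frame: {count}")
    return True
```

A pipeline would then reference it along the lines of `... ! gvapython module=my_callbacks function=process_frame ! ...`.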
## 4. Performance considerations
2 changes: 1 addition & 1 deletion docs/source/dev_guide/yolo_models.md
@@ -5,7 +5,7 @@ integration with the Deep Learning Streamer pipeline.

## Ultralytics Model Preparation

All models supported by the [ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) library can be converted to OpenVINO™ IR format by using the [Ultralytics exporter](https://docs.ultralytics.com/integrations/openvino/). DL Streamer supports many Ultralytics YOLO architectures for tasks such as zero-shot object detection, oriented object detection, segmentation, pose estimation, and more. See the [Supported Models](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer/supported_models.html) table for details.
All models supported by the [ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) library can be converted to OpenVINO™ IR format by using the [Ultralytics exporter](https://docs.ultralytics.com/integrations/openvino/). DL Streamer supports many Ultralytics YOLO architectures for tasks such as zero-shot object detection, oriented object detection, segmentation, pose estimation, and more. See the [Supported Models](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/supported_models.html) table for details.

> **NOTE:** The instructions below are comprehensive, but for convenience, we recommend using the
> [download_ultralytics_models.py](https://github.com/open-edge-platform/dlstreamer/blob/main/scripts/download_models/download_ultralytics_models.py)
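For a single model, the export step can be sketched in Python as follows. The weight-file name and the derived output-directory name follow common Ultralytics conventions and are assumptions here; the sketch requires `pip install ultralytics`.

```python
# Hedged sketch: export an Ultralytics YOLO checkpoint to OpenVINO IR.
# The weight name is illustrative; requires `pip install ultralytics`.
from pathlib import Path


def ir_output_dir(weights: str) -> str:
    """Directory the Ultralytics exporter conventionally writes the IR to."""
    return f"{Path(weights).stem}_openvino_model"


def export_to_openvino(weights: str = "yolov8n.pt") -> str:
    from ultralytics import YOLO  # lazy import: the sketch parses without the package
    model = YOLO(weights)  # downloads the checkpoint on first use
    return model.export(format="openvino")  # returns the path of the exported IR
```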
4 changes: 2 additions & 2 deletions docs/source/get_started/tutorial.md
@@ -217,7 +217,7 @@ Ubuntu.
If you want to use your own models, you first need to convert them
to the IR (Intermediate Representation) format. For detailed
instructions on how to convert models, see
[here](https://docs.openvino.ai/2025/openvino-workflow/model-preparation/convert-model-to-ir.html)
[here](https://docs.openvino.ai/2026/openvino-workflow/model-preparation/convert-model-to-ir.html)

4. Export the example video file path:

@@ -333,7 +333,7 @@ Follow these steps if you chose Option #2 (Docker) in Install Guide Ubuntu.

If you want to use your own models, you first need to convert them
to the IR (Intermediate Representation) format. For detailed
instructions on how to convert models, see [here](https://docs.openvino.ai/2026/openvino-workflow/model-preparation/convert-model-to-ir.html).
instructions on how to convert models, look [here](https://docs.openvino.ai/2026/openvino-workflow/model-preparation/convert-model-to-ir.html).

6. In the container, export the example video file path:

6 changes: 3 additions & 3 deletions docs/source/index.md
@@ -21,7 +21,7 @@ or at the Edge. DL Streamer consists of:
for designing, creating, building, and running media analytics
pipelines. It includes C++ and Python APIs.
- [Deep Learning Streamer Pipeline
Server](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/microservices/dlstreamer-pipeline-server)
Server](https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2026.0.0/microservices/dlstreamer-pipeline-server)
for deploying and scaling media analytics pipelines as
microservices on one or many compute nodes. It includes REST APIs
for pipeline management.
@@ -82,7 +82,7 @@ deploy, and benchmark. They require:
**DL Streamer** uses the OpenVINO™ Runtime inference back-end,
optimized for Intel hardware platforms, and supports over
[70 neural network models pre-trained by Intel and the open-source community](https://github.com/open-edge-platform/dlstreamer/blob/main/docs/scripts/supported_models.json), and models converted
[from other training frameworks](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html).
[from other training frameworks](https://docs.openvino.ai/2026/openvino-workflow/model-preparation/convert-model-to-ir.html).
These models include object detection, object classification, human pose
detection, sound classification, semantic segmentation, and other use
cases: SSD, MobileNet, YOLO, Tiny YOLO, EfficientDet, ResNet,
@@ -92,7 +92,7 @@ FasterRCNN, and other models.
reference apps for the most common media analytics use cases. They are
included in
[Deep Learning Streamer Pipeline Framework](https://github.com/open-edge-platform/dlstreamer/tree/main),
[Deep Learning Streamer Pipeline Server](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/microservices/dlstreamer-pipeline-server),
[Deep Learning Streamer Pipeline Server](https://github.com/open-edge-platform/edge-ai-libraries/tree/release-2026.0.0/microservices/dlstreamer-pipeline-server),
[Open Visual Cloud](https://github.com/OpenVisualCloud), and
[Intel® Edge Software Hub](https://www.intel.com/content/www/us/en/edge-computing/edge-software-hub.html).
The samples demonstrate C++- and/or Python-based Action Recognition, Face Detection and