diff --git a/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide.rst b/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide.rst
index dbc5d3c4416cd4..a6131c0df34a1a 100644
--- a/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide.rst
+++ b/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide.rst
@@ -123,7 +123,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `Python sample `__
+         `Python sample `__
 
       .. tab-item:: C++
          :sync: cpp
@@ -225,7 +225,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
             }
 
          For more information, refer to the
-         `C++ sample `__
+         `C++ sample `__
 
 .. dropdown:: Speech Recognition
@@ -271,7 +271,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `Python sample `__.
+         `Python sample `__.
 
      .. tab-item:: C++
         :sync: cpp
@@ -323,7 +323,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
             }
 
          For more information, refer to the
-         `C++ sample `__.
+         `C++ sample `__.
 
 .. dropdown:: Using GenAI in Chat Scenario
@@ -367,7 +367,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `Python sample `__.
+         `Python sample `__.
 
      .. tab-item:: C++
         :sync: cpp
@@ -415,7 +415,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `C++ sample `__
+         `C++ sample `__
 
 .. dropdown:: Using GenAI with Vision Language Models
@@ -483,7 +483,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `Python sample `__.
+         `Python sample `__.
 
      .. tab-item:: C++
        :sync: cpp
@@ -549,7 +549,7 @@ make sure to :doc:`install OpenVINO with GenAI <../../get-started/install-openvi
 
          For more information, refer to the
-         `C++ sample `__
+         `C++ sample `__
 
 |
@@ -803,7 +803,7 @@ runs prediction of the next K tokens, thus repeating the cycle.
 
          For more information, refer to the
-         `Python sample `__.
+         `Python sample `__.
 
      .. tab-item:: C++
@@ -859,7 +859,7 @@ runs prediction of the next K tokens, thus repeating the cycle.
 
          For more information, refer to the
-         `C++ sample `__
+         `C++ sample `__
 
diff --git a/docs/articles_en/learn-openvino/llm_inference_guide/ov-tokenizers.rst b/docs/articles_en/learn-openvino/llm_inference_guide/ov-tokenizers.rst
index 2064aa843a93d8..339d8e814ace73 100644
--- a/docs/articles_en/learn-openvino/llm_inference_guide/ov-tokenizers.rst
+++ b/docs/articles_en/learn-openvino/llm_inference_guide/ov-tokenizers.rst
@@ -336,7 +336,7 @@ Additional Resources
 
 * `OpenVINO Tokenizers repo `__
 * `OpenVINO Tokenizers Notebook `__
-* `Text generation C++ samples that support most popular models like LLaMA 3 `__
+* `Text generation C++ samples that support most popular models like LLaMA 3 `__
 * `OpenVINO GenAI Repo `__
 
diff --git a/docs/articles_en/openvino-workflow/running-inference/string-tensors.rst b/docs/articles_en/openvino-workflow/running-inference/string-tensors.rst
index 3032add547f8a8..3bd8c3e04499b0 100644
--- a/docs/articles_en/openvino-workflow/running-inference/string-tensors.rst
+++ b/docs/articles_en/openvino-workflow/running-inference/string-tensors.rst
@@ -12,7 +12,7 @@ Such a tensor is called a string tensor and can be passed as input or retrieved
 
 While this section describes basic API to handle string tensors,
 more practical examples that leverage both string tensors and OpenVINO tokenizer can be found in
-`GenAI Samples `__.
+`GenAI Samples `__.
 
 
 Representation
@@ -203,4 +203,4 @@ Additional Resources
 
 * Use `OpenVINO tokenizers `__ to produce models that use string tensors
   to work with textual information as pre- and post-processing for the large language models.
-* Check out `GenAI Samples `__ to see how string tensors are used in real-life applications.
+* Check out `GenAI Samples `__ to see how string tensors are used in real-life applications.
diff --git a/docs/notebooks/openvino-tokenizers-with-output.rst b/docs/notebooks/openvino-tokenizers-with-output.rst
index 354a51c4180fa6..3f78b5a82ff46f 100644
--- a/docs/notebooks/openvino-tokenizers-with-output.rst
+++ b/docs/notebooks/openvino-tokenizers-with-output.rst
@@ -548,6 +548,6 @@ Links
    Types `__
 -  `OpenVINO.GenAI repository with the C++ example of OpenVINO Tokenizers
-   usage `__
+   usage `__
 -  `HuggingFace Tokenizers Comparison
    Table `__
diff --git a/docs/notebooks/whisper-asr-genai-with-output.rst b/docs/notebooks/whisper-asr-genai-with-output.rst
index fa7139e528cb2a..bef25aa3eb8c6c 100644
--- a/docs/notebooks/whisper-asr-genai-with-output.rst
+++ b/docs/notebooks/whisper-asr-genai-with-output.rst
@@ -30,7 +30,7 @@ converts the models to OpenVINO™ IR format. To simplify the user experience,
 we will use `OpenVINO Generate
 API `__ for `Whisper automatic speech recognition
-scenarios `__.
+scenarios `__.
 
 Installation Instructions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -406,7 +406,7 @@ Run inference OpenVINO model with WhisperPipeline
 
 To simplify user experience we will use `OpenVINO Generate
-API `__.
+API `__.
 Firstly we will create pipeline with ``WhisperPipeline``. You can
 construct it straight away from the folder with the converted model. It
 will automatically load the ``model``, ``tokenizer``, ``detokenizer``
diff --git a/docs/notebooks/whisper-subtitles-generation-with-output.rst b/docs/notebooks/whisper-subtitles-generation-with-output.rst
index a2764b4622bf67..02bca63007b4f5 100644
--- a/docs/notebooks/whisper-subtitles-generation-with-output.rst
+++ b/docs/notebooks/whisper-subtitles-generation-with-output.rst
@@ -21,7 +21,7 @@ GitHub `repository `__.
 In this notebook, we will use Whisper model with `OpenVINO Generate
 API `__ for `Whisper automatic speech recognition
-scenarios `__
+scenarios `__
 to generate subtitles in a sample video. Additionally, we will use
 `NNCF `__ improving model performance by INT8 quantization.
 Notebook contains the following steps:
@@ -228,7 +228,7 @@ Whisper model.
 whisper_pipeline.png
 
 To simplify user experience we will use `OpenVINO Generate
-API `__.
+API `__.
 Firstly we will create pipeline with ``WhisperPipeline``. You can
 construct it straight away from the folder with the converted model. It
 will automatically load the ``model``, ``tokenizer``, ``detokenizer``
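The links updated above all point at the OpenVINO GenAI samples. As a rough, non-authoritative sketch of what the text-generation and chat samples referenced in ``genai-guide.rst`` cover, the snippet below uses the OpenVINO GenAI ``LLMPipeline`` Python API; the model folder name, device, and generation parameters are illustrative assumptions, not values taken from this patch.

.. code-block:: python

   # Illustrative sketch of the text-generation and chat flow that the
   # GenAI samples linked above demonstrate. The model folder, device and
   # generation parameters are placeholders, not values from this patch.
   import openvino_genai as ov_genai

   # Folder with a model already exported to OpenVINO IR (hypothetical path).
   pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0-int4-ov", "CPU")

   # One-shot text generation.
   print(pipe.generate("What is OpenVINO?", max_new_tokens=100))

   # Chat scenario: the pipeline keeps conversation history between calls.
   pipe.start_chat()
   print(pipe.generate("Summarize what the GenAI samples show.", max_new_tokens=100))
   print(pipe.generate("Which devices can they run on?", max_new_tokens=100))
   pipe.finish_chat()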
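Similarly, the Whisper notebook hunks describe building a ``WhisperPipeline`` straight from the folder with the converted model, which then loads the model, tokenizer, detokenizer, and default generation configuration. The sketch below outlines that flow under stated assumptions: a local model folder, a 16 kHz mono audio file, and ``librosa`` for loading audio; none of these names come from the notebooks themselves.

.. code-block:: python

   # Minimal sketch of the WhisperPipeline flow described in the notebook text.
   # The model folder and audio file below are placeholders.
   import librosa
   import openvino_genai as ov_genai

   # Constructing the pipeline from the exported model folder loads the model,
   # tokenizer, detokenizer and default generation configuration.
   pipe = ov_genai.WhisperPipeline("whisper-base-ov", "CPU")

   # Whisper pipelines expect raw speech sampled at 16 kHz.
   raw_speech, _ = librosa.load("sample.wav", sr=16000)
   result = pipe.generate(raw_speech.tolist(), task="transcribe", return_timestamps=True)

   print(result.texts[0])
   for chunk in result.chunks:
       # Timestamped chunks can be turned into subtitle entries.
       print(f"[{chunk.start_ts:.2f}s -> {chunk.end_ts:.2f}s] {chunk.text}")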