
Commit c18b399

Add a plugin for variable support in Markdown
Signed-off-by: mhelf-intel <monika.helfer@intel.com>

3 files changed: +9, -5 lines

docs/getting_started/quickstart/quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ Follow these steps to run the vLLM server or launch benchmarks on Gaudi using Do
 ```bash
 MODEL="Qwen/Qwen2.5-14B-Instruct" \
 HF_TOKEN="<your huggingface token>" \
-DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-|PT_VERSION|:latest"
+DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-{{ PT_VERSION }}:latest"
 ```

 5. Run the vLLM server using Docker Compose.
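With the macros plugin enabled (see the mkdocs.yaml change below), `{{ PT_VERSION }}` is expanded when the docs are built. A sketch of what the published command would look like, assuming `PT_VERSION` resolves to `2.7.1` as set in this commit; the `|Version|` placeholder is handled by a separate substitution mechanism and is left unexpanded here:

```bash
# Assumed rendered output after macro expansion (not part of the diff):
MODEL="Qwen/Qwen2.5-14B-Instruct" \
HF_TOKEN="<your huggingface token>" \
DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-2.7.1:latest"
```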

docs/getting_started/quickstart/quickstart_configuration.md

Lines changed: 3 additions & 3 deletions
@@ -35,7 +35,7 @@ Set the preferred variable when running the vLLM server using Docker Compose, as
 ```bash
 MODEL="Qwen/Qwen2.5-14B-Instruct" \
 HF_TOKEN="<your huggingface token>" \
-DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-|PT_VERSION|:latest" \
+DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-{{ PT_VERSION }}:latest" \
 TENSOR_PARALLEL_SIZE=1 \
 MAX_MODEL_LEN=2048 \
 docker compose up

@@ -59,7 +59,7 @@ Set the preferred variable when running the vLLM server using Docker Compose, as
 ```bash
 MODEL="Qwen/Qwen2.5-14B-Instruct" \
 HF_TOKEN="<your huggingface token>" \
-DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-|PT_VERSION|:latest" \
+DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-{{ PT_VERSION }}:latest" \
 INPUT_TOK=128 \
 OUTPUT_TOK=128 \
 CON_REQ=16 \

@@ -76,7 +76,7 @@ This configuration allows you to launch the vLLM server and benchmark together.
 ```bash
 MODEL="Qwen/Qwen2.5-14B-Instruct" \
 HF_TOKEN="<your huggingface token>" \
-DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-|PT_VERSION|:latest" \
+DOCKER_IMAGE="vault.habana.ai/gaudi-docker/|Version|/ubuntu24.04/habanalabs/vllm-installer-{{ PT_VERSION }}:latest" \
 TENSOR_PARALLEL_SIZE=1 \
 MAX_MODEL_LEN=2048 \
 INPUT_TOK=128 \

mkdocs.yaml

Lines changed: 5 additions & 1 deletion
@@ -51,6 +51,7 @@ plugins:
 - search
 - autorefs
 - awesome-nav
+- macros
 # For API reference generation
 - api-autonav:
     modules: ["vllm_gaudi"]

@@ -121,4 +122,7 @@ extra_javascript:
 # Makes the url format end in .html rather than act as a dir
 # So index.md generates as index.html and is available under URL /index.html
 # https://www.mkdocs.org/user-guide/configuration/#use_directory_urls
-use_directory_urls: false
+use_directory_urls: false
+
+extra:
+  PT_VERSION: "2.7.1"
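The new `macros` entry enables the mkdocs-macros plugin, which runs each Markdown page through Jinja2 and exposes keys under `extra:` as template variables; this is what turns `{{ PT_VERSION }}` in the quickstart pages into `2.7.1` at build time. A minimal local check, assuming the plugin is provided by the mkdocs-macros-plugin package on PyPI:

```bash
# Install the package that provides the "macros" plugin entry in mkdocs.yaml,
# then build the docs; {{ PT_VERSION }} placeholders should render as 2.7.1.
pip install mkdocs-macros-plugin
mkdocs build
```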
