
Commit a651292

Fix optimum-cli command for VLM example in README (#1348)

Authored by helena-intel, AlexKoff88, nikita-savelyev-v, and ilya-lavrenov

With the existing command users get an error: `Channel size 4304 should be divisible by size of group 128`.

Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

1 parent 2a52e86 commit a651292
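The error quoted above comes from group-wise int4 weight compression, which splits each weight channel into fixed-size groups, so the channel size must divide evenly by the group size. A plain-arithmetic sketch (not tied to any OpenVINO API; the numbers come from the error message) of why MiniCPM-V-2_6's channel size of 4304 fails with a group size of 128:

```python
channel_size = 4304  # channel size reported in the error message
group_size = 128     # group size reported in the error message

# Group-wise quantization needs the channel to split into whole groups
print(channel_size % group_size)  # 80 -> not divisible, hence the error

# Group sizes that would divide 4304 evenly (4304 = 2**4 * 269)
valid = [g for g in range(1, channel_size + 1) if channel_size % g == 0]
print(valid)  # [1, 2, 4, 8, 16, 269, 538, 1076, 2152, 4304]
```

Because 269 is prime, the only workable group sizes are small powers of two up to 16, which is why this commit switches the example to a different model rather than tuning the compression settings.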

File tree: 1 file changed (+6, -6 lines)


README.md (6 additions, 6 deletions)

````diff
@@ -107,12 +107,12 @@ For more examples check out our [Generative AI workflow](https://docs.openvino.a
 
 ### Converting and compressing the model from Hugging Face library
 
-```sh
-#(Basic) download and convert to OpenVINO MiniCPM-V-2_6 model
-optimum-cli export openvino --model openbmb/MiniCPM-V-2_6 --trust-remote-code --weight-format fp16 MiniCPM-V-2_6
+To convert the [OpenGVLab/InternVL2-1B](https://huggingface.co/OpenGVLab/InternVL2-1B) model, `timm` and `einops` are required: `pip install timm einops`.
 
-#(Recommended) Same as above but with compression: language model is compressed to int4, other model components are compressed to int8
-optimum-cli export openvino --model openbmb/MiniCPM-V-2_6 --trust-remote-code --weight-format int4 MiniCPM-V-2_6
+```sh
+# Download and convert the OpenGVLab/InternVL2-1B model to OpenVINO with int4 weight-compression for the language model
+# Other components are compressed to int8
+optimum-cli export openvino -m OpenGVLab/InternVL2-1B --trust-remote-code --weight-format int4 InternVL2-1B
 ```
 
 ### Run generation using VLMPipeline API in Python
@@ -132,7 +132,7 @@ import openvino_genai as ov_genai
 from PIL import Image
 
 # Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
-pipe = ov_genai.VLMPipeline("./MiniCPM-V-2_6/", "CPU")
+pipe = ov_genai.VLMPipeline("./InternVL2-1B", "CPU")
 
 image = Image.open("dog.jpg")
 image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
````
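The `image_data` line in the second hunk packs the image into the 1×H×W×3 (batch, height, width, channels) uint8 layout the pipeline example feeds in. A minimal numpy-only sketch of the same reshape, with synthetic pixel data standing in for `dog.jpg`:

```python
import numpy as np

# Synthetic 3x4 RGB image standing in for Image.open("dog.jpg") (hypothetical data)
height, width = 3, 4
flat_pixels = np.arange(height * width * 3)  # image.getdata() yields flat per-pixel values

# Same transform as the README: batch of one image, NHWC layout, uint8
image_data = np.array(flat_pixels).reshape(1, height, width, 3).astype(np.uint8)
print(image_data.shape, image_data.dtype)  # (1, 3, 4, 3) uint8
```

Note that `image.size` in PIL is `(width, height)`, which is why the README indexes `image.size[1]` (height) before `image.size[0]` (width) in the reshape.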
