
Commit 9ac38f0

Update VLM example code in README (#1466)
Add `pipe.start_chat()` to the VLM example. Without it, inference with several models results in empty outputs. The call can be removed if it becomes the default behavior for VLM models, but for now the most basic example should work with all supported models. Also changed printing the `VLMDecodedResults` object to extracting the generated text and printing that (see comment from Ilya).
1 parent 09a5426 commit 9ac38f0

File tree

1 file changed: +3 −1 lines changed

README.md

Lines changed: 3 additions & 1 deletion
@@ -133,13 +133,15 @@ from PIL import Image
 
 # Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
 pipe = ov_genai.VLMPipeline("./InternVL2-1B", "CPU")
+pipe.start_chat()
 
 image = Image.open("dog.jpg")
 image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
 image_data = ov.Tensor(image_data)
 
 prompt = "Can you describe the image?"
-print(pipe.generate(prompt, image=image_data, max_new_tokens=100))
+result = pipe.generate(prompt, image=image_data, max_new_tokens=100)
+print(result.texts[0])
 ```
 
 ### Run generation using VLMPipeline in C++
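For reference, a minimal sketch of how the full README snippet reads after this change. The import lines are not part of the diff hunk; they are assumed here from the names used (`np`, `ov`, `ov_genai`, `Image`) and from the `from PIL import Image` context shown in the hunk header, so check the README itself for the exact preamble.

```python
# Assumed imports (not shown in the diff hunk above).
import numpy as np
import openvino as ov
import openvino_genai as ov_genai
from PIL import Image

# Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
pipe = ov_genai.VLMPipeline("./InternVL2-1B", "CPU")
pipe.start_chat()  # added by this commit; avoids empty outputs with several models

image = Image.open("dog.jpg")
# Convert the image to a uint8 NHWC tensor that the pipeline accepts
image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
image_data = ov.Tensor(image_data)

prompt = "Can you describe the image?"
# Print only the generated text instead of the whole VLMDecodedResults object
result = pipe.generate(prompt, image=image_data, max_new_tokens=100)
print(result.texts[0])
```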
