
Commit ac72934

improve doc around supported tasks and accelerator options
1 parent: 096d94b

File tree: 1 file changed (+8, -2 lines)


optimum/intel/pipelines/pipeline_base.py

Lines changed: 8 additions & 2 deletions
@@ -171,6 +171,12 @@ def pipeline(
             The task defining which pipeline will be returned. Currently accepted tasks are:

             - `"text-generation"`: will return a [`TextGenerationPipeline`]:.
+            - `"fill-mask"`: will return a [`FillMaskPipeline`].
+            - `"question-answering"`: will return a [`QuestionAnsweringPipeline`].
+            - `"image-classificatio"`: will return a [`ImageClassificationPipeline`].
+            - `"text-classification"`: will return a [`TextClassificationPipeline`].
+            - `"token-classification"`: will return a [`TokenClassificationPipeline`].
+            - `"audio-classification"`: will return a [`AudioClassificationPipeline`].

         model (`str` or [`PreTrainedModel`], *optional*):
             The model that will be used by the pipeline to make predictions. This can be a model identifier or an
@@ -185,8 +191,8 @@ def pipeline(
             is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string).
             However, if `config` is also not given or not a string, then the default tokenizer for the given `task`
             will be loaded.
-        accelerator (`str`, *optional*, defaults to `"ipex"`):
-            The optimization backends, choose from ["ipex", "inc", "openvino"].
+        accelerator (`str`, *optional*):
+            The optimization backends, choose from ["ipex"].
         use_fast (`bool`, *optional*, defaults to `True`):
             Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]).
         torch_dtype (`str` or `torch.dtype`, *optional*):
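For context, here is a minimal usage sketch of the documented options (task plus accelerator). It assumes optimum-intel is installed with IPEX support and that `pipeline` is importable from `optimum.intel.pipelines`, the package containing the file touched by this commit; the model ids below are illustrative placeholders and are not part of the change.

# Minimal sketch, not part of the commit: exercises two of the tasks listed in the
# updated docstring with the "ipex" accelerator. Model ids are placeholders only.
from optimum.intel.pipelines import pipeline

# "text-generation" is one of the tasks listed in the docstring.
generator = pipeline("text-generation", model="gpt2", accelerator="ipex")
print(generator("Intel Extension for PyTorch", max_new_tokens=20))

# "text-classification" is another listed task; the same accelerator option applies.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    accelerator="ipex",
)
print(classifier("This pipeline runs on the IPEX backend."))

Per the updated docstring, "ipex" is the only value listed for `accelerator` here, so passing "inc" or "openvino" should not be expected to work through this entry point.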
