# Evaluation/Prediction performance enhancements and ONNX support
Released by ThilinaRajapakse on 24 Sep 09:52
## Mixed precision support for evaluation and prediction
Mixed precision (fp16) inference is now supported for evaluation and prediction in the following models:
- ClassificationModel
- ConvAI
- MultiModalClassificationModel
- NERModel
- QuestionAnsweringModel
- Seq2Seq
- T5Model
You can disable fp16 by setting `fp16 = False` in the `model_args`.
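As a minimal sketch of toggling the flag, assuming a `ClassificationModel` with `roberta-base` weights (the model type/name and input text are illustrative placeholders, not prescribed by this release):

```python
from simpletransformers.classification import ClassificationModel

# fp16 inference is enabled by default; pass fp16=False in model_args to disable it.
# "roberta" / "roberta-base" and the input text below are placeholder choices.
model = ClassificationModel(
    "roberta",
    "roberta-base",
    args={"fp16": True},  # set to False to fall back to full precision
    use_cuda=True,
)

# Both evaluation and prediction honor the fp16 setting.
predictions, raw_outputs = model.predict(["An example sentence to classify"])
```

The same `model_args` dict works for the other models listed above.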
## Multi-GPU support for evaluation and prediction
Set the number of GPUs with `n_gpu` in the `model_args` (see the sketch after the model list below).
Currently supported in the following models:
- ClassificationModel
- ConvAI
- MultiModalClassificationModel
- NERModel
- QuestionAnsweringModel
- Seq2Seq
- T5Model
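For example, a sketch of two-GPU prediction with a `ClassificationModel` (the model choice, GPU count, and texts are placeholders):

```python
from simpletransformers.classification import ClassificationModel

# Spread evaluation/prediction across two GPUs by setting n_gpu in model_args.
# The model type/name and the texts below are illustrative.
model = ClassificationModel(
    "roberta",
    "roberta-base",
    args={"n_gpu": 2},
)

predictions, raw_outputs = model.predict(
    ["First text to classify", "Second text to classify"]
)
```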
## Native ONNX support for Classification and NER tasks (Beta)
Please note that ONNX support is still experimental.
See docs for details.
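As a rough sketch of the intended flow (since this feature is beta, treat the method and flag names as per the project docs; the output directory name and inputs are placeholders):

```python
from simpletransformers.classification import ClassificationModel

# Export a (trained) classification model to ONNX, then reload it for inference.
model = ClassificationModel("roberta", "roberta-base")
model.convert_to_onnx("onnx_outputs")  # output directory name is arbitrary

# Load the exported model for ONNX-backed prediction.
onnx_model = ClassificationModel(
    "roberta",
    "onnx_outputs",
    args={"onnx": True},
)
predictions, raw_outputs = onnx_model.predict(["Text to classify with the ONNX model"])
```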