Labels: question (Further information is requested)
Description
Question
Hi everyone!
I’m trying to speed up CPU inference for a Flair sequence tagger (flair/ner-lang-large, based on XLM-R) by exporting it to ONNX.
I followed the guide here and managed to export the model to ONNX successfully. However, I'm not sure how to actually run inference in an ONNX environment (using onnxruntime) and get the same predictions as with Flair.
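For reference, this is roughly how I'm driving the exported graph with onnxruntime directly. The file name `model.onnx` and the input tensor names are just what my export produced, so they may differ for you:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Tokenizer matching the XLM-R backbone of the tagger
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# "model.onnx" is a placeholder for the exported file
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# The actual input names of the exported graph can be inspected like this
print([i.name for i in session.get_inputs()])

encoded = tokenizer("George Washington went to Washington.", return_tensors="np")
outputs = session.run(
    None,
    {
        "input_ids": encoded["input_ids"],
        "attention_mask": encoded["attention_mask"],
    },
)

# outputs[0] is only the raw transformer output; Flair's subtoken pooling
# and the final linear/CRF decoding still have to be applied to get tags,
# which is exactly the part I'm missing.
print(outputs[0].shape)
```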
Does anyone have an example or code snippet showing how to perform inference with a Flair model in ONNX? My best guess at the Flair-side wiring is sketched below. I also tried TorchScript (JIT), but it gave me no speed improvement.
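From reading the Flair source, I suspect the intended route is to export only the transformer embeddings with `export_onnx()` and keep Flair's own pooling and decoding, so predictions should match the original model. I'm not certain I have the API right; the file name and example sentence below are placeholders:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-lang-large")  # model id from my setup

# A few representative sentences are needed to trace the graph during export
examples = [Sentence("George Washington went to Washington.")]

# export_onnx should replace only the transformer forward pass with an
# ONNX-backed version; pooling and linear/CRF decoding stay in Flair
onnx_embeddings = tagger.embeddings.export_onnx(
    "embeddings.onnx",
    examples,
    providers=["CPUExecutionProvider"],
)

# Swap in the ONNX embeddings and predict as usual
tagger.embeddings = onnx_embeddings

sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```

Is this the right approach, or am I missing a step?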
Any reference or working example would be greatly appreciated🙏