https://github.com/PaddlePaddle/PaddleSlim/tree/develop/example/auto_compression/ocr
Following that recipe, I switched the dataset to ICDAR2015 and used a pretrained ResNet50 model (only the model config needs to be changed). The compression runs successfully: accuracy is basically unchanged, the speed drops to 1/4, and an Inference model is produced. Converting that model to ONNX then fails because the quantization configuration file (calibration_table.txt) is missing, so the only remaining option is TensorRT-based inference with the Paddle Inference model, which in the same environment runs at almost the same speed as the model before quantization (a minimal sketch of this TensorRT fallback is included below).
Conversion command:
!paddle2onnx --model_dir /home/aistudio/PaddleSlim/example/auto_compression/ocr/save_quant_ppocr_r50_det/ --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file /home/aistudio/work/ppocr_r50_db_det_slim.onnx --opset_version 13 --enable_dev_version True --deploy_backend tensorrt --enable_onnx_checker True
Error message: calibration_table.txt is missing.
In short, the quantized model does not provide any speed-up for OCR.
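For reference, the TensorRT fallback mentioned above presumably looks roughly like the minimal Paddle Inference sketch below. The model paths are taken from the conversion command; the input shape, GPU memory pool size, and TensorRT engine settings (min_subgraph_size, workspace size) are illustrative assumptions rather than values from the PaddleSlim example.

import numpy as np
import paddle.inference as paddle_infer

# Quantized detection model exported by the auto-compression example (paths from this issue).
model_dir = "/home/aistudio/PaddleSlim/example/auto_compression/ocr/save_quant_ppocr_r50_det"
config = paddle_infer.Config(model_dir + "/inference.pdmodel",
                             model_dir + "/inference.pdiparams")
config.enable_use_gpu(1000, 0)  # 1000 MB initial GPU memory pool on GPU 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=5,                             # assumed value, not taken from the example
    precision_mode=paddle_infer.PrecisionType.Int8,
    use_static=False,
    use_calib_mode=False,                            # scales come from the quantized model, not a calibration table
)
predictor = paddle_infer.create_predictor(config)

# Dummy 1x3x960x960 input just to exercise the engine; a real run would feed preprocessed images.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
img = np.random.rand(1, 3, 960, 960).astype("float32")
input_handle.reshape([1, 3, 960, 960])
input_handle.copy_from_cpu(img)
predictor.run()
output = predictor.get_output_handle(predictor.get_output_names()[0]).copy_to_cpu()
print(output.shape)

If this path runs no faster than the FP32 model, it may be worth checking the Paddle Inference logs to confirm that the TensorRT subgraphs are actually built in INT8 rather than falling back to FP32.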
https://github.com/PaddlePaddle/Paddle2ONNX/tree/model_zoo/hardwares/tensorrt states that a PaddleSlim quantized model consists of three files: a model file such as model.pdmodel or model, a weights file such as model.pdiparams or __params__, and a scale file such as out_scale.txt. However, no out_scale.txt file was generated here.
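A quick way to confirm which of those files were actually exported (the directory path is the one from the conversion command above; the expected file names come from the Paddle2ONNX TensorRT notes and the paddle2onnx error message):

import os

model_dir = "/home/aistudio/PaddleSlim/example/auto_compression/ocr/save_quant_ppocr_r50_det/"
print("Exported files:", sorted(os.listdir(model_dir)))

# Scale files mentioned by the Paddle2ONNX TensorRT docs and the conversion error:
for name in ("out_scale.txt", "calibration_table.txt"):
    print(name, "present" if os.path.exists(os.path.join(model_dir, name)) else "missing")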
@Jiang-Jia-Jun could you please take a look?
This issue is stale because it has been open for 30 days with no activity.