Replies: 1 comment
- Using the onnxruntime-gpu build is not recommended unless the input image size is fixed.
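A common workaround for the fixed-size requirement (a sketch, not code from this project) is to letterbox every input onto one fixed canvas so the CUDA execution provider always sees the same input shape and can reuse its tuned kernels and workspaces. The helper name `pad_to_fixed` and the 1024×1024 target are illustrative assumptions:

```python
import numpy as np

def pad_to_fixed(img, target_h=1024, target_w=1024):
    """Scale an HxWxC image to fit a fixed canvas, then zero-pad the rest,
    so every inference call uses the same input shape."""
    h, w = img.shape[:2]
    scale = min(target_h / h, target_w / w)
    new_h, new_w = int(h * scale), int(w * scale)
    # naive nearest-neighbor resize, to keep the sketch dependency-free
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.zeros((target_h, target_w, img.shape[2]), dtype=img.dtype)
    canvas[:new_h, :new_w] = resized
    return canvas, scale  # scale is needed to map boxes back to the original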
- I want to run onnxruntime on the GPU, so I installed onnxruntime-gpu==1.16.1 with pip and modified TableStructureRec/wired_table_rec/utils.py. In the OrtInferSession class in utils.py I changed the provider setup as follows:

  ```python
  providers = ['CUDAExecutionProvider']
  self.session = InferenceSession(str(model_path), sess_options=self.sess_opt, providers=providers)
  ```

  After running the demo, inference was actually slower, and GPU memory usage climbed to 18 GB during inference. The same code runs fine on CPU. How can I fix this?
  The machine configuration is:
  GPU: V100 32G
  OS: Ubuntu 18.04
  CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz (12 cores)
  CUDA: 11.7
  onnxruntime-gpu: 1.16.1
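One direction worth trying (a sketch under assumptions, not a confirmed fix) is to pass CUDA execution-provider options that avoid exhaustive per-shape convolution tuning and limit arena growth, and to keep a CPU fallback; the specific values here are illustrative and need tuning for the actual model:

```python
# Hypothetical tuning of CUDAExecutionProvider options; the values are
# illustrative assumptions, not settings confirmed by the project.
providers = [
    ("CUDAExecutionProvider", {
        # skip the exhaustive cuDNN algo search that re-runs for every new input shape
        "cudnn_conv_algo_search": "HEURISTIC",
        # grow the memory arena only by what each request actually needs
        "arena_extend_strategy": "kSameAsRequested",
        # cap the arena at 4 GB (bytes); adjust for your model
        "gpu_mem_limit": 4 * 1024 * 1024 * 1024,
    }),
    "CPUExecutionProvider",  # fallback if the CUDA EP fails to load
]
# self.session = InferenceSession(str(model_path), sess_options=self.sess_opt,
#                                 providers=providers)
```

Session creation is left commented out since it requires a GPU machine; the list above drops in where the original `providers = ['CUDAExecutionProvider']` was.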