The layout detection config is the official default one: layoutlmv3_base_inference.txt

How can I reduce the GPU utilization of layout detection? For example, which batch setting should I turn down? (Though it seems a single document doesn't really need batching — I get my results like this: single_page_res = layout_model(np.array(image)[:,:,::-1], ignore_catids=[])['layout_dets'])

Or is there some other config I should change to lower GPU utilization?

For example, I tried:
1. Limiting memory (memory is nowhere near the limit on a 32 GB V100): torch.cuda.set_per_process_memory_fraction(0.5, device=0)  # for the first GPU
2. Limiting the batch size (probably the main factor?), but since I only feed in one document at inference time, this setting probably has no effect anyway: cfg.SOLVER.IMS_PER_BATCH = 1  # process one image per batch
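For single-image inference, `SOLVER.IMS_PER_BATCH` is a training-time setting and indeed has no effect, and `set_per_process_memory_fraction` only caps allocation, not compute. What usually does reduce per-page GPU work for detectron2-style models is lowering the test-time input resolution (e.g. `cfg.INPUT.MIN_SIZE_TEST` / `cfg.INPUT.MAX_SIZE_TEST` in the config), or simply downscaling the page image before calling the model. A minimal sketch of the host-side option — `downscale_for_layout` is a hypothetical helper, not part of the official pipeline:

```python
import numpy as np

def downscale_for_layout(image: np.ndarray, max_side: int = 1024) -> np.ndarray:
    """Shrink an H x W x 3 image so its longest side is at most `max_side`,
    using integer striding. A smaller input means less GPU compute per page."""
    h, w = image.shape[:2]
    step = max(1, int(np.ceil(max(h, w) / max_side)))
    return image[::step, ::step]

# Example: a 3000x2000 page scan is strided down below the 1024-pixel cap.
page = np.zeros((3000, 2000, 3), dtype=np.uint8)
small = downscale_for_layout(page)
# small.shape == (1000, 667, 3)
```

The downscaled array can then be fed in exactly as before, e.g. `layout_model(small[:, :, ::-1], ignore_catids=[])`. Note that detection quality on small text may degrade as resolution drops, so the cap is a trade-off to tune.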
To fix your trouble, check this solution — maybe it will solve your problem.