
Layout detection and formula detection on the same GPU card interfere with each other; how can this interference be reduced? #772

Open
520jefferson opened this issue Oct 23, 2024 · 1 comment
Labels: enhancement (New feature or request)

Comments

520jefferson commented Oct 23, 2024

The layout-detection config file is the official default: layoutlmv3_base_inference.txt

How can I reduce the GPU utilization of layout detection? For example, which batch setting would actually help if I lowered it?
(Although a single document coming in probably doesn't need batching at all; I get the results like this:
single_page_res = layout_model(np.array(image)[:,:,::-1], ignore_catids=[])['layout_dets'])
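
For reference, a minimal sketch of the calling side, assuming layout_model is the predictor built from that config (the helper name detect_page is made up here): wrapping each per-page call in torch.inference_mode() drops autograd bookkeeping, which at least trims this process's memory footprint on the shared card.

import numpy as np
import torch

def detect_page(layout_model, image):
    # inference_mode() skips autograd state for the forward pass,
    # reducing per-call memory on the shared GPU
    with torch.inference_mode():
        # same call as above: RGB -> BGR slice, no category filtering
        return layout_model(np.array(image)[:, :, ::-1], ignore_catids=[])['layout_dets']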

Or which configuration should I change to lower GPU utilization?

For example, I have tried:

1. Limiting memory (memory is nowhere near the limit on a 32 GB V100):
torch.cuda.set_per_process_memory_fraction(0.5, device=0)  # for the first GPU

2. Limiting the batch size (probably the main factor), but since only one document goes in at inference time, this setting probably has no effect either?
cfg.SOLVER.IMS_PER_BATCH = 1  # process one image per batch
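
Putting the two knobs together, a rough sketch of the setup under the same assumptions; load_layout_cfg and build_predictor below are hypothetical stand-ins for however the repo actually constructs the model, and SOLVER.IMS_PER_BATCH is a training-time batch size in detectron2-style configs, which matches the suspicion above that it does little for single-image inference.

import torch

def build_layout_model(config_path="layoutlmv3_base_inference.txt"):
    # Cap this process's share of GPU 0 before the first CUDA allocation,
    # so the co-located formula-detection process keeps headroom
    torch.cuda.set_per_process_memory_fraction(0.5, device=0)

    cfg = load_layout_cfg(config_path)   # hypothetical config loader
    # Training-time batch size; kept at 1 here, but not expected to change
    # utilization when pages are fed one at a time
    cfg.SOLVER.IMS_PER_BATCH = 1
    return build_predictor(cfg)          # hypothetical predictor factory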

520jefferson added the enhancement label on Oct 23, 2024
@v3nus-py commented

To fix your trouble, check this solution: click. Maybe this will solve your problem.
