Hi,
I found that with the same hyper-parameters but a different `num_core_per_host` (`num_core_per_host=1` for single-GPU and `num_core_per_host=6` for multi-GPU), the `global_step/sec` of the multi-GPU run is slightly lower than that of the single-GPU run.

`num_core_per_host=6`:

`num_core_per_host=1`:
Is this behavior expected, and if so, why?
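To make the comparison concrete, here is a minimal sketch of the arithmetic I am using when comparing the two runs. The numbers below are placeholders (not the values from my logs), and I am assuming the batch size is per core, so one global step on the multi-GPU run consumes `num_core_per_host` times more examples:

```python
# Placeholder numbers for illustration only -- not the values from my logs.
# Assumes the batch size is per core, so one global step consumes
# per_core_batch_size * num_core_per_host examples.

def examples_per_sec(global_step_per_sec, per_core_batch_size, num_core_per_host):
    """Overall training throughput in examples/sec."""
    return global_step_per_sec * per_core_batch_size * num_core_per_host

single_gpu = examples_per_sec(global_step_per_sec=2.0, per_core_batch_size=8, num_core_per_host=1)
multi_gpu = examples_per_sec(global_step_per_sec=1.8, per_core_batch_size=8, num_core_per_host=6)

print(f"single-GPU (num_core_per_host=1): {single_gpu:.1f} examples/sec")  # 16.0
print(f"multi-GPU  (num_core_per_host=6): {multi_gpu:.1f} examples/sec")   # 86.4
```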
System Information:
CUDA 10.0.130
cuDNN 7.4.1
NCCL 2.6.4
tensorflow-gpu 1.13.1 (installed from pip in a conda virtual environment)
Best Regards