Question 1
We found that the experimental settings of the MovingMNIST benchmark are not standardized: many methods train on an effectively infinite training set generated on the fly. To establish a unified evaluation standard, we chose the publicly available data with the widest adoption: mnist_test_seq.npy.
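For reference, here is a minimal sketch of loading mnist_test_seq.npy and applying the paper's 8,100/900/1,000 split. The contiguous split order below is an assumption for illustration, not necessarily how the repo's dataloader slices the data:

```python
import numpy as np

# mnist_test_seq.npy ships as (time, sequence, H, W) = (20, 10000, 64, 64).
data = np.load("mnist_test_seq.npy").transpose(1, 0, 2, 3)  # -> (10000, 20, 64, 64)

# Paper split: 8100 train / 900 val / 1000 test (contiguous order assumed here).
train, val, test = data[:8100], data[8100:9000], data[9000:]
```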
Question 2
Taking MovingMNIST as an example, please remove the settings related to multi-GPU communication from the training command (see the sketch below).
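In PyTorch Lightning 1.6.x terms, this amounts to dropping the distributed strategy and requesting a single device. A minimal sketch follows; the repo's training script wires this up through its own CLI flags, so this is illustrative rather than the actual script:

```python
import pytorch_lightning as pl

# Multi-GPU setup (this is what pulls in the DDP/ApexDDPStrategy path):
# trainer = pl.Trainer(gpus=2, strategy="ddp")

# Single-GPU setup: request one device and no distributed strategy.
trainer = pl.Trainer(gpus=1)
```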
If there are still issues, they could be caused by a pytorch_lightning version mismatch. Please make sure the installed pytorch_lightning version is 1.6.4, as specified in the README.
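For example, you can pin the version with `pip install pytorch_lightning==1.6.4` and then verify it at runtime:

```python
import pytorch_lightning as pl

# Verify the installed version matches the README's pin.
print(pl.__version__)  # expected: 1.6.4
```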
Hello, thank you very much for your excellent work!
I currently have two questions:
Question 1:
I noticed that the MovingMNIST experimental settings in the paper differ from those used in the video prediction literature.
In the paper, there are 10,000 sequences in total (8,100 train, 900 val, 1,000 test), all drawn from mnist_test_seq.npy.
In many other papers (e.g., the baselines PredRNN and PhyDNet), training uses 10,000 sequences generated from train-images-idx3-ubyte.gz, and testing uses the 10,000 sequences from mnist_test_seq.npy.
Is there a particular reason for the different setting in the paper?
Question 2:
I only have 1 GPU and want to rerun Earthformer. How should I configure it? Running the original code as-is raises an ApexDDPStrategy error.
Thanks!