DAML model #3
Comments
Hello!
Thanks. I see the default is 64; I had already tried 32 and still got the error, so I didn't reduce it further. I'm using a 1080Ti. Also, could you explain what the num_fea values 1, 2, and 3 mean? Many thanks.
For DAML, you can try a smaller dataset, such as Musical or Office.
The relevant notes will be added to the README.
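For example, a run along these lines might work. Note that only `main.py test --pth_path=... --model=...` appears elsewhere in this thread, so the `train` subcommand and the `--dataset`/`--batch_size` overrides here are assumptions about the repo's command-line interface: `python3 main.py train --model=DAML --dataset=Musical --batch_size=32`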
Thanks! After changing batch_size to 24, it basically runs fine.
On a 16 GB GPU, it only ran after I changed the batch size to 32.
OP, may I ask you a question about this command? python3 main.py test --pth_path="./checkpoints/THE_PTH_PATH" --model=DeepCoNN
Hi, since it wouldn't run otherwise, I used three GPUs in parallel, but I got an error when saving the model.
@FKCHAN Hi, a model wrapped in DataParallel cannot be saved directly with save; see: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-torch-nn-dataparallel-models
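To spell out the linked tutorial's point, here is a minimal, generic PyTorch sketch (not code from this repo; `MyModel` and the checkpoint path are placeholders): save the wrapped module's state_dict via `model.module`, then load it into a plain model.

```python
import os
import torch
import torch.nn as nn

class MyModel(nn.Module):          # placeholder standing in for the repo's model (e.g. DAML)
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x):
        return self.fc(x)

model = nn.DataParallel(MyModel())   # the wrapper exposes the real model as model.module
# ... training loop ...

# Save the underlying module's weights, not the DataParallel wrapper itself,
# so the checkpoint keys carry no "module." prefix.
os.makedirs("./checkpoints", exist_ok=True)
torch.save(model.module.state_dict(), "./checkpoints/demo.pth")

# Later, load into a plain (non-parallel) model for testing/inference.
plain = MyModel()
plain.load_state_dict(torch.load("./checkpoints/demo.pth"))
```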
May I ask whether the DAML model is fully implemented? When I run it, the very first iteration reports a GPU out-of-memory error (it seems to occur at loss.backward(), but I haven't found the cause), while the other two models run fine.
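If lowering batch_size alone doesn't help, gradient accumulation is a generic PyTorch workaround for OOM at loss.backward(): run several small batches and step the optimizer once, which keeps the effective batch size while cutting peak memory. A minimal toy sketch (not code from this repo; the linear model, loss, and loader are stand-ins for DAML, its loss, and its DataLoader):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real model, optimizer, loss, and data loader.
model = nn.Linear(16, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(8)]

accum_steps = 4                     # effective batch = 8 * 4 = 32, memory cost of a batch of 8
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accum_steps   # scale so accumulated grads match a large batch
    loss.backward()                               # gradients add up in .grad across small batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```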