
Checkpointing is not compatible with .grad() or when an inputs parameter is passed to .backward(). Please use .backward() and do not pass its inputs argument. #26

Open
yang-xidian opened this issue Apr 25, 2024 · 0 comments

Comments

@yang-xidian

I hope this letter finds you well. I am a user of your [project/research], and I wanted to bring to your attention an issue I encountered while using your code.

In my application, I attempted to use PyTorch's checkpointing feature to reduce memory usage during training. However, when I passed the inputs parameter to the .backward() method while performing backpropagation, I encountered a RuntimeError:

RuntimeError: Checkpointing is not compatible with .grad() or when an inputs parameter is passed to .backward(). Please use .backward() and do not pass its inputs argument.

I believe this issue stems from an incompatibility between the checkpointing feature and either the .grad() method or the inputs parameter of .backward(). In my case, calling .grad() alone did not cause any problems, but passing the inputs parameter to .backward() consistently produced this error.

I was wondering if you could provide some guidance or suggestions on how to address this issue. I am highly interested in your work, and I hope to fully leverage your code and apply it to my project.

Thank you very much for your time and assistance. I look forward to hearing from you.
