Trouble propagating a custom gradient through FlowNet2 model #6
Comments
Hi, I encountered the same issue, and I fixed it by using this, hope it can help you!
Hi @PK15946, thanks for the information. Do you want to make a pull request?
@MatthewInkawhich Hi, have you fixed this problem?
@PK15946 Hi, thanks for your information, but I cannot open the link you provided. Do you remember how you fixed this problem?
@liuqk3 Hi, no, I did not end up using this repo. The code that I was trying to run worked on https://github.com/NVIDIA/flownet2-pytorch. This repo is based on NVIDIA's implementation anyway.
@MatthewInkawhich I figured out the reason. There is something wrong in the file
Did decreasing the learning rate help?
Hi, can you post this link again? Or your solution? Thank you very much!
I am trying to back-propagate a custom gradient tensor through the FlowNet2 model. I know that this is possible in PyTorch using the following methodology:
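A minimal sketch of that general pattern, using a small stand-in convolution in place of any real model (not the original snippet):

```python
import torch

# Stand-in model and input; requires_grad=True so gradients reach the input.
x = torch.randn(1, 3, 8, 8, requires_grad=True)
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

out = model(x)                       # non-scalar output, shape (1, 2, 8, 8)
custom_grad = torch.ones_like(out)   # custom gradient tensor, same shape as out
out.backward(custom_grad)            # seeds backprop with custom_grad instead of d(loss)/d(out)

print(x.grad.shape)                  # torch.Size([1, 3, 8, 8]); the gradient reached the input
```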
I am trying to replicate this with FlowNet2. Here is the relevant code snippet:
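A rough sketch of the kind of code being described, assuming the repository's FlowNet2 module, guessed constructor arguments and input shapes, and reusing the variable names `curr_flownet_out` and `custom_grad` from the error below:

```python
import argparse
import torch
from models import FlowNet2  # assumed import path; adjust to this repo's layout

# Assumed arguments: the NVIDIA-style FlowNet2 reads fp16 and rgb_max from args.
args = argparse.Namespace(fp16=False, rgb_max=255.0)
flownet = FlowNet2(args).cuda()
flownet.train()

# Assumed input layout: a pair of RGB frames stacked along dim 2, i.e. (B, 3, 2, H, W).
# requires_grad=True so the custom gradient can propagate all the way to the input.
image_pair = torch.randn(1, 3, 2, 256, 256, device='cuda', requires_grad=True)

curr_flownet_out = flownet(image_pair)            # predicted flow, e.g. shape (1, 2, 256, 256)
custom_grad = torch.ones_like(curr_flownet_out)   # custom gradient, same shape as the output

curr_flownet_out.backward(custom_grad)            # the call that raises the error below
```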
However, when I attempt to run this, I encounter an error at the line:
`curr_flownet_out.backward(custom_grad)`
Any ideas as to how I can successfully use PyTorch's autograd feature to propagate a custom gradient tensor through FlowNet2?
Thanks!