No grad for regularization loss #101
I pushed this version of `regularization_loss`:

```python
def regularization_loss(self):
    """L1 penalize single mutant effects, and pre-latent interaction
    weights."""
    penalty = self.beta_l1_coefficient * self.latent_layer.weight[
        :, : self.input_size
    ].norm(1)
    if self.interaction_l1_coefficient > 0.0:
        for interaction_layer in self.layers[: self.latent_idx]:
            penalty += self.interaction_l1_coefficient * torch.sum(
                [getattr(self, interaction_layer).weight.norm(1)]
            )
    return penalty
```

This version gives … when I run …
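For reference, here is a minimal, self-contained sketch of the same idea: an L1 penalty that is built up as a tensor, so `.backward()` works. The class and layer names are made up for illustration and are not the torchdms model.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Toy stand-in for illustration only (not torchdms.model.FullyConnected)."""

    def __init__(self, input_size=10, latent_size=2, beta_l1_coefficient=1e-3):
        super().__init__()
        self.input_size = input_size
        self.beta_l1_coefficient = beta_l1_coefficient
        self.latent_layer = nn.Linear(input_size, latent_size)

    def regularization_loss(self):
        # Starting from weight.norm(1) keeps the result a Tensor with a grad_fn,
        # so the penalty can participate in backprop.
        return self.beta_l1_coefficient * self.latent_layer.weight[
            :, : self.input_size
        ].norm(1)

model = TinyModel()
loss = model.regularization_loss()
print(loss)      # tensor(..., grad_fn=<MulBackward0>)
loss.backward()  # works, because loss is a Tensor rather than a Python float
```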
@matsen Hmm, I can't reproduce the float issue:

```python
>>> from torchdms.model import FullyConnected
>>> model = FullyConnected(10, [2], [None], [None], None, beta_l1_coefficient=1e-3)
>>> loss = model.regularization_loss()
>>> print(loss)
tensor([14.5608], grad_fn=<AddBackward0>)
```
That's strange. Did you try dropping into the debugger as in my original report?
Yes, the issue surfaces in the debugger:

```
(Pdb++) print(ppp)
0.0
(Pdb++) ppp.backward()
*** AttributeError: 'float' object has no attribute 'backward'
```
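As a side note, a quick way to spot this failure mode from the debugger is to check the value's type before calling `.backward()`. A small sketch, with `ppp` standing in for the value from the session above:

```python
import torch

ppp = 0.0                               # a plain Python float, as printed above
print(type(ppp), torch.is_tensor(ppp))  # <class 'float'> False
# Only a torch.Tensor carries a grad_fn and has a .backward() method;
# calling ppp.backward() on a float raises the AttributeError shown above.
```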
Fascinating. And sorry if I sent you on a goose chase. How do you propose moving forward?
I still don't understand the behavior, so no proposal yet. I'll keep poking! 👨‍🏭
I think that our implementation of regularization loss is broken! Here's how it looks now: …

The thing is, `penalty` is thus a float and we have no option for backprop! I can check this out by using …

If we print `qqq`, it's a tensor, but `ppp` is a float.
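To illustrate the point, here is a hypothetical sketch of one way this can happen (not necessarily the torchdms code): if the penalty accumulator starts as a Python float and no tensor term is ever added to it, it stays a float and the autograd graph is never attached.

```python
import torch

weight = torch.randn(2, 10, requires_grad=True)
beta_l1_coefficient = 0.0         # regularization effectively switched off

penalty = 0.0                     # accumulator starts as a plain Python float
if beta_l1_coefficient > 0.0:     # branch never taken, so no tensor term is added
    penalty = penalty + beta_l1_coefficient * weight.norm(1)

print(type(penalty))              # <class 'float'>, no grad_fn anywhere
# penalty.backward()              # AttributeError: 'float' object has no attribute 'backward'
```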