This is also just something I've noticed; I'm creating an issue so I can take a better look later.
With the CGG data, applying regularization to the betas didn't seem to change the inferred model parameters at all (SUPER old results here).
I figured it could just be bad luck with CGG, so I tried again with Tyler's RBD dataset (new prep) -- and found that even the smallest penalty seems to stop the model from fitting at all (experiment details here).
Without reg:
With reg:
NOTE: I also trained these models for a relatively short time and used simple nonlinearities. Still, I've noticed a consistent trend of regularization not behaving as I'd expect -- it could just be user error, but I want to take a better look later.
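For context, here's a minimal sketch of how an L2 penalty on the betas would typically enter the fitting loss. This is illustrative only -- the function names, model form, and optimizer are assumptions, not the project's actual code. The point is the expected behavior: a small penalty should shrink the betas slightly, not prevent fitting altogether.

```python
# Illustrative sketch (not the project's actual code): squared-error loss
# plus an L2 penalty of strength `lam` on the beta coefficients.
import numpy as np

def loss(beta, X, y, lam):
    """Squared-error loss plus lam * ||beta||^2."""
    residuals = X @ beta - y
    return np.sum(residuals**2) + lam * np.sum(beta**2)

def fit(X, y, lam, lr=1e-3, steps=5000):
    """Plain gradient descent; the penalty contributes 2 * lam * beta to the gradient."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ beta - y) + 2 * lam * beta
        beta -= lr * grad
    return beta

# Sanity check on synthetic data: with lam = 0 the fit recovers the true betas;
# increasing lam should shrink them smoothly, not zero them out abruptly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_beta = rng.normal(size=5)
y = X @ true_beta + 0.1 * rng.normal(size=200)
for lam in (0.0, 1e-3, 1e-1, 10.0):
    print(lam, np.round(fit(X, y, lam), 3))
```

If the real model shows no change under any penalty, or collapses under a tiny one, that points to the penalty either not being wired into the loss or being scaled incorrectly (e.g., multiplied by the dataset size).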