Add Minuit optimiser and update NRSource #40
Conversation
I know why the tests don't pass. Tomorrow I will change the objective function to make it work!
Hey, that looks nice! About minuit:
On another note, I've opened #41 to discuss the broader scope of minimizers and their implementation.
Thanks for the input! I will take that into consideration! 😄
Thanks for adding iminuit!
I've added a few comments on inference.py, where there are a few places where things can be simplified and cleaned up. We have to see how to make the bestfit arguments the same for different optimizers (tolerance specification and options such as use_hessian), and make sure it returns the same output for different optimizers, but it's great that we can now compare different optimizers and error estimators.
Do you know if the tutorial notebook also runs with iminuit?
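One way to make bestfit return the same output for different optimizers is a shared result container that both backends populate. The sketch below is purely illustrative: `OptimizerResult` and its field names are hypothetical, not the project's actual API, and the example values are made up.

```python
from dataclasses import dataclass

# Hypothetical container: both the TensorFlow and the Minuit backend
# would fill in the same fields, so downstream code never branches on
# which optimiser produced the fit. All names here are illustrative.
@dataclass
class OptimizerResult:
    params: dict   # best-fit parameter values, keyed by parameter name
    errors: dict   # per-parameter uncertainty estimates
    fun: float     # objective value at the minimum
    success: bool  # did the optimiser report convergence?

# Example of what either backend might return (values are made up):
res = OptimizerResult(params={"rate": 5.0}, errors={"rate": 2.2},
                      fun=1234.5, success=True)
print(res.success, res.params["rate"])
```

With a container like this, tolerance and options such as `use_hessian` could be plain keyword arguments to a single `bestfit` entry point, dispatched to the chosen backend internally.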
Thanks for adding minuit Christian! I hope the optimizer will now be more stable with the scaling removed -- let's see how it works.
In this pull request I have added the Minuit optimiser and updated the NRSource for SR1.
Furthermore, as discussed in issue #36, I have removed the scaling of the parameters in the optimiser, since it seemed to cause problems with the error computation.
The Minuit optimiser needs some polishing, but in testing it seems to work very well, and the computation-time penalty appears modest (roughly a factor of 2) relative to the TensorFlow optimiser.
The main advantage of Minuit is that bounds can be placed on the parameters, which avoids useless stepping into negative parameter values and the resulting NaNs from the objective function.
Possible improvements to this implementation are: