GP model has positive log-likelihood after optimisation #1043
Comments
Hi @enushi,
A positive likelihood shouldn't be problematic, as we are maximizing the likelihood, and your result looks fine. I think you are confused by the plot you are getting, which shows that the model's mean is simply zero. Decrease the variance, e.g. to 0.05, and you will see a totally different result. Is this helpful?
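For instance (a hypothetical two-liner, assuming a fitted GPy regression model `m` whose RBF kernel is named `rbf`, as later in this thread):

```python
# Lower the initial kernel-variance guess, then re-run the optimizer.
m.rbf.variance = 0.05
m.optimize()
print(m.log_likelihood())
print(m)  # summary of the optimised hyperparameters
```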
Hi @MartinBubel,
Thanks for the explanation! It makes more sense now. Actually, I added a line to restart the optimisation. However, I noticed that in both cases (i.e., whether I restart the optimisation or not) I get the output below: [screenshots omitted]
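Presumably the added line was a restart of the optimisation; a sketch using GPy's `optimize_restarts` (an assumption based on the discussion of restarts below):

```python
# Re-run the local optimizer from several random initial hyperparameter
# guesses and keep the best run.
m.optimize_restarts(num_restarts=10)
print(m)                   # optimised hyperparameters of the best run
print(m.log_likelihood())  # log-likelihood of the best run
```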
Hi @enushi,
Thanks for following up. I see now that my previous answer was not that accurate. In fact, there is nothing wrong with your prior variance guess; rather, it seems the optimizer converges to a local optimum. Using a variance prior of 0.05 instead of 0.5 simply yields a better initial guess for the optimizer and thus results in a better fit.

This is particularly surprising since the optimizer eventually converges to a variance of 1, where you could argue that the initial guess of 0.5 is much closer than 0.05. However, with nonlinear optimization it is not always that easy, and "closeness" can be misleading. Restarting the optimization adds some "globalization" by using different (random) initial guesses for the model parameters. I admit this is not an optimal user experience.

Ultimately, regarding your question: yes, this is expected behavior and not a bug, since per default the LBFGS solver (a local solver) is selected.

Does that help you, or are there any questions remaining?

Best regards
Martin
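To see the local-optimum effect directly, one can fit the same data from the two initial variance guesses and compare the optimised log-likelihoods (a sketch; `X` and `Y` are assumed to be the training data from the original issue):

```python
import GPy

results = {}
for v0 in (0.5, 0.05):
    kern = GPy.kern.RBF(input_dim=1, variance=v0)
    m = GPy.models.GPRegression(X, Y, kern)
    m.optimize()  # single local LBFGS run from this initial guess
    results[v0] = float(m.log_likelihood())

# Different initial guesses can land in different local optima of the
# marginal likelihood, hence the different fits.
print(results)
```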
Hi @MartinBubel,
Thanks for your answer, but I still don't understand the discrepancy between what the plot shows and the posterior mean and variance that I get in the case when I don't use restarts.

Best regards,
Elio
Hi @enushi,
Also, I am not sure whether rbf.variance is the posterior or simply an updated prior guess; I am not yet familiar with how GPy is architected, and documentation is scarce. To my understanding, the GP (not the kernel) should have posterior variance at or near 0 at the training locations. Also, is it possible that something got mixed up in the screenshots in your last post? Maybe you will find more insight if you debug into the relevant GPy code.

Best regards
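To separate the two quantities: `rbf.variance` is an optimised kernel hyperparameter (a point estimate, not a posterior), while the posterior predictive variance comes out of `predict`. A sketch, assuming the fitted model `m`:

```python
# Kernel hyperparameter: a point estimate found by the optimizer.
print(m.rbf.variance)

# Posterior predictive mean and variance at the training inputs. By
# default GPy's predict includes the likelihood noise, so at training
# locations the variance should be close to the noise variance, not to
# the kernel variance.
mu, var = m.predict(X)
print(var.min(), var.max())
```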
Original issue (@enushi):

Hi,
My GP model has positive log-likelihood after optimisation, and furthermore I don't get any warning message that something could be going wrong. Here is the code to reproduce the problem:
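A minimal sketch of a comparable setup (the toy data and the initial kernel variance of 0.5 are assumptions based on the discussion above, not the original snippet):

```python
import numpy as np
import GPy

# Assumed toy data: many samples of a smooth function with small noise.
# With small noise the marginal likelihood is a density that can exceed 1,
# so its log can legitimately be positive.
np.random.seed(0)
X = np.random.uniform(-3.0, 3.0, (100, 1))
Y = np.sin(X) + 0.05 * np.random.randn(100, 1)

kernel = GPy.kern.RBF(input_dim=1, variance=0.5, lengthscale=1.0)
m = GPy.models.GPRegression(X, Y, kernel)
m.optimize()  # default: LBFGS, a local optimizer

print(m.log_likelihood())  # a positive value is not an error
m.plot()                   # posterior plot, as in the issue
```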
The log-likelihood of the model is 189.98926713194757. And this is the resulting posterior plot:

[Posterior plot: the predictive mean is flat at zero across the input range.]

Is this expected behaviour of the GPy package, or a possible bug?