Covariance rather than variance in Fixed "Noise" Likelihood #2117
Replies: 3 comments 13 replies
-
On further reflection, I see how to do this using the model rather than the likelihood: index the segments and add a model component with a Kronecker dependence on the segment indices. I still think there would be conceptual value in adding this functionality to the likelihood part of the API, because it generalizes the notion of experimental uncertainty from what is currently representable -- uncorrelated measurement noise -- to a broader class that is certainly relevant to real experimental situations: degrees of systematic case-by-case irreproducibility.
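The model-side workaround can be sketched in plain numpy (an illustration of the covariance structure, not GPyTorch code; `segment_covariance`, `seg_ids`, and `sigma2_seg` are hypothetical names): a Kronecker-delta term on segment indices adds covariance only between points in the same segment.

```python
import numpy as np

def rbf_kernel(x, lengthscale=1.0, outputscale=1.0):
    """Squared-exponential kernel on a 1-D input vector."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return outputscale * np.exp(-0.5 * d2 / lengthscale**2)

def segment_covariance(seg_ids, sigma2_seg):
    """Kronecker-delta term on segment indices: entry (a, b) gets extra
    covariance sigma2_seg[k] iff points a and b both belong to segment k."""
    same = np.equal.outer(seg_ids, seg_ids)
    return same * sigma2_seg[seg_ids][:, None]

x = np.linspace(0.0, 3.0, 6)
seg_ids = np.array([0, 0, 1, 1, 2, 2])     # two points per segment
sigma2_seg = np.array([0.1, 0.4, 0.2])     # per-segment covariance scale

# Full model covariance: smooth base kernel plus within-segment term.
K = rbf_kernel(x) + segment_covariance(seg_ids, sigma2_seg)
```

The extra term is block-diagonal by construction, so it plays the role of a correlated per-segment "noise" while leaving cross-segment covariance to the base kernel.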
-
After our discussion above, I think what you're looking for is FixedNoiseGaussianLikelihood, which is a heteroskedastic likelihood with a fixed, user-supplied noise variance for each training input.
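For context, the effect of such a fixed per-point variance vector can be sketched in plain numpy (an illustration of the math, not GPyTorch code): the likelihood contributes diag(noise) to the marginal covariance, which is exactly the diagonal-only case the question wants to generalize to full blocks.

```python
import numpy as np

def marginal_covariance(K, noise_vector):
    """Diagonal fixed-noise likelihood: each point i carries its own fixed
    variance noise_vector[i]; no cross-point likelihood covariance."""
    return K + np.diag(noise_vector)

K = np.eye(3)                        # stand-in for the model's kernel matrix
noise = np.array([0.1, 0.2, 0.3])    # fixed per-point variances
C = marginal_covariance(K, noise)
```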
-
I've run into a problem implementing what I want through the model interface. What I would like, to represent procedure uncertainty, is very much like a batch-independent multi-output GP, with each task representing a single experiment. However, I don't want the parameters controlling the kernel/mean in each task (i.e., producing the task's diagonal covariance block) to be independent of each other. Instead, I would like, say, outputscale[k] = f(k, p), where k is the task number and f(k, p) is a user-supplied parametrized function of a parameter vector p. f() represents a dependence on the experimental conditions corresponding to task k. I would like to be able to fit p, and to do the same for the other block kernel parameters besides outputscale. In other words, the model has no correlation between blocks, but instead of having N_task * N_k parameters to fit, it only has to optimize the sum of the per-task log-likelihoods with respect to the N_p parameters in p. I'm unsure how to accomplish this. Perhaps a new constraint class?
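The tying being described can be sketched in numpy/scipy (an illustration of the objective, not a GPyTorch implementation; `f`, `conditions`, and the task list are hypothetical placeholders): a single shared parameter vector p determines each task's outputscale, and the objective is the sum of per-task Gaussian log-likelihoods over uncorrelated blocks.

```python
import numpy as np
from scipy.stats import multivariate_normal

def f(k, p, conditions):
    """Per-task outputscale as a parametrized function of task conditions;
    exp() keeps the scale positive."""
    return np.exp(p[0] + p[1] * conditions[k])

def summed_log_likelihood(p, tasks, conditions):
    """Sum of per-task marginal log-likelihoods; blocks stay uncorrelated,
    but all blocks depend on the one shared parameter vector p."""
    total = 0.0
    for k, (K_base, y) in enumerate(tasks):
        K = f(k, p, conditions) * K_base   # task k's diagonal covariance block
        total += multivariate_normal(mean=np.zeros(len(y)), cov=K).logpdf(y)
    return total

rng = np.random.default_rng(0)
conditions = np.array([0.0, 1.0])                 # experimental condition per task
tasks = [(np.eye(3), rng.normal(size=3)),         # (base kernel block, data)
         (np.eye(4), rng.normal(size=4))]
p = np.array([0.0, 0.1])
ll = summed_log_likelihood(p, tasks, conditions)
```

Optimizing `ll` with respect to p (e.g. via scipy.optimize.minimize on the negative) fits the N_p shared parameters rather than N_task independent outputscales.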
-
Hello.
I have a use case in which it would be convenient to represent experimental procedure uncertainty -- not noise, really, but case-to-case variability -- in the likelihood, by means of a full covariance matrix (rather than a variance vector) that applies to a certain subset of data.
In other words, data would come as segments of N numbers, each with its own fixed NxN covariance matrix, assigned through the likelihood function. The various segments are then connected through a more general GP model that joins them in the parameter space in which they are all embedded, and regression, prediction, and all the rest ensue.
There would be value in having these data covariances contain jointly adjustable parameters for training, as well.
It's not clear to me that the Multitask likelihood accomplishes what I want, exactly, as it seems to want to fit some kind of joint likelihood covariance term to all the data, if I understand the documentation correctly. But I'd be happy to be corrected.
Does some such "covariance likelihood" exist as an obscure option in the code, or can it be forced into existence by doing something weirdly clever, or would it require a code change?
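The structure being asked for can be sketched in numpy/scipy (an illustration of the proposed "covariance likelihood", not an existing GPyTorch option; names are illustrative): each segment contributes its own full covariance block, assembled block-diagonally and added to the model covariance.

```python
import numpy as np
from scipy.linalg import block_diag

def marginal_covariance(K_model, segment_covs):
    """K_model plus a block-diagonal fixed 'noise' covariance: one full
    NxN block per data segment, rather than a per-point variance vector."""
    return K_model + block_diag(*segment_covs)

Sigma1 = np.array([[0.2, 0.05], [0.05, 0.2]])    # correlated within segment 1
Sigma2 = np.array([[0.3, -0.1], [-0.1, 0.3]])    # correlated within segment 2
K_model = np.eye(4)                              # stand-in for the joining GP kernel
C = marginal_covariance(K_model, [Sigma1, Sigma2])
```

Cross-segment covariance comes only from K_model; within-segment experimental uncertainty is carried by the fixed blocks.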
Thanks,
Carlo