KL expansion of the GP posterior #1625
Unanswered
ankushaggarwal asked this question in Q&A
Replies: 0 comments
I am using a trained GP as an input to a nonlinear differential equation solver, which, for simplicity, produces a scalar output. I want to calculate the variance of that scalar output, and my idea is to use the Karhunen–Loève (KL) expansion of the GP combined with sigma points for the KL coefficients. Mathematically, this means I need the eigenfunctions of the posterior covariance function. Any ideas on how this could be achieved?
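For the propagation step, here is a minimal sketch of what I have in mind, assuming the truncated KL eigenpairs of the posterior are already available (the names `mean_fn`, `eigfns`, `solver` and the unscented-style sigma-point weights are illustrative assumptions, not a fixed recipe):

```python
import numpy as np

def propagate_variance(mean_fn, eigvals, eigfns, solver, n_terms=5):
    """Propagate GP uncertainty through a nonlinear solver via a
    truncated KL expansion and unscented-style sigma points.

    mean_fn : callable, posterior mean m(x)
    eigvals : (n_terms,) leading eigenvalues of the posterior covariance
    eigfns  : list of callables, eigenfunctions phi_i(x)
    solver  : callable mapping a sample function f -> scalar output
    """
    d = n_terms
    kappa = 3.0 - d                 # a common unscented-transform choice
    # 2d+1 sigma points for the standard-normal KL coefficients xi
    pts = [np.zeros(d)]
    wts = [kappa / (d + kappa)]
    scale = np.sqrt(d + kappa)
    for i in range(d):
        e = np.zeros(d)
        e[i] = scale
        pts += [e, -e]
        wts += [0.5 / (d + kappa)] * 2
    outs = []
    for xi in pts:
        # KL sample: f(x) = m(x) + sum_i sqrt(lam_i) * xi_i * phi_i(x)
        f = lambda x, xi=xi: mean_fn(x) + sum(
            np.sqrt(eigvals[i]) * xi[i] * eigfns[i](x) for i in range(d))
        outs.append(solver(f))
    outs, wts = np.asarray(outs), np.asarray(wts)
    mean = wts @ outs
    var = wts @ (outs - mean) ** 2
    return mean, var
```

For a solver that is linear in the input function, the sigma points reproduce the output mean and variance exactly, which is a useful sanity check before moving to the nonlinear case.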
In my understanding, a brute-force method would be to calculate the posterior covariance matrix of the trained GP on a sufficiently fine grid of inputs and then compute its eigenvectors. However, I would then have to regress the eigenvectors into continuous functions that can be used in the differential equation solver. I was wondering if there is a better/more elegant way of doing this?
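To make the brute-force route concrete, here is a sketch under the assumption of 1-D inputs and a uniform grid; the Nyström extension evaluates the grid eigenvectors at arbitrary inputs using the posterior covariance itself, which would avoid a separate regression step (`post_cov` is a placeholder for whatever evaluates the trained GP's posterior covariance):

```python
import numpy as np

def kl_eigenfunctions(post_cov, grid, n_modes=5):
    """Brute-force KL modes of a GP posterior: eigendecompose the
    covariance on a grid, then use the Nystrom extension to evaluate
    the eigenfunctions at arbitrary inputs.

    post_cov : callable (X, Y) -> posterior covariance matrix
    grid     : (n,) array of grid points (1-D inputs for simplicity)
    """
    n = len(grid)
    w = (grid[-1] - grid[0]) / (n - 1)        # uniform quadrature weight
    K = post_cov(grid, grid)                  # (n, n) posterior covariance
    # Discretized integral operator: eigenvalues of w*K approximate
    # the continuous KL eigenvalues.
    lam, U = np.linalg.eigh(w * K)
    idx = np.argsort(lam)[::-1][:n_modes]
    lam, U = lam[idx], U[:, idx] / np.sqrt(w)  # normalize in L2(dx)

    def phi(x, i):
        # Nystrom extension: phi_i(x) = (w / lam_i) * k(x, grid) @ phi_i(grid)
        kx = post_cov(np.atleast_1d(x), grid)  # (m, n)
        return (w / lam[i]) * (kx @ U[:, i])

    return lam, phi
```

Since `phi` only calls the covariance function, it can be handed directly to the differential equation solver as a continuous function, and the quadrature weight could be replaced with a better rule (e.g. Gauss–Legendre) if more accuracy per grid point is needed.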