Add carma to gp modeling [WIP] #767
base: main
Conversation
Hello @dhuppenkothen! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2023-10-05 07:47:54 UTC
Codecov Report

@@            Coverage Diff             @@
##             main     #767      +/-   ##
==========================================
- Coverage   97.29%   89.32%    -7.98%
==========================================
  Files          43       43
  Lines        7918     8045     +127
==========================================
- Hits         7704     7186     -518
- Misses        214      859     +645
==========================================

... and 13 files with indirect coverage changes
def get_kernel(kernel_type, kernel_params):
def get_kernel(kernel_type, kernel_params, p=1, q=0):
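The diff above extends get_kernel with the CARMA orders p and q, defaulted so existing kernel types are unaffected. A minimal sketch of how such a signature could dispatch (this is hypothetical and not the PR's actual implementation; the return values are placeholders):

```python
def get_kernel(kernel_type, kernel_params, p=1, q=0):
    # Hypothetical sketch: p and q (the CARMA autoregressive and
    # moving-average orders) default to p=1, q=0 so that callers of
    # the old two-argument signature keep working unchanged.
    if kernel_type == "CARMA":
        # A CARMA(p, q) kernel needs both orders in addition to the
        # parameter values themselves.
        return ("CARMA", kernel_params, p, q)
    # Non-CARMA kernels simply ignore p and q.
    return (kernel_type, kernel_params)
```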
What range of values of p and q will be used by the user?
p and q are user-defined. p will probably be somewhere in the range ~1-10, and similarly q in the range ~0-10, though larger values are theoretically possible (but would be very hard to model). An important constraint is that p >= q.
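The constraints described above (user-defined integer orders with p >= q) could be enforced up front. A small sketch, with a hypothetical helper name not taken from the PR:

```python
def validate_carma_orders(p, q):
    # Hypothetical validation helper for the CARMA orders discussed
    # above: p is the autoregressive order, q the moving-average order.
    if not (isinstance(p, int) and isinstance(q, int)):
        raise TypeError("p and q must be integers")
    if p < 1 or q < 0:
        raise ValueError("require p >= 1 and q >= 0")
    if q > p:
        # The constraint stated in the review comment: p >= q.
        raise ValueError(f"require p >= q, got p={p}, q={q}")
    return p, q
```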
parameter = yield prior_dict[i]
for param in params_list:
    if param == "log_alpha":
        for j in range(p):
Is log_alpha here a single parameter, or is it a list or array of parameters? If so, what is its size?
log_alpha is an array of size p, and log_beta is an array of size q.
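Since log_alpha has size p and log_beta has size q, one way to flatten them into scalar parameter names (a sketch only; the function name and the "_j" suffix convention are assumptions, not the PR's code) would be:

```python
def expand_param_names(params_list, p, q):
    # Expand array-valued CARMA parameters into indexed scalar names,
    # e.g. "log_alpha" -> "log_alpha_0", ..., "log_alpha_{p-1}".
    # Hypothetical helper; sizes follow the comment above.
    sizes = {"log_alpha": p, "log_beta": q}
    names = []
    for param in params_list:
        if param in sizes:
            names.extend(f"{param}_{j}" for j in range(sizes[param]))
        else:
            # Scalar parameters pass through unchanged.
            names.append(param)
    return names
```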
This method seems fine, but some suggestions:
- Here all the log_alpha's will have the same prior distribution. Is this intended?
- Instead of checking for param == "log_alpha" at each instance, we may want to just check if the parameter the user is giving is an array and handle that case separately. For example, usually the dictionary we insert into the get_prior function looks like:
prior_dict = {
    "log_alpha": tfpd.Uniform(low=jnp.log(1e-1), high=jnp.log(2e+2)),
    "t0": tfpd.Uniform(low=0.0 - 0.1, high=1 + 0.1),
}
If we want log_alpha to actually be an array parameter of size p, we can change the dictionary to:
prior_dict = {
    "log_alpha": [tfpd.Uniform(low=jnp.log(1e-1), high=jnp.log(2e+2)), p],
    "t0": tfpd.Uniform(low=0.0 - 0.1, high=1 + 0.1),
}
and the corresponding modifications to the get_prior function's prior_model could be:
def prior_model():
    prior_list = []
    for i in params_list:
        # New code added for array parameters: a [distribution, size]
        # list entry yields `size` indexed priors named i + str(j).
        if isinstance(prior_dict[i], list):
            for j in range(prior_dict[i][1]):
                if isinstance(prior_dict[i][0], tfpd.Distribution):
                    parameter = yield Prior(prior_dict[i][0], name=i + str(j))
                elif isinstance(prior_dict[i][0], Prior):
                    parameter = yield prior_dict[i][0]
                else:
                    raise ValueError("Prior should be a tfpd distribution or a jaxns prior.")
                prior_list.append(parameter)
        # Old code
        elif isinstance(prior_dict[i], tfpd.Distribution):
            parameter = yield Prior(prior_dict[i], name=i)
            prior_list.append(parameter)
        elif isinstance(prior_dict[i], Prior):
            parameter = yield prior_dict[i]
            prior_list.append(parameter)
        else:
            raise ValueError("Prior should be a tfpd distribution or a jaxns prior.")
    return tuple(prior_list)
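The generator above depends on jaxns's Prior and tfpd, but the core list-expansion idea can be exercised in isolation. A self-contained sketch with stand-in classes (Distribution, Prior, and expand_priors are all stand-ins, not the real jaxns/TFP APIs):

```python
# Stand-ins so the expansion logic can run without jaxns or
# tensorflow_probability installed (hypothetical mock classes).
class Distribution:
    pass

class Prior:
    def __init__(self, dist, name):
        self.dist, self.name = dist, name

def expand_priors(prior_dict, params_list):
    # Mirror prior_model's branching: a [distribution, size] list entry
    # becomes `size` indexed Prior objects named key + str(j);
    # plain entries become a single named Prior.
    prior_list = []
    for i in params_list:
        if isinstance(prior_dict[i], list):
            dist, size = prior_dict[i]
            for j in range(size):
                prior_list.append(Prior(dist, name=i + str(j)))
        elif isinstance(prior_dict[i], Distribution):
            prior_list.append(Prior(prior_dict[i], name=i))
        elif isinstance(prior_dict[i], Prior):
            prior_list.append(prior_dict[i])
        else:
            raise ValueError("Prior should be a distribution or a Prior.")
    return tuple(prior_list)
```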
Draft pull request to add the CARMA kernel to the gpmodeling class.
Also introduces some minor fixes and additions:
@Gaurav17Joshi If you have a minute, I would love for you to take a look at this. The CARMA kernel takes an array of parameters by default, and I've struggled with making this work, so if you have suggestions for improvement, those would be very welcome!