
Fix bounds and scales for some parameters in Isensee_JCB2018 #132

Merged · 7 commits · Nov 11, 2024

Conversation

@dilpath (Collaborator) commented Sep 30, 2021

No description provided.

@elbaraim (Collaborator) left a comment

Hi @dilpath
Why were these changes included? The bounds and scales for the parameters modified in this pull request were originally set on a linear scale with the given boundaries.
See https://github.com/Data2Dynamics/d2d/blob/master/arFramework3/Examples/Isensee_JCB2018/Setup.m

EDIT: Therefore, unless this was specifically requested for some reason, I think there is nothing to fix here.

@dilpath (Collaborator, Author) commented Oct 5, 2021

> Hi @dilpath Why were these changes included? The original bounds and scales for the modified parameters in this pull request were originally set to be in linear scale with the given boundaries. See https://github.com/Data2Dynamics/d2d/blob/master/arFramework3/Examples/Isensee_JCB2018/Setup.m
>
> EDIT: Therefore, unless this was specifically requested for some reason, I think there is nothing to fix here.

The model doesn't appear to unscale the parameter, so estimating in [-5, 3] is incorrect; the bounds should instead be [0.00001, 1000].
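For reference, the correspondence between the two bound conventions can be checked directly: the log10-scale bounds [-5, 3] describe exactly the linear-scale interval [0.00001, 1000].

```python
# The log10-scale bounds [-5, 3] correspond to the linear-scale bounds
# [0.00001, 1000]: applying 10**x to each endpoint recovers the
# linear interval.
log10_lower, log10_upper = -5, 3
print(10.0 ** log10_lower)  # 1e-05
print(10.0 ** log10_upper)  # 1000.0
```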

@elbaraim (Collaborator) commented Oct 5, 2021

> The model doesn't appear to unscale the parameter, so estimating in [-5, 3] is incorrect, and instead should be [0.00001, 1000].

I see what you mean. However, in the d2d implementation the parameter is fixed at zero on a linear scale. I therefore think that adjusting the bounds to the values you propose is correct, but the scale should be kept as lin and the nominalValue and estimate at zero. Otherwise, pyPESTO will probably try to convert the zero to logarithmic scale while loading the problem and may fail (?).

@dilpath (Collaborator, Author) commented Oct 5, 2021

> The model doesn't appear to unscale the parameter, so estimating in [-5, 3] is incorrect, and instead should be [0.00001, 1000].
>
> I see what you mean. However, the parameter in the d2d implementation is set to zero in a linear scale and set as fixed. Therefore I think that adjusting the bounds to the values that you propose is correct, however the scale shall be kept to lin and the nominalValue and estimate to zero. Otherwise probably while loading the problem in pyPESTO will try to convert the zero to logarithmic scale and probably fail (?).

If an error occurs in AMICI/pyPESTO, it could be fixed I think, since zero can still be used on log-scale with NumPy:

>>> import numpy as np
>>> np.log(0.0)
-inf
>>> np.exp(np.log(0.0))
0.0

Inclusion of zero in the bounds would still be problematic.
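A short sketch (relying on NumPy's documented behavior for the logarithm of zero) of why a fixed value of zero survives a log10 round trip, while a zero bound does not:

```python
import numpy as np

# A fixed value of zero maps to -inf on log10 scale and back to 0.0,
# so the value itself round-trips cleanly.
with np.errstate(divide="ignore"):
    fixed_scaled = np.log10(0.0)   # -inf
print(10.0 ** fixed_scaled)        # 0.0

# A bound of zero, however, also becomes -inf on log10 scale, which
# most optimizers cannot use as a finite box constraint.
with np.errstate(divide="ignore"):
    lower_scaled = np.log10(0.0)
print(np.isfinite(lower_scaled))   # False
```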

@FFroehlich (Collaborator)

> The model doesn't appear to unscale the parameter, so estimating in [-5, 3] is incorrect, and instead should be [0.00001, 1000].
>
> I see what you mean. However, the parameter in the d2d implementation is set to zero in a linear scale and set as fixed. Therefore I think that adjusting the bounds to the values that you propose is correct, however the scale shall be kept to lin and the nominalValue and estimate to zero. Otherwise probably while loading the problem in pyPESTO will try to convert the zero to logarithmic scale and probably fail (?).

I believe this is an artifact of the d2d implementation, where it seems parameters can only be fixed on a linear scale (see, for example, the reduced/regular versions of Lucarelli).

@elbaraim (Collaborator)

> I believe this is an artifact from the d2d implementation where it seems like parameters can only be fixed on a linear scale (see for example reduced/regular version of Lucarelli)

@FFroehlich would fixing the parameter value at zero plus defining log10 scale then work in the AMICI/pyPESTO import? As far as I know, pyPESTO works with the parameters on the defined scale, so np.log10(0) could be problematic (probably for other tools, like d2d, too?).
If so, I see nothing against approving these changes.

@dilpath dilpath requested a review from FFroehlich October 16, 2021 13:01
@FFroehlich (Collaborator)

> I believe this is an artifact from the d2d implementation where it seems like parameters can only be fixed on a linear scale (see for example reduced/regular version of Lucarelli)
>
> @FFroehlich would then fix the parameter value at zero + defining log10 scale work in the AMICI/pyPESTO import ? As far as I know, pyPESTO works with the parameters in the defined scale, therefore np.log10(0) could be problematic (probably for other tools, like d2d too?) If so, I do not see anything against approving these changes.

Ah, that's a good point. I did not check, but I can definitely see this being problematic for other tools (e.g., d2d), even if it works in AMICI/pyPESTO. So I wouldn't recommend changing this.

@FFroehlich (Collaborator) commented Oct 16, 2021

> I believe this is an artifact from the d2d implementation where it seems like parameters can only be fixed on a linear scale (see for example reduced/regular version of Lucarelli)
>
> @FFroehlich would then fix the parameter value at zero + defining log10 scale work in the AMICI/pyPESTO import ? As far as I know, pyPESTO works with the parameters in the defined scale, therefore np.log10(0) could be problematic (probably for other tools, like d2d too?) If so, I do not see anything against approving these changes.
>
> Ah, thats a good point. I did not check but I can definitely see this being problematic for other tools (e.g., d2d) , even if it works in AMICI/pyPESTO. So I wouldn't recommend changing this.

I do, however, see the issue that this causes trouble when, for example, trying to run model selection where some parameters are fixed/unfixed. I would say the best way to address this is to add additional indicator variables (e.g., representing the MATLAB variables model_options.partial_import, model_options.different_KD, model_options.PDE_inhibition, model_options.AC_inhibition) that are on a linear scale with bounds [0, 1], set the scale of the fixed parameters to log10 with a nonzero value, and multiply all occurrences in the model by the respective indicator variables.
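The indicator-variable idea can be sketched as follows; the function and parameter names are illustrative, not identifiers from the actual model:

```python
# Sketch of the indicator-variable approach: the mechanistic parameter
# stays on log10 scale with a nonzero value, and a separate indicator
# on a linear scale with bounds [0, 1] switches the mechanism on/off.
# All names here are hypothetical.

def effective_value(indicator: float, log10_value: float) -> float:
    """Value that would appear in the model: indicator * 10**log10_value."""
    return indicator * 10.0 ** log10_value

# Mechanism off: the product is exactly zero; log10(0) is never needed.
print(effective_value(0.0, -2.0))  # 0.0
# Mechanism on: the log10-scaled parameter determines the value.
print(effective_value(1.0, -2.0))  # 0.01
```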

@dilpath (Collaborator, Author) commented Nov 7, 2024

> I do however see the issue that this causes trouble when for example trying to run model selection where some parameters are fixed/unfixed. I would say the best way to address this is to add additional indicator variables (e.g., representing the matlab variables model_options.partial_import, model_options.different_KD, model_options.PDE_inhibition, model_options.AC_inhibition) that are on a linear scale with bounds [0,1] and then set the scale for the fixed parameters to log10 with nonzero value and multiplying all occurences in the model with the respective indicator variables.

I agree; to support model selection, the indicator-variable suggestion makes sense. I opted not to support the model selection problem directly here, and instead provide the selected model in the benchmark collection. The rejected models can still be used via the new optional parameters_Isensee_JCB2018_model_selection.tsv file. Now, all log10-scaled parameters in the benchmark problem are either estimated or non-zero.

@dilpath dilpath requested review from FFroehlich and m-philipps and removed request for FFroehlich November 7, 2024 15:25
@m-philipps (Collaborator)

Could you briefly describe the changes you made to the original parameters file? I see that some fixed parameters are now missing.

@dilpath (Collaborator, Author) commented Nov 8, 2024

> Could you briefly describe the changes you made to the original parameters file? I see that some fixed parameters are now missing.

Briefly: the missing parameters were always "off" in the benchmark problem, so I removed them to resolve the issue discussed above.

The paper describes a model selection problem with four hypotheses. Each hypothesis involves adding a set of parameters to the model -- these sets are now annotated in the model_selection_group column of the parameters table. The paper shows a result where the model with the first hypothesis alone (H1) was accepted as the best by AIC.

I changed this benchmark problem to be the parameter estimation problem for the H1-only model. Actually, this was already the case, because the old parameters table had the H2/H3/H4 (log-scaled) parameters fixed to 0. The only change I made was moving the H2/H3/H4 parameters to a second table. This fixes the issue discussed above, by "removing" the log-scaled parameters that were fixed to 0.

If someone wants to optimize the H1-only model, that will be the benchmark problem. If they want to optimize one of the other models, they can use the second table too.
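A minimal sketch of using the second table, assuming the two tables can simply be concatenated row-wise. The column names and rows below are illustrative stand-ins, not the real file contents; only the parameters_Isensee_JCB2018_model_selection.tsv filename comes from this thread.

```python
import csv
import io

# Illustrative stand-in contents; the real PEtab tables have more columns.
main_tsv = "parameterId\tparameterScale\testimate\nkd_fsk\tlog10\t1\n"
extra_tsv = "parameterId\tparameterScale\testimate\nxi_kd\tlin\t0\n"

def read_tsv(text: str) -> list[dict]:
    """Parse TSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# H1-only model: use the main parameters table alone.
h1_rows = read_tsv(main_tsv)
# Other models: append the optional model-selection table.
full_rows = read_tsv(main_tsv) + read_tsv(extra_tsv)
print([row["parameterId"] for row in full_rows])  # ['kd_fsk', 'xi_kd']
```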

@FFroehlich (Collaborator)

> Could you briefly describe the changes you made to the original parameters file? I see that some fixed parameters are now missing.
>
> Briefly: the missing parameters were always "off" in the benchmark problem, so I removed them to resolve the issue discussed above.
>
> The paper describes a model selection problem with four hypotheses. Each hypothesis involves adding a set of parameters to the model -- these sets of parameters are annotated now in the model_selection_group column of the parameters table. The paper shows a result where the model with the first hypothesis alone (H1) was accepted as the best by AIC.
>
> I changed this benchmark problem to be the parameter estimation problem for the H1-only model. Actually, this was already the case, because the old parameters table had the H2/H3/H4 (log-scaled) parameters fixed to 0. The only change I made was moving the H2/H3/H4 parameters to a second table. This fixes the issue discussed above, by "removing" the log-scaled parameters that were fixed to 0.
>
> If someone wants to optimize the H1-only model, that will be the benchmark problem. If they want to optimize one of the other models, they can use the second table too.

I think it would be great to have a markdown file in the repo that explains exactly this.

@m-philipps (Collaborator) left a comment

Looks good, thank you!

@dilpath dilpath merged commit 23c10f8 into master Nov 11, 2024
6 checks passed
@dilpath dilpath deleted the fix_isensee branch November 11, 2024 13:04
@FFroehlich (Collaborator)

AMICI regression tests for this problem are failing following this merge:
https://github.com/AMICI-dev/AMICI/actions/runs/11797774460/job/32862480540?pr=2584

FAILED test_petab_benchmark.py::test_nominal_parameters_llh[Isensee_JCB2018] - Failed: Computed llh -6.5484e+03 does not match reference -3.9494e+03. Absolute difference is 2.60e+03 (tol 1.00e-03) and relative difference is 6.58e-01 (tol 1.00e-03).

To me it looks like there were some unintended side effects with the changes introduced here.

@dilpath (Collaborator, Author) commented Nov 12, 2024

> Amici regression tests for this problem are failing following this merge: https://github.com/AMICI-dev/AMICI/actions/runs/11797774460/job/32862480540?pr=2584
>
> FAILED test_petab_benchmark.py::test_nominal_parameters_llh[Isensee_JCB2018] - Failed: Computed llh -6.5484e+03 does not match reference -3.9494e+03. Absolute difference is 2.60e+03 (tol 1.00e-03) and relative difference is 6.58e-01 (tol 1.00e-03).
>
> To me it looks like there were some unintended side effects with the changes introduced here.

I'll have a look

edit: see #253
