building notebooks into doc #1091
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##              main    #1091    +/-   ##
==========================================
+ Coverage    73.49%   73.50%   +0.01%
==========================================
  Files          134      134
  Lines        14210    14210
==========================================
+ Hits         10443    10445       +2
+ Misses        3767     3765       -2

☔ View full report in Codecov by Sentry.
I think we should resume this effort. What about building the notebooks into the documentation, so they are in a way tested, as is done in Gammapy? Right now there are outdated notebooks, which likely do not work with the current lstchain version.
How does that work? How do we set the path to the input files? (Which should be added to the test data, right?)
We could use sphinx-gallery. And yes, if we wanted to use example datasets, we would need to add them to the test data.
Thanks for the update @vuillaut. I agree with your points. I'm updating some of the notebooks I showed at the school in a separate PR (docs are currently failing because of some of them).
Hi @morcuended
Tests are now passing :)
Hi @vuillaut, can you clarify what is to be done for notebooks which need more input than we can have in the test data? (And why it is needed.) Thanks!
For those, I would suggest (at least as a first step) either:
To get both, we would need parameterized notebooks using papermill. We would keep the nice static version for the docs, and test with dummy data in the CI but without updating the notebooks for the docs. As said, though, that needs more work and maintenance.
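The papermill approach mentioned above could look roughly like the sketch below: the notebook keeps its nice static output for the docs, while the CI re-executes it with small injected parameters. The notebook path, output path, and parameter names here are made up for illustration, not actual lstchain files.

```python
# Hypothetical sketch of a parameterized notebook run with papermill.
# The paths and parameter names below are assumptions, not this repo's layout.
parameters = {
    "input_file": "test_data/dl1_small.h5",  # hypothetical reduced dataset for CI
    "n_events": 100,                          # hypothetical cap to keep the run fast
}

try:
    import papermill as pm

    # Re-run the notebook, injecting `parameters` into its "parameters"-tagged cell.
    # The executed copy is written elsewhere, so the static docs version is untouched.
    pm.execute_notebook(
        "docs/notebooks/analysis.ipynb",    # hypothetical source notebook
        "build/analysis_executed.ipynb",    # executed copy produced by the CI
        parameters=parameters,
    )
except Exception:
    # This is only a sketch: skip silently if papermill or the files are absent.
    pass
```

Papermill requires the target notebook to have one cell tagged `parameters`, whose default values are overridden by the injected dictionary.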
Hi,
This is what we had in the past for some notebooks, and the repo got too heavy; that is why we decided to remove output from the notebooks. We can try to keep the output small at the beginning, but it always gets bulkier and bulkier, so I would try to avoid this solution.
Note that many notebooks already have outputs, and the weight should not be that large if the notebooks are not updated regularly (a few MB?). But if they are updated, the old outputs stay in the git history and that becomes a problem; I see your point.
We have lived with this option up to now and I do not see it as that bad, but I won't oppose the other solution, although it will also require more dedicated work.
That is only the current status because I did not go through all the notebooks to identify / fix them.
Then, for the time being, I would keep the previously computed output, so it is not changed in the docs deployment.
Hi
Coming back to this issue. I would say that to have up-to-date working notebooks they should be produced in every doc build (using sphinx gallery or something like that). Otherwise, we will end up having again very outdated notebooks after some time. It requires more initial work but is easier to maintain and usable in the long run. |
Hi @morcuended |
Maybe we could add the needed minimal files to the test dataset.
Hi @morcuended |
At least for the high-level analysis, we now have a small DL3 Crab dataset publicly available on Zenodo that we can use to check that the notebooks work and are up to date.
@morcuended I propose to open another PR after this one to enable execution. |
TODO (after this PR):
set nbsphinx_allow_errors = False in the Sphinx configuration, to make sure notebooks don't break in the future
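For reference, a minimal sketch of the relevant Sphinx conf.py fragment follows. The option names (`nbsphinx_execute`, `nbsphinx_allow_errors`) are real nbsphinx settings; the values shown are one possible choice for the follow-up PR, not this repo's actual configuration.

```python
# Hypothetical conf.py fragment for building notebooks into the docs with nbsphinx.

extensions = [
    "nbsphinx",  # renders .ipynb files as documentation pages
]

# Re-execute notebooks at every docs build, so outdated ones surface immediately
# (alternatives: "auto" executes only notebooks without stored output, "never" skips).
nbsphinx_execute = "always"

# Fail the docs build if any notebook cell raises, as proposed above.
nbsphinx_allow_errors = False
```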