Generate release cycle chart and CSV #988
IMO, I'm personally not a huge fan of making building the devguide much more dependent upon ye olde not-so-cross-platform `make`; I'd much prefer a Sphinx extension that would work reliably cross-platform with any of the standard Sphinx invocations. It's also more reusable that way, if we ever want to use it on other repos.

That said, that's just my personal take; I don't mean to tell a Core Developer like you what to do 😛 And to be fair, back in the day I basically did this same architecture for generating a production version of our Spyder-Docs site with multiversion, redirects, etc., before I knew anything about writing a Sphinx extension (and doing so would have been more difficult for that use case).

Windows is my primary platform, but I've never for the life of me had to write a batch file, so I wouldn't know where to start there (I have a Windows build of `make` for when I need it, but I rarely do—ironically, the one time I do is for building the Spyder-Docs site mentioned above in production mode with multiversion).
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
versions.rst (outdated):

.. mermaid:: include/release-cycle.mmd
I would put this above the two tables.
On a temporary branch for this I pointed out that the label
Updated! Will check the other review comments a bit later, thanks all!
Per our previous discussion, I've worked on a suggested revision (CAM-Gerlach@c2cbf6b) factoring the dict preprocessing and the actual CSV writing into separate, modular, and easily composable steps, which IMO:
- is a cleaner design (and could be separated into smaller, more atomic functions)
- is a fair bit shorter/simpler
- avoids the code smell of opening two files, creating two CSV writers, having two write calls, etc.
- is more efficient (writing each CSV in one go and only having one file open for a shorter time)
- eliminates the redundant, potentially bug-prone duplication between `CSV_HEADER` and the listing of keys in the final dict
I also fixed the version sorting issue there.
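For illustration only (this is my own sketch, not the code in the linked commit), the split between preprocessing and writing described above, with the CSV header derived from the row keys and a numeric version sort, could look roughly like:

```python
import csv
import io

# Illustrative sketch of separating dict preprocessing from CSV writing.
# The header comes from the row dicts' keys, so there is no separate
# CSV_HEADER constant that can drift out of sync.

def preprocess(versions):
    """Turn the raw JSON mapping into a list of flat row dicts,
    sorted numerically (so "3.10" sorts after "3.9", not before)."""
    return [
        {"Version": version, **details}
        for version, details in sorted(
            versions.items(),
            key=lambda kv: tuple(int(part) for part in kv[0].split(".")),
        )
    ]

def write_csv(rows, stream):
    """Write all rows in one go; the stream is touched only in this call."""
    writer = csv.DictWriter(stream, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

Each output file can then be produced by one `preprocess` variant plus one `write_csv` call, rather than interleaving two open files and two writers.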
In addition, I have a few comments/suggestions below, and a couple overall:
- One thing that's missing here is a clear indication of the expected/actual cut-over point between bugfix and security-only releases. Could we add an additional column with this information, and indicate it as such on the chart? This is standard for most pages like this, and the Wikipedia page has it. Otherwise, it's not clearly indicated anywhere here when the cutover is planned/has occurred. (Deferred to a followup)
- Could we maybe try to keep the JSON key names and order more in sync with the table? I don't really see a compelling reason to not just use the same keys and order, minus formatting (spaces, caps).
- Wouldn't it be more idiomatic to name the Python file with `_`s? Then it could be imported if needed. Otherwise, it doesn't really make sense to have a `main()` function and an `if __name__ == "__main__"` block, since it isn't possible to import it without `importlib` hackery.
- Maybe it's just me, but I'd certainly prefer if this was a Sphinx extension integrated into the build as opposed to a standalone script invoked manually or via a non-cross-platform Makefile...
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
Thanks, I've integrated that stuff 👍
Is it okay to deal with that separately after this PR, to reduce the scope? At least we don't have that information in the current tables.
The table does, but the chart doesn't. (Aside: it's a bit strange the Wikipedia table has "End of full support" before "End of security fixes".)
Can do! I think we should keep

So like this?

```diff
diff --git a/include/release-cycle.json b/include/release-cycle.json
--- a/include/release-cycle.json
+++ b/include/release-cycle.json
 {
   "3.12": {
     "branch": "main",
     "pep": 693,
     "status": "feature",
-    "release_manager": "Thomas Wouters",
-    "release_date": "2023-10-02",
-    "eol": "2028-10"
+    "first_release": "2023-10-02",
+    "end_of_life": "2028-10",
+    "release_manager": "Thomas Wouters"
   },
```
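As a quick sanity check on the reordered keys: `json.loads` preserves key order (dicts are insertion-ordered in Python 3.7+), so the JSON fields really can mirror the table columns directly. A small self-contained sketch:

```python
import json

# Sketch: with the renamed keys, the JSON field order can mirror the
# table columns directly, since json.loads preserves key order.
data = json.loads("""
{
  "3.12": {
    "branch": "main",
    "pep": 693,
    "status": "feature",
    "first_release": "2023-10-02",
    "end_of_life": "2028-10",
    "release_manager": "Thomas Wouters"
  }
}
""")
assert list(data["3.12"]) == [
    "branch", "pep", "status",
    "first_release", "end_of_life", "release_manager",
]
```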
Yep, renamed! Shall we keep the script in the root or move it to some subdirectory?
I slightly prefer But then again there's not much overhead in reading one file and writing two. What are others' thoughts?
A script with a makefile target (and if possible a Windows script equivalent, but not a blocker) seems simplest and best to me.
Unacceptable I say, unacceptable! 😆 Yeah, that seems quite reasonable to me, though it's not really up to me either way, heh.
How so, sorry? Since it occurs between initial release and final security EoL, it would seem to me to make the most sense to put it between the two, unless there's something I'm missing...
Sure, that seems to make the most sense to me; in general, I don't have any strong preference what they are, just that they match if practicable. In particular, I realize I was unclear before—I didn't mean to imply that the JSON should be changed to match the names and order of the table, but rather just that one should match the other either way.
It seems cleaner to me to move it to an appropriate subdirectory (e.g.
Seems fair, so long as the script itself works on Windows and I can just execute it directly if needed. In that case, we of course need to run it in CI to verify that the generated output is in sync with what is checked in, and that it works without errors (at least for any PRs/pushes touching the script, source, or destination files; though since it's likely very cheap to run, at least on CI, perhaps simpler to just run it unconditionally in an existing workflow).
Updated:
Re: CI I made a new workflow as the existing workflow doesn't pass on Windows. Test steps are:
This works for Linux but not Windows, I think because of line endings:
I also added

@CAM-Gerlach Any Windows tips here?
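One common cause of this on Windows is CRLF newline translation in text mode. A hedged sketch of the usual fix (the function name and file layout here are mine, not the PR's actual code): pass `newline=""` or `newline="\n"` to `open()` and give the `csv` writer an explicit `lineterminator`, so the generated files are byte-identical on every OS.

```python
import csv

# Sketch: force LF line endings so the generated CSVs match the
# checked-in files on Windows as well as Linux/macOS.
def write_rows(path, header, rows):
    # newline="\n" disables platform newline translation in text mode;
    # lineterminator="\n" overrides the csv module's default "\r\n".
    with open(path, "w", newline="\n", encoding="utf-8") as f:
        writer = csv.writer(f, lineterminator="\n")
        writer.writerow(header)
        writer.writerows(rows)
```

With this, a CI `git diff --exit-code` after regenerating should pass on both runners.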
Some comments on the new release cycle test workflow, otherwise seems LGTM
Co-authored-by: C.A.M. Gerlach <CAM.Gerlach@Gerlach.CAM>
LGTM now, thanks @hugovk !
Fixes python/docs-community#67.
Add a script to read in a JSON file of data about Python releases: PEP number, branch status, release manager, release date, and EOL.
The script generates:

- Two CSV files
- A Mermaid Gantt chart file
The CSV files are used for the tables (#884) and the Mermaid .mmd file is used for the release cycle chart:
Rather than a Sphinx extension, this is a standalone script that can be run via `make version` or `python generate-release-cycle.py`. I've also taken advantage of standard Makefile logic so the CSV and Mermaid files are regenerated when the JSON file is modified. They're also regenerated when running `make html`, and only if the JSON file has been modified.

Demo
https://cpython-devguide--988.org.readthedocs.build/versions/#python-release-cycle
Questions
Q1: Is this Makefile method of generating the files acceptable? If so, I'll need a hand with the Windows `make.bat` (I don't have Windows).

Q2: Do we want the Python script in a subdirectory?
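To make Q1 concrete, the Makefile pattern being proposed might look roughly like this. All file names and the exact target names here are assumptions based on the discussion, not the actual Makefile:

```make
# Hypothetical sketch: the generated files depend on the JSON source
# (and the script), so make rebuilds them only when one of those changes.
GENERATED := include/release-cycle.mmd

$(GENERATED): include/release-cycle.json generate-release-cycle.py
	python3 generate-release-cycle.py

version: $(GENERATED)

html: version
	# ... run sphinx-build as usual ...
```

The `html: version` prerequisite is what makes `make html` regenerate the files automatically, but only when the JSON has actually been modified.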
TODO