Minor edits to benchmarking code #1985
Conversation
quaquel commented Jan 21, 2024 (edited)
- Fix for Python versions < 3.12
- Insert the current mesa repo into sys.path to ensure the benchmarks use the checked-out version rather than an installed one
- PEP 8 name changes of files
- Added code to run each benchmark in isolation, which is useful for profiling and line profiling; I already used it extensively for Proposal: Formal neighborhood definition for grids #1900 (comment). See the sketch after this list, which covers both this and the sys.path change.
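A minimal sketch of what those two changes amount to, not the PR's actual code: the import path (`benchmarks.Schelling.schelling`), the `Schelling(seed=...)` constructor, and the step count are illustrative assumptions. The idea is to put the repository root ahead of any installed mesa on sys.path, and to give each benchmark a standalone entry point so it can be profiled in isolation.

```python
# Hedged sketch: prefer the checked-out mesa over an installed copy, then run
# one benchmark model by itself so cProfile output stays readable.
import cProfile
import os.path
import sys

# Insert the repo root so "import mesa" resolves to the local checkout.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from benchmarks.Schelling.schelling import Schelling  # hypothetical import path


def run_model(seed: int = 42, n_steps: int = 100) -> None:
    """Run a single benchmark model in isolation."""
    model = Schelling(seed=seed)
    for _ in range(n_steps):
        model.step()


if __name__ == "__main__":
    cProfile.run("run_model()", sort="cumulative")
```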
Let's see if the CI is working: /rerun-benchmarks
Performance benchmarks: [results table not captured]
Let me know when this is ready for review
commit c0de4a1
Author: Ewout ter Hoeven <E.M.terHoeven@student.tudelft.nl>
Date: Sun Jan 21 14:43:08 2024 +0100

Add CI workflow for performance benchmarks (projectmesa#1983)

This commit introduces a new GitHub Actions workflow for performance benchmarking. The workflow is triggered on `pull_request_target` events and can be triggered with a "/rerun-benchmarks" comment in the PR. It should be compatible with PRs from both forks and the main repository. It includes steps for setting up the environment, checking out the PR and main branches, installing dependencies, and running benchmarks on both branches. The results are then compared, encoded, and posted as a comment on the PR.
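A rough sketch of what the "compare and post" step could look like, under stated assumptions: that each branch's run dumps its timings as a pickled dict mapping model name to mean runtime in seconds. The file names, data format, and table layout here are illustrative, not the workflow's actual artifacts.

```python
# Hedged sketch of comparing benchmark timings from the main and PR branches
# and rendering a markdown table for the PR comment. The pickle files and the
# dict-of-timings format are assumptions.
import pickle


def compare(main_file: str, pr_file: str) -> str:
    with open(main_file, "rb") as f:
        main_times = pickle.load(f)  # e.g. {"Schelling": 1.23, "WolfSheep": 4.56}
    with open(pr_file, "rb") as f:
        pr_times = pickle.load(f)

    lines = ["| Model | main (s) | PR (s) | change |", "|---|---|---|---|"]
    for model, base in main_times.items():
        new = pr_times.get(model)
        if new is None:
            continue  # model only present on one branch; skip it
        change = (new - base) / base * 100
        lines.append(f"| {model} | {base:.3f} | {new:.3f} | {change:+.1f}% |")
    return "\n".join(lines)


if __name__ == "__main__":
    print(compare("timings_main.pkl", "timings_pr.pkl"))
```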
This reverts commit 9485087.
It should be alright now. Please squash and merge this because I made a bit of a mess of the commit history.
Could you update the PR message with a bit of motivation for the changes made? /rerun-benchmarks
I already did. Or do you want to know more? If so, what?
Performance benchmarks: [results table not captured]
Performance benchmarks: [results table not captured]
LGTM, thanks for this effort. @rht or @Corvince, could one of you do the final review and squash-merge if it's OK? Did you also observe Schelling being faster on your system, or is that an anomaly? In the long run we should try to keep these models in sync with ABMFrameworksComparison. Do we want to make further improvements to these models in the short run?
The sync with the Agents.jl devs needs to happen soon, if possible. I think they would appreciate knowing that we are now monitoring performance as well.
Haven't tested, but the finding is probably not a blocker to this PR being merged.
I also noticed that WolfSheep in particular is not a good Python implementation. There is a lot of unnecessary code duplication going on. I have the impression that some implementations were done to mimic NetLogo's version, which is fine for a tutorial aimed at people coming from NetLogo. However, I would argue that the benchmark (and most examples) should reflect the Pythonic and Mesa way of doing things. I am happy to push a WolfSheep reimplementation if desired (but also see https://github.com/quaquel/mesa/blob/cell-space/benchmarks/WolfSheep/wolf_sheep.py for the direction in which I would change it).
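To make the deduplication point concrete, here is a minimal plain-Python sketch of the direction hinted at above: a shared Animal base class holds the common step and reproduction logic, and Wolf and Sheep only override feeding. This is not the code from the linked cell-space branch; Mesa's Agent base class, scheduler, and grid lookups are deliberately elided, and the attribute names are assumptions.

```python
# Hedged sketch: pull the duplicated wolf/sheep logic into one base class.
import random


class Animal:
    """Shared behaviour for both species; subclasses only override feed()."""

    def __init__(self, energy, p_reproduce, energy_from_food):
        self.energy = energy
        self.p_reproduce = p_reproduce
        self.energy_from_food = energy_from_food

    def feed(self):
        raise NotImplementedError  # the only species-specific behaviour

    def step(self):
        self.energy -= 1  # moving costs energy
        self.feed()
        if self.energy > 0 and random.random() < self.p_reproduce:
            self.energy /= 2  # reproduction cost shared by both species


class Sheep(Animal):
    def feed(self):
        # Eat grass on the current cell (grid lookup elided in this sketch).
        self.energy += self.energy_from_food


class Wolf(Animal):
    def feed(self):
        # Eat a sheep on the current cell, if any (grid lookup elided).
        self.energy += self.energy_from_food
```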
Sounds good. Would you like to do both (pushing to ABMFrameworksComparison and speeding up WolfSheep) yourself?
Yes, I'm fine with doing both (although it is more about cleaning up the WolfSheep code; I have no idea about the performance impact, but it is probably small).
I think the first step is to simplify the WolfSheep at https://github.com/projectmesa/mesa-examples, and then all the other WolfSheep implementations (in Mesa's benchmarks and Agents.jl's benchmarks) should be updated to be almost the same as that reference implementation.