Integrate benchmarks and example models #2473
Conversation
Use the official example models for the benchmarks
Key changes and improvements in this merged version:

1. Used the experimental cell space features:
   - Switched to the more efficient `OrthogonalVonNeumannGrid`
   - Leveraged `CellAgent` and `FixedAgent` base classes
   - Used the cell's neighborhood property for movement
2. Implemented discrete event scheduling for grass regrowth:
   - Used the experimental `ABMSimulator` for event scheduling
   - Replaced the countdown-based grass regrowth with scheduled events
3. Modern Mesa practices:
   - Used `agents_by_type` for agent management
   - Employed `shuffle_do()` for efficient random activation
   - Proper initialization of the Model class with `super().__init__()`
   - Clear data collection setup with dedicated model reporters
4. Code organization:
   - Maintained separation between agents and model
   - Added detailed docstrings
   - Used property decorators for grass state management
   - Consistent style and naming conventions
5. Additional improvements:
   - Made grass optional while keeping full functionality
   - Improved type hints and property usage
   - More efficient agent selection and movement
   - Better encapsulation of agent behaviors
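To illustrate points 1 and 2, here is a minimal sketch of event-scheduled grass regrowth and cell-based movement, loosely following the wolf_sheep example. The experimental `cell_space` and `devs` APIs may differ between Mesa versions, and `grass_regrowth_time` is assumed to be set elsewhere:

```python
from mesa.experimental.cell_space import CellAgent, FixedAgent


class GrassPatch(FixedAgent):
    """Grass regrowth as a scheduled event instead of a per-step countdown."""

    @property
    def fully_grown(self):
        return self._fully_grown

    @fully_grown.setter
    def fully_grown(self, value: bool):
        self._fully_grown = value
        if not value:
            # model.simulator is assumed to be the experimental ABMSimulator
            # attached at model construction; it flips the flag back after
            # grass_regrowth_time instead of us decrementing a counter.
            self.model.simulator.schedule_event_relative(
                setattr,
                self.grass_regrowth_time,
                function_args=[self, "fully_grown", True],
            )


class Sheep(CellAgent):
    def move(self):
        # The cell's neighborhood property replaces manual coordinate math:
        # pick a random adjacent cell and move there.
        self.cell = self.cell.neighborhood.select_random_cell()
```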
Performance benchmarks:
The merged implementation includes several improvements:

1. Code Organization:
   - Separated into agents.py and model.py following Mesa best practices
   - Clear docstrings and comments throughout
   - Consistent code style
2. Modern Mesa Features:
   - Uses AgentSet's shuffle_do() for random activation
   - Proper initialization using super().__init__()
   - Direct access to model.agents
3. Improvements to the Boid Implementation:
   - Better vector normalization handling
   - Added tracking of average heading for statistics
   - More robust neighbor handling
   - Cleaner separation of the three flocking behaviors
   - Added parameter validation and documentation
4. Key Changes:
   - Simplified the step() method using AgentSet
   - Improved documentation and type hints
   - Added model statistics tracking
   - Made parameter names more descriptive
   - Better default parameters for stable flocking
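As an illustration of the "cleaner separation" and "better vector normalization" points, a sketch of how the three flocking behaviors might be combined; the function name and signature are illustrative, not the example's actual code:

```python
import numpy as np


def flocking_vector(boid, neighbors, cohere=0.03, separate=0.015, match=0.05):
    """Combine the three flocking behaviors into one new heading.

    Assumes `neighbors` is non-empty; see the early-exit discussion
    further down in this thread.
    """
    positions = np.array([n.position for n in neighbors])
    # Cohere: steer toward the mean position of the neighbors.
    cohere_vec = (positions.mean(axis=0) - boid.position) * cohere
    # Separate: steer away from neighbors closer than the separation distance.
    deltas = boid.position - positions
    distances = np.linalg.norm(deltas, axis=1)
    separate_vec = deltas[distances < boid.separation].sum(axis=0) * separate
    # Match: align with the mean heading of the neighbors.
    match_vec = np.mean([n.direction for n in neighbors], axis=0) * match
    new_direction = boid.direction + cohere_vec + separate_vec + match_vec
    # Robust normalization: guard against a zero-length vector instead of
    # dividing blindly.
    norm = np.linalg.norm(new_direction)
    return new_direction / norm if norm > 0 else boid.direction
```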
Merged the two implementations with the following improvements and best practices:

1. Code Organization:
   - Separated model and agent code into distinct files
   - Added comprehensive docstrings following Google style
   - Improved code organization and readability
2. Model Implementation:
   - Used Mesa 3.0's automatic agent management via `model.agents`
   - Used `shuffle_do()` for random agent activation
3. Agent Implementation:
   - Simplified agent code while maintaining functionality
   - Improved method documentation
   - Added clear separation of responsibilities between methods
4. Latest Mesa Best Practices:
   - Proper model initialization using `super().__init__(seed=seed)`
   - Use of `model.agents` instead of a scheduler
   - Clear attribute definitions and typing
   - Consistent code style following Mesa conventions
5. Performance Considerations:
   - Efficient use of list operations
   - Minimal object creation during runtime
   - Direct access to model properties

This implementation maintains all the core functionality while being more organized, better documented, and following current Mesa best practices. It uses only stable features and avoids experimental ones.

The main changes from the original implementations:

1. Unified the different versions of the Gini coefficient calculation
2. Added proper docstrings throughout
3. Removed duplicate code
4. Added the optional run_model method from one version
5. Simplified some method implementations
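A condensed sketch of a model written under these practices; the Gini formula is the standard one, but class names and parameters are abridged from the actual example:

```python
from mesa import Agent, Model


def compute_gini(model):
    """Standard Gini coefficient over agent wealth (the unified calculation)."""
    wealths = sorted(agent.wealth for agent in model.agents)
    n, total = len(wealths), sum(wealths)
    if total == 0:
        return 0.0
    cumulative = sum((i + 1) * w for i, w in enumerate(wealths))
    return (2 * cumulative) / (n * total) - (n + 1) / n


class MoneyAgent(Agent):
    def __init__(self, model):
        super().__init__(model)  # Mesa 3 registers the agent automatically
        self.wealth = 1

    def step(self):
        if self.wealth > 0:
            # Give one unit of wealth to a randomly chosen agent.
            other = self.random.choice(self.model.agents)
            other.wealth += 1
            self.wealth -= 1


class BoltzmannWealth(Model):
    def __init__(self, n=100, seed=None):
        super().__init__(seed=seed)  # proper seeded initialization
        for _ in range(n):
            MoneyAgent(self)

    def step(self):
        # Random activation without a scheduler: shuffle the AgentSet.
        self.agents.shuffle_do("step")
```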
Co-authored-by: Jan Kwakkel <j.h.kwakkel@tudelft.nl>
for more information, see https://pre-commit.ci
Ready for review. Note that wolf_sheep uses experimental features like an ABMSimulator and the Cell space, while the other models are basic models using only stable features. I added a lot of docstrings in the process (maybe too much, let me know). Rendered docs: https://mesa--2473.org.readthedocs.build/2473/examples.html
Performance benchmarks:
Makes it consistent with the naming of the other example models
just a quick clarifying question: this will break app.py for wolf-sheep until #2470 is merged. So, what is your envisioned timeline on both?
Performance benchmarks:
Fair, didn't think about that yet. We can do a few things:
It depends, I think, on whether #2470 will be API breaking or not and when it's ready. If it's non-breaking, it can go into 3.0.1.
As indicated in #2470, it won't be API breaking. It is also complete except for the startup problem. I just have to decide on how to address that. I might be able to put a monkey patch in place. Once #2291 is merged, we can then go back in and do it cleaner for 3.1. All of this would still be implementation changes, no API changes.
I propose merging this as is (if there are no other review comments), and following up with the visualisation in the SolaraViz PR. Then that PR can directly demonstrate how a model or app.py needs to be updated to work with such a visualisation (if at all). We will tag 3.0.1 when both are merged, so our stable docs keep working the whole time.
I'll try to take a look at the other examples later today.
At first I thought we shouldn't be so lenient with semantic versioning, since #2470 actually changes the API and thus should be 3.1. But I think both changes in #2470 can also be considered bug fixes. Removing the monkey patch is good since we then no longer change the underlying model. And supporting DEVS is something that should have also been possible in 3.0, so I am fine with releasing as 3.0.1.
And it would be great to get it through this weekend. But after that, let's be a bit more stringent with semantic versioning. No problem if we already reach something like 3.4 before the end of the year.
Agree on both points!
Just a clarifying question: when is something an API change? #2470 adds a keyword argument to
Official spec says:

> Given a version number MAJOR.MINOR.PATCH, increment the MAJOR version when you make incompatible API changes, the MINOR version when you add functionality in a backwards-compatible manner, and the PATCH version when you make backwards-compatible bug fixes.
But breaking things is the main thing you don't want to do. Considering:
We're fine doing it in a patch instead of a minor release. But yeah, something like a minor release a month, there's nothing wrong with that.
To be a bit more specific in this example: adding an additional keyword is changing the public API. But it's fully backwards compatible, so an x.Y.z change and not an X.y.z change (which would be breaking). Patch version changes (x.y.Z) are when the public API stays the same but the internal mechanism is changed in a way that achieves the actually documented behavior.
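A hypothetical illustration of that distinction (the function name is made up, not Mesa's actual API):

```python
# 1.0.0 -- the original public API:
def run_model(steps):
    """Run for the given number of steps."""


# 1.1.0 -- adding an optional keyword extends the public API but keeps every
# existing call working, so it's a minor (x.Y.z) bump, not a breaking (X.y.z) one:
def run_model(steps, simulator=None):
    """Run for the given number of steps, optionally under a simulator."""


# Fixing run_model's internals to match its documented behavior, without
# touching the signature, would be a patch (x.y.Z) bump.
```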
@@ -47,25 +54,49 @@ def __init__(
        self.cohere_factor = cohere
        self.separate_factor = separate
        self.match_factor = match
        self.neighbors = None

    def step(self):
        """Get the Boid's neighbors, compute the new vector, and move accordingly."""
why these extensive changes in this step method?
Mainly performance: many of the boids might not have any neighbors at a given time, which is where the early exit can add a lot of performance.
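The pattern under discussion, sketched against the stable ContinuousSpace API; attribute names like `direction` and `speed` are assumptions, not the example's exact code:

```python
def step(self):
    neighbors = self.model.space.get_neighbors(self.pos, self.vision, include_center=False)
    if not neighbors:
        # No neighbors: keep the current heading and skip the three
        # flocking computations entirely.
        self.model.space.move_agent(self, self.pos + self.direction * self.speed)
        return
    # ...otherwise combine cohere/separate/match and move as usual.
```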
looks fine to me
Will be a bit annoying depending on whether this one or #2470 is merged first, given that they both change wolf-sheep.
Thanks for the review! If you need any help with a rebase of #2470, let me know.
It was not very difficult, but thanks for the offer!
Performance benchmarks:
This PR integrates the benchmark models into the example models and runs the benchmarks on the example models.
The first commit moves over the benchmark scripts to run on the example models, the following commits merge the benchmark models into the example models one-by-one.
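Conceptually, that first commit means the benchmark configuration imports the example models directly. A rough sketch of what that might look like; the dictionary keys and parameter values are placeholders, not the real configuration file:

```python
from mesa.examples import BoltzmannWealth, WolfSheep

# Map each example model to the parameter sets the benchmark runner sweeps.
configurations = {
    BoltzmannWealth: {"small": {"seeds": 25, "steps": 100, "parameters": {"n": 100}}},
    WolfSheep: {"small": {"seeds": 25, "steps": 100, "parameters": {"width": 25, "height": 25}}},
}
```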