
mistral: improve ALM packing in LABs #1170

Draft: wants to merge 3 commits into master
Conversation

@Ravenslofty (Collaborator) commented Jun 5, 2023

The current ALM packing code overestimates LAB TD congestion because it does not account for signal sharing across ALMs.

Recently, I tried to resolve this by hashing the signal names, but that actually underestimated LAB TD congestion, because the TD-to-GOUT (ALM input) crossbar is not fully populated. The solution is to hash signal-and-ALM-input pairs, which still yields a clear reduction in LAB usage for attosoc, from 133 to 123 LABs.
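A minimal sketch of the pair-hashing idea described above, assuming hypothetical names (`count_td_pairs` and the plain `std::string` signal IDs are illustrative, not nextpnr API): the same signal on the same ALM input index is counted once across ALMs (it can share a TD line), while the same signal on a different input index is counted again, since the sparse TD-to-GOUT crossbar cannot share it.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// Estimate LAB TD congestion by counting distinct (signal, ALM-input-index)
// pairs instead of raw per-ALM fan-in. Hypothetical helper, not nextpnr code.
int count_td_pairs(const std::vector<std::pair<std::string, int>> &conns)
{
    // De-duplicating via a set models sharing: identical (signal, input)
    // pairs across ALMs consume only one TD line.
    std::set<std::pair<std::string, int>> pairs(conns.begin(), conns.end());
    return int(pairs.size());
}
```

For example, signal `a` feeding input 0 of two different ALMs counts as one TD line, but `a` feeding inputs 0 and 1 counts as two.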

I am a bit concerned about the performance impact here; while it is still "fine" for small designs, perhaps @gatecat has an alternative implementation based on the existing unique_input_count code, minus some incrementally-updated factor for signal sharing.

This code also does not take LD usage into account, which needs some further thought.

@gatecat (Member) commented Jun 5, 2023

I wonder if it makes more sense to do this incrementally: replace the pool with a key->count dict, where binds add the key and increment its count as necessary, and unbinds decrement the count and remove the key if the count is now zero.
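The incremental scheme could be sketched as follows, assuming illustrative names (`TdUsage`, `bind`, `unbind`, `congestion` are not existing nextpnr identifiers): a map from (signal, ALM-input) key to a reference count, updated in O(log n) per bind/unbind instead of rehashing the whole LAB.

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical incremental TD-congestion tracker: key -> count map.
// Binding a signal to an ALM input bumps its count; unbinding decrements
// and erases the key when the count reaches zero.
struct TdUsage {
    std::map<std::pair<std::string, int>, int> counts;

    void bind(const std::string &sig, int input)
    {
        ++counts[{sig, input}];
    }

    void unbind(const std::string &sig, int input)
    {
        auto it = counts.find({sig, input});
        if (it != counts.end() && --it->second == 0)
            counts.erase(it);
    }

    // Current congestion estimate: number of distinct TD lines in use.
    size_t congestion() const { return counts.size(); }
};
```

The map size at any moment equals the number of distinct (signal, input) pairs, so the congestion estimate stays consistent with the batch computation while avoiding a full recount on every placement move.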

@Ravenslofty (Collaborator, Author) commented
I figured you'd have a better way of doing this, but I'm glad the code got the general idea across.
