Fix byte-enable bug in TLM2/Axi library. #5
Conversation
Looking at the history, this function used to do that masking/shifting in all cases; it was later updated to only do this for all zeros and all ones. This is converting between Bluespec's TLM2 protocol and an AXI protocol (possibly a particular version of AXI from at least 10 years ago) -- I don't recall enough about either protocol to know if it's correct to remove this code. I've sent the question to someone who worked on it, and maybe they recall. If we do remove this code, I would suggest removing the other lines that are now unused. It seems like the body of the function just becomes:
I heard from Don about this code: The original intention was that users could choose not to specify the byte enables and the code would compute them (based on the burst size and the address alignment). In the later TLM3, this was properly specified with a Maybe-like type (TLMBEKind) with Specify and Calculate constructors; but in TLM2, this is interpreted from all zeros or all ones.
Don does note that the code's calculation only works for a bsize of 0 or 1 (8 bits or 16 bits) and is broken for larger sizes. Is that the issue that you ran into? He notes that the mask computation for Calculate in TLM3 looks like this:
let shift = 1 << pack(tlm_descriptor.b_size);
let mask = ~(all_ones << shift);
So one way to fix this code would be to keep the "magic" calculation mode and just fix the calculation. (Particularly if we were concerned about backwards compatibility for any existing users. But I don't know if anyone is using this?) Don also thinks that, since TLM2 doesn't have the tagged union, removing the "magic" encoding is probably best. If you agree, I'm happy to accept your PR -- I'd update it to remove the now-unused code, and we'd also need to rebase (with recent changes to remove trailing whitespace). And maybe add a comment (that this code used to interpret all-zeros and all-ones as a request to calculate the byte enable)?
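For concreteness, here is a small Python model (not the BSV source) of the TLM3-style Calculate computation described above. The function name, the default bus width, and the address-alignment shift are my own assumptions; only the `shift`/`mask` lines correspond directly to the quoted BSV.

```python
def byte_enable_mask(b_size: int, addr: int, bus_bytes: int = 8) -> int:
    """Python sketch of the TLM3-style 'Calculate' byte-enable mask.

    b_size encodes the burst size as log2(bytes per beat):
    0 -> 1 byte, 1 -> 2 bytes, 2 -> 4 bytes, ...
    The mask enables 2**b_size byte lanes, shifted to the beat's
    address alignment within the bus width (bus_bytes is assumed).
    """
    all_ones = (1 << bus_bytes) - 1
    shift = 1 << b_size                      # let shift = 1 << pack(tlm_descriptor.b_size);
    mask = ~(all_ones << shift) & all_ones   # let mask = ~(all_ones << shift);
    align = addr % bus_bytes                 # assumed: place mask on the addressed lanes
    return (mask << align) & all_ones

# Examples on an 8-byte bus:
print(hex(byte_enable_mask(0, 0)))   # 1-byte burst at offset 0 -> 0x1
print(hex(byte_enable_mask(1, 2)))   # 2-byte burst at offset 2 -> 0xc
print(hex(byte_enable_mask(3, 0)))   # full-width burst         -> 0xff
```

Note that `1 << b_size` (the byte count), not `b_size` itself, drives the mask width, which is consistent with Don's remark that a calculation using the small encoded sizes directly can happen to look right for bsize 0 or 1 and fail beyond that.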
I agree removing the magic is the right idea. That's what I had to do in an
environment where I was using Bluespec transactors that got requests from
external sources that used byte enables and expected them to be honored.
It sounds like you're saying that the only way for it to work for you was to remove the logic, but I'm not convinced that's true: if your packet had all ones and that was a legal value to have, then the magic (if its calculation were fixed) would calculate and return all ones. Thus, the magic should always honor your original packet, if it was well formed. Yes? The reason it didn't was a bug in its logic. (Or did you want to send meaningful packets with all zeros or misaligned non-zero enables?) I'm still OK with removing the magic logic, though.
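The claim above can be sanity-checked with a short self-contained Python sketch (the function name and the 8-byte bus width are assumptions, not the library's API): for a well-formed full-width packet, the fixed Calculate-style computation reproduces the all-ones enables, so the magic would honor the original request.

```python
def calc_mask(b_size: int, bus_bytes: int = 8) -> int:
    # Fixed TLM3-style calculation: enable 2**b_size byte lanes.
    all_ones = (1 << bus_bytes) - 1
    shift = 1 << b_size
    return ~(all_ones << shift) & all_ones

incoming = 0xFF                 # all-ones enables, full-width burst (b_size = 3)
recomputed = calc_mask(3)
assert recomputed == incoming   # the fixed magic preserves a well-formed packet
print(hex(recomputed))          # -> 0xff
```

The cases the magic cannot preserve are the literal ones: a packet whose enables are genuinely meant to be all zeros, or all ones paired with a narrower burst size, which is the remaining argument for removing the encoding entirely.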
I notice that TLMDefines and TLM2Defines also have a function for this. That's all fine, but I notice that its calculation is broken in the same way as the one we're removing here.
More comments from Don:
So the code shouldn't have been calculating wrong byte enables. Your problem with the original code, then, is that your system was passing (a) requests with all ones but where the burst size is smaller than that many bytes, or (b) requests with all zeros; and you wanted those byte enables to be preserved.
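Cases (a) and (b) can be illustrated with a small Python sketch of the magic mode (the names and bus width are mine, not from the library): in both cases the computed enables differ from what the requester sent, so honoring them requires bypassing the calculation.

```python
def magic_byte_enables(sent: int, b_size: int, bus_bytes: int = 8) -> int:
    """Sketch of the TLM2 'magic' encoding: all-zeros or all-ones
    byte enables are treated as a request to calculate them from
    the burst size; any other value passes through unchanged."""
    all_ones = (1 << bus_bytes) - 1
    if sent in (0, all_ones):
        shift = 1 << b_size
        return ~(all_ones << shift) & all_ones
    return sent

# (a) all ones, but a 1-byte burst: the magic shrinks the enables
print(hex(magic_byte_enables(0xFF, b_size=0)))  # -> 0x1, not 0xff
# (b) all zeros, meant literally: the magic turns lanes on anyway
print(hex(magic_byte_enables(0x00, b_size=1)))  # -> 0x3, not 0x0
# any other pattern is passed through untouched
print(hex(magic_byte_enables(0x0C, b_size=1)))  # -> 0xc
```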