Bug with pmpaddrX if pmpgran > 4 #1876

Closed
marcfedorow opened this issue Dec 16, 2024 · 8 comments
@marcfedorow
Contributor

this->val = val & ((reg_t(1) << (MAX_PADDR_BITS - PMP_SHIFT)) - 1);  // pmpaddr_csr_t::unlogged_write: only the high bits are masked off; the written low bits are stored as-is

According to the RISC-V privileged spec, in NAPOT mode bit log2(region_size)-3 of pmpaddrX is zero and the bits below it are ones.

When storing pmpaddr (pmpaddr_csr_t::unlogged_write), Spike doesn't clear the least-significant bits.
When reading pmpaddr (pmpaddr_csr_t::read), Spike ORs in the ones but doesn't clear bit log2(region_size)-3 either.

This causes the following bug:
Let pmpgran be 8.
Let pmpcfg0 be 0.
Write 0b00..0111 to pmpaddr0.
Write 0x11 (NA4) to pmpcfg0.
pmpcfg0 now reads as 0x19 (NAPOT), because pmpgran > 4.
Expected: pmpaddr0 is now 0b00..0110, i.e. the protected range is [0b011000..0b100000), region size = 8 bytes.
Actual: pmpaddr0 reads as 0b00..0111, i.e. the protected range is [0..0b1000000), region size = 64 bytes.
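
For concreteness, here is a minimal standalone sketch of the arithmetic above (decode_napot is a hypothetical helper, not Spike code), using the rule that k trailing ones in a NAPOT pmpaddr select a 2^(k+3)-byte region. It shows 0b0111 decoding to a 64-byte region at 0, while 0b0110 decodes to the 8-byte region at 0b011000.

#include <cstdint>
#include <cstdio>

// Hypothetical helper, not Spike code: decode a NAPOT-mode pmpaddr value
// into the byte range it selects.
struct napot_range { uint64_t base; uint64_t size; };

static napot_range decode_napot(uint64_t pmpaddr) {
  unsigned k = 0;                                    // count trailing ones
  while (k < 61 && (pmpaddr & (uint64_t(1) << k))) k++;
  uint64_t size = uint64_t(1) << (k + 3);            // region size in bytes
  uint64_t mask = (uint64_t(1) << (k + 1)) - 1;      // encoding bits: the ones plus the zero above them
  uint64_t base = (pmpaddr & ~mask) << 2;            // pmpaddr holds address bits [..:2]
  return {base, size};
}

int main() {
  napot_range actual   = decode_napot(0b0111);       // read back unchanged: 64-byte region at 0
  napot_range expected = decode_napot(0b0110);       // with bit 0 cleared: 8-byte region at 24 (0b11000)
  std::printf("0b0111 -> base %llu, size %llu\n", (unsigned long long)actual.base, (unsigned long long)actual.size);
  std::printf("0b0110 -> base %llu, size %llu\n", (unsigned long long)expected.base, (unsigned long long)expected.size);
}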

marcfedorow added a commit to marcfedorow/riscv-isa-sim that referenced this issue Dec 16, 2024
@aswaterman
Collaborator

The spec does not say that any particular bit must read as 0 in NAPOT mode. It only says that bits G-2:0 read as 1.

Not convinced there's a bug here.

@marcfedorow
Contributor Author

Priv spec 1.13, section 3.7.11 "Address matching", Table 19 (screenshot attached).

@aswaterman
Collaborator

That table is just explaining how to program the register to achieve a certain effect. It isn't stating that a particular bit is always 0 for a given PMP granularity.

@marcfedorow
Contributor Author

marcfedorow commented Dec 16, 2024

That is true.
This is why I clear the address bits below log2(pmpgran) on write, rather than clearing any particular bit on read.

pmpgran basically means that the hardware doesn't store the least-significant address bits (i.e. those bits are hardwired to zero). It would be reasonable for Spike not to store them either.

What the spec says is: if the NAPOT range is X bytes, then pmpaddr should read as (LSB to MSB) log2(X/8) ones, then a zero, then the address bits.
The written address may contain ones in the position I assume is a hardwired zero, and Spike only ORs in the ones but doesn't clear the bit above them. This leads to a situation where ones belonging to the address are read as ones belonging to the region size in Spike, while in hardware those bits would be hardwired to zero.
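
To make the point concrete, here is a simplified model of the spec's read rules (an illustrative sketch, not Spike's actual pmpaddr_csr_t::read). With pmpgran = 8, i.e. G = 1, the NAPOT branch ORs in nothing and bit 0 comes back exactly as stored.

#include <cstdint>

// Illustrative model of the read rules, not Spike's implementation.
// G is log2(granularity) - 2, so an 8-byte granularity means G = 1.
static uint64_t pmpaddr_read_model(uint64_t stored, unsigned G, bool napot) {
  if (napot && G >= 2)
    return stored | ((uint64_t(1) << (G - 1)) - 1);  // bits G-2:0 read as ones
  if (!napot && G >= 1)
    return stored & ~((uint64_t(1) << G) - 1);       // bits G-1:0 read as zeros
  return stored;                                      // bit G-1 is passed through as stored
}
// Example: pmpaddr_read_model(0b0111, 1, true) == 0b0111.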

@marcfedorow
Contributor Author

Should I add any other clarifications, or is this enough to merge #1877?

@aswaterman
Collaborator

There is no bit that is "presumed 0". The lowest-numbered bit that is capable of containing 0 in NAPOT mode (bit 0 in your example) is also capable of containing 1, i.e., it is a writable bit, not a hardwired bit. Otherwise, it wouldn't be possible to represent a greater-than-8-byte region when the PMP granularity is 8 bytes. Since it is a writable bit, it is software's responsibility to program this bit to 0 if it wants it to be 0, not Spike's responsibility.
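
A tiny check of that argument (illustrative only, assuming the standard trailing-ones NAPOT encoding):

#include <cassert>
#include <cstdint>

// The count of trailing ones k in a NAPOT pmpaddr selects a 2^(k+3)-byte region.
static uint64_t napot_size(uint64_t pmpaddr) {
  unsigned k = 0;
  while (k < 61 && (pmpaddr & (uint64_t(1) << k))) k++;
  return uint64_t(1) << (k + 3);
}

int main() {
  assert(napot_size(0b0110) == 8);   // bit 0 written as 0: minimal 8-byte region
  assert(napot_size(0b0101) == 16);  // bit 0 written as 1: 16-byte region
  // If bit 0 (bit G-1 for G = 1) were hardwired to 0, the 16-byte encoding
  // above could never be produced, so the bit must be writable.
}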

@marcfedorow
Contributor Author

marcfedorow commented Dec 17, 2024

Thank you for your patience.
The spec says:

Although changing pmpcfg.A[1] affects the value read from pmpaddr, it does not affect
the underlying value stored in that register—in particular, pmpaddr[G-1] retains its original value when
pmpcfg.A is changed from NAPOT to TOR/OFF then back to NAPOT

So I'm closing this issue and the related pull request, because Spike's behaviour is actually correct.
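
For anyone hitting this later, here is a minimal sketch of the model that quoted paragraph implies (an assumed illustration, not Spike's actual code): the raw written value is stored, and the mode-dependent masking is applied only on read, so pmpaddr[G-1] survives a NAPOT -> TOR/OFF -> NAPOT round trip.

#include <cstdint>

// Assumed illustration of "store raw, mask on read"; not Spike's implementation.
struct pmpaddr_model {
  uint64_t stored = 0;   // underlying register value, kept exactly as written
  unsigned G = 1;        // 8-byte PMP granularity
  bool napot = false;    // current pmpcfg.A[1]

  void write(uint64_t v) { stored = v; }   // no low-bit clearing on write

  uint64_t read() const {
    if (napot && G >= 2) return stored | ((uint64_t(1) << (G - 1)) - 1);
    if (!napot && G >= 1) return stored & ~((uint64_t(1) << G) - 1);
    return stored;
  }
};

// With G = 1: after write(0b0111), napot = false reads 0b0110, but switching
// napot back to true reads 0b0111 again, i.e. pmpaddr[G-1] retained its value.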

@aswaterman
Collaborator

Thanks @marcfedorow, glad we got to the bottom of it.
