Non-blocking cache implementation #166
Conversation
Couple of things to note:
- There's no guarantee that a response from the L2 cache (i.e. a refill) will take precedence over a new request from the LSU. To fix this, you might need to add a precedence rule between the L2 response port and the LSU request.
- If the L2 sends a refill response for cycle N, you obviously need to block the LSU from arbitrating. You've done that with the boolean `cache_refill_selected_`. But you can also do that by querying the pipeline: "have you been appended to this cycle?" This removes the need for the extra event to clear the new boolean.
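The two points above can be sketched together. This is a minimal, framework-free illustration (not the Sparta-based olympia code): the arbiter offers the L2 refill to the pipeline first so it always wins the cycle, and instead of a separate `cache_refill_selected_` boolean it simply asks the pipeline stage whether something was already appended. All names here are hypothetical stand-ins.

```cpp
#include <optional>
#include <string>

// Hypothetical request type; the real model passes MemoryAccessInfoPtr.
struct Request { std::string source; };

class CachePipelineSketch {
public:
    // Mimics querying the pipeline: "has something been appended this cycle?"
    bool isStageOccupied() const { return staged_.has_value(); }

    bool tryAppend(const Request & req) {
        if (isStageOccupied()) { return false; }  // lost arbitration this cycle
        staged_ = req;
        return true;
    }

    void advance() { staged_.reset(); }  // stand-in for the pipeline advancing

private:
    std::optional<Request> staged_;
};

// The refill is offered first, so it takes precedence over a new LSU request.
// Returns whether the LSU request (if any) was accepted; false means the LSU
// must retry on a later cycle.
inline bool arbitrate(CachePipelineSketch & pipe,
                      const std::optional<Request> & l2_refill,
                      const std::optional<Request> & lsu_req) {
    if (l2_refill) { pipe.tryAppend(*l2_refill); }
    if (lsu_req) { return pipe.tryAppend(*lsu_req); }
    return true;
}
```

Because occupancy is read straight off the pipeline stage, there is no extra flag to clear and no extra event to schedule.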
core/DCache.hpp (outdated):
```cpp
// To arbitrate between incoming request from LSU and Cache refills from BIU
// bool incoming_cache_refill_ = false;
MemoryAccessInfoPtr incoming_cache_refill_ = nullptr;
```
Instead of "remembering" the incoming cache refill, why not have the MemoryAccessInfoPtr class indicate that it's a refill?
core/DCache.cpp (outdated):
```cpp
const auto stage_id = static_cast<uint32_t>(PipelineStage::DATA_READ);
const MemoryAccessInfoPtr & mem_access_info_ptr = cache_pipeline_[stage_id];
ILOG(mem_access_info_ptr << " in read stage");
if (incoming_cache_refill_ == mem_access_info_ptr)
```
If the MemoryAccessInfo keeps track of whether it's a refill or not, this code can change to:
```cpp
if (mem_access_info_ptr->isRefill())
```
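A minimal sketch of that suggestion, assuming a pared-down `MemoryAccessInfo` (the real olympia class carries much more state): the request describes itself as a refill, so the read stage no longer needs to compare against a remembered `incoming_cache_refill_` pointer.

```cpp
#include <memory>

// Minimal stand-in for olympia's MemoryAccessInfo, extended with the
// refill flag the review suggests.
class MemoryAccessInfo {
public:
    void setIsRefill(bool v) { is_refill_ = v; }
    bool isRefill() const { return is_refill_; }

private:
    bool is_refill_ = false;
};
using MemoryAccessInfoPtr = std::shared_ptr<MemoryAccessInfo>;

// Read stage dispatch keyed off the request itself rather than a
// remembered pointer. Returns true when the refill path was taken.
inline bool handleDataRead(const MemoryAccessInfoPtr & mem_access_info_ptr) {
    if (mem_access_info_ptr->isRefill()) {
        // ... write the refill data into the cache line ...
        return true;
    }
    // ... normal load/store lookup path ...
    return false;
}
```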
core/DCache.cpp (outdated):
```cpp
if (!cache_refill_selected_)
{
    ILOG("Arbitration from refill " << memory_access_info_ptr);
    memory_access_info_ptr->setCacheState(MemoryAccessInfo::CacheState::MISS);
```
It's not really a miss, but a loss of arbitration. Does the LSU know to try again next cycle?
I have replaced it with RELOAD, as I think it is also used in the L2 cache for a similar situation.
The LSU retries using the replay buffer.
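The agreed-upon behavior can be illustrated with a small sketch, assuming simplified types (`CacheState`, `Access`, `ReplayBufferSketch` are all hypothetical stand-ins for the olympia classes): a request that loses arbitration is marked RELOAD rather than MISS, and the LSU parks it in a replay buffer to reissue on a later cycle.

```cpp
#include <cstddef>
#include <deque>

// Illustrative cache-state enum; the real MemoryAccessInfo::CacheState
// has more members.
enum class CacheState { NO_ACCESS, HIT, MISS, RELOAD };

struct Access { CacheState state = CacheState::NO_ACCESS; };

// The LSU side: RELOAD responses are deferred and retried later.
class ReplayBufferSketch {
public:
    void onCacheResponse(Access & access) {
        if (access.state == CacheState::RELOAD) {
            replay_.push_back(&access);  // not a real miss; just try again
        }
    }

    // Reissue one deferred access per cycle; nullptr when nothing is pending.
    Access* retryNext() {
        if (replay_.empty()) { return nullptr; }
        Access* a = replay_.front();
        replay_.pop_front();
        return a;
    }

    std::size_t pending() const { return replay_.size(); }

private:
    std::deque<Access*> replay_;
};
```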
```cpp
sparta::DataInPort<uint32_t> in_l2cache_ack_{&unit_port_set_, "in_l2cache_ack", 1};
```
Add a comment that this port receives an ack from the next level of memory. If there's any possibility of connecting the DCache directly to an L3 or the BIU, this name becomes confusing; consider renaming it if that possibility exists.
```cpp
sparta::DataInPort<MemoryAccessInfoPtr> in_l2cache_resp_{&unit_port_set_, "in_l2cache_resp", 1};
```
Add a comment as to what "resp" is: an ack, fill data, or something else.
core/DCache.cpp (outdated):
```cpp
ILOG("Load miss inst to LMQ; block address:0x" << std::hex << block_addr);
(*mshr_it)->setMemRequest(mem_access_info_ptr);
mem_access_info_ptr->setCacheState(MemoryAccessInfo::CacheState::MISS);
out_lsu_lookup_req_.send(mem_access_info_ptr);
```
- Move `out_lsu_lookup_req_.send(...)` out of both `if` statements, so there is only one call to the out port. This greatly simplifies debug.
- Remove the `return` statements. Also simplifies debug.
- Change the second `if` to `else if`.
- Add an `else` to enqueue the load.

All in the name of debugging and readability.
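The suggested control-flow shape can be sketched as follows. This is a hedged, self-contained illustration, not the actual DCache code: `MemAccess`, `lookupAndRespond`, and the branch contents are hypothetical, and the returned string stands in for the single `out_lsu_lookup_req_.send(...)` call at the end.

```cpp
#include <string>

enum class CacheState { HIT, MISS };

// Hypothetical per-request state for the sketch.
struct MemAccess {
    bool hit = false;         // cache lookup hit
    bool mshr_match = false;  // an MSHR entry already covers this block
    CacheState state = CacheState::HIT;
};

// Single-exit version: each branch only updates state; exactly one
// "send" happens at the end, which makes port traffic easy to trace.
inline std::string lookupAndRespond(MemAccess & acc) {
    if (acc.hit) {
        acc.state = CacheState::HIT;
    } else if (acc.mshr_match) {
        acc.state = CacheState::MISS;  // append to the existing MSHR entry
    } else {
        acc.state = CacheState::MISS;  // allocate an MSHR entry, enqueue load
    }
    // Single point of contact with the out port (stand-in).
    return acc.state == CacheState::HIT ? "hit" : "miss";
}
```

With one exit and one port call, a breakpoint on the final line sees every response, whichever branch produced it.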
The cache documentation is extremely helpful. If you have time, would you consider adding a diagram or flow chart of the expected communication between LSU <-> cache <-> L2/L3/BIU?
Thank you for improving the data cache!
Force-pushed from cfb23f0 to e00c124
Force-pushed from 5480862 to a4cbdf4
Force-pushed from a4cbdf4 to f475c00
@klingaard @jhsiaovt The PR is ready for review. I have added a test case to validate the arbitration logic.
core/DCache.hpp (outdated):
```cpp
sparta::utils::ValidValue<MemoryAccessInfoPtr> l2_mem_access_info_;
sparta::utils::ValidValue<MemoryAccessInfoPtr> lsu_mem_access_info_;
void arbitrate_l2_lsu_req_()
```
Can you change the function name to match the coding style?
My bad, I should have fixed it before. I will create a PR to add this linting style to the GitHub Actions.
@h0lyalg0rithm are you good to merge this? I need to account for your port changes in my VLSU PR.
Thank you for the changes and adding a test suite!
@aarongchan Yes, I am ready to merge this.
The implementation is based on previous work by Vineeth.
However, I changed it to use a single pipeline and reduced the number of events generated inside the DCache.
I have also removed the coalescing of MSHR requests (based on the block address); instead, each request allocates its own MSHR entry, and the same instruction is then sent to the next cache level for lookup in case of a miss.
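The MSHR policy described above can be sketched as follows. This is an illustrative stand-in under simplified assumptions (`MshrEntry`, `MshrSketch`, and `allocate` are hypothetical names, not the olympia API): every miss gets its own entry, with no search for an existing entry covering the same block address.

```cpp
#include <cstdint>
#include <vector>

// One entry per missing request; no coalescing on block address.
struct MshrEntry {
    uint64_t block_addr;
    uint32_t req_id;
};

class MshrSketch {
public:
    explicit MshrSketch(std::size_t capacity) : capacity_(capacity) {}

    // Returns false when the MSHR is full and the request must be replayed.
    bool allocate(uint64_t block_addr, uint32_t req_id) {
        if (entries_.size() >= capacity_) { return false; }
        // Deliberately no lookup for a matching block: each request
        // allocates its own entry and is forwarded to the next level.
        entries_.push_back({block_addr, req_id});
        return true;
    }

    std::size_t occupancy() const { return entries_.size(); }

private:
    std::size_t capacity_;
    std::vector<MshrEntry> entries_;
};
```

The trade-off is simplicity over bandwidth: two misses to the same block generate two requests to the next level instead of one.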
Microarchitecture details: https://docs.google.com/document/d/1HLlCkfZUtt6BafgVypS5pwS90zo4XOrGFOR1KIzYHLw/edit?usp=sharing