
Non blocking cache implementation #166

Merged

Conversation

h0lyalg0rithm (Collaborator)

The implementation is based on the previous work by Vineeth.
However, I changed it to use a single pipeline and reduced the number of events generated inside the DCache.
I have also removed the coalescing of the MSHR requests (based on the block address); instead, each request allocates its own MSHR, and the same instruction is then sent to the next cache level for lookup in case of a miss.

Micro architecture details https://docs.google.com/document/d/1HLlCkfZUtt6BafgVypS5pwS90zo4XOrGFOR1KIzYHLw/edit?usp=sharing
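The "no coalescing" policy described above can be sketched in isolation: every missing request allocates its own MSHR entry, even when another entry already covers the same block address. This is a minimal illustrative model, not the Olympia code; all names here are hypothetical stand-ins.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <optional>

// Hypothetical MSHR entry: one per request, not one per block address.
struct MshrEntry
{
    uint64_t block_addr;
    uint64_t req_id;
};

class MshrFile
{
  public:
    explicit MshrFile(std::size_t capacity) : capacity_(capacity) {}

    // Allocate unconditionally -- no search for an existing entry with a
    // matching block address (i.e. no coalescing). Returns the entry index,
    // or nullopt when the file is full and the request must be replayed.
    std::optional<std::size_t> allocate(uint64_t block_addr, uint64_t req_id)
    {
        if (entries_.size() == capacity_)
        {
            return std::nullopt;
        }
        entries_.push_back({block_addr, req_id});
        return entries_.size() - 1;
    }

    std::size_t size() const { return entries_.size(); }

  private:
    std::size_t capacity_;
    std::deque<MshrEntry> entries_;
};
```

Two requests to the same block therefore occupy two MSHR entries, and each miss is forwarded to the next cache level independently.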

@h0lyalg0rithm h0lyalg0rithm requested a review from klingaard April 22, 2024 16:32
@h0lyalg0rithm h0lyalg0rithm self-assigned this Apr 22, 2024

@klingaard klingaard left a comment


Couple of things to note:

  • There's no guarantee that a response from the L2 cache (i.e., a refill) will take precedence over a new request from the LSU. To fix this, you might need to add a precedence rule between the L2 response port and the LSU request port.
  • If the L2 sends a refill response for cycle N, you obviously need to block the LSU from arbitrating. You've done that with the boolean cache_refill_selected_, but you can also do it by querying the pipeline ("have you been appended to this cycle?"). That removes the need for the extra event to clear the new boolean.
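The arbitration rule in the first bullet can be sketched outside of Sparta: a refill response always wins over a same-cycle LSU request, and the loser stays pending for the next cycle. A minimal stand-alone model, with all types and names assumed for illustration:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical stand-in for MemoryAccessInfoPtr.
struct Access
{
    uint64_t addr;
    bool is_refill;
};

// Minimal arbiter: an L2 refill response takes precedence over a new LSU
// request in the same cycle; the loser remains pending and retries later.
class DCacheArbiter
{
  public:
    void captureRefill(const Access & a) { refill_ = a; }
    void captureLsuReq(const Access & a) { lsu_req_ = a; }

    // Returns the access that wins arbitration this cycle, if any.
    std::optional<Access> pick()
    {
        if (refill_)
        {
            auto winner = *refill_;
            refill_.reset();
            return winner;
        }
        if (lsu_req_)
        {
            auto winner = *lsu_req_;
            lsu_req_.reset();
            return winner;
        }
        return std::nullopt;
    }

    bool lsuPending() const { return lsu_req_.has_value(); }

  private:
    std::optional<Access> refill_;
    std::optional<Access> lsu_req_;
};
```

In the actual model, the same effect comes from the ordering of port delivery plus the "have you been appended" pipeline query, rather than from a separate arbiter object.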

core/DCache.hpp Outdated

// To arbitrate between incoming request from LSU and Cache refills from BIU
// bool incoming_cache_refill_ = false;
MemoryAccessInfoPtr incoming_cache_refill_ = nullptr;
Collaborator

Instead of "remembering" the incoming cache refill, why not have the MemoryAccessInfoPtr class indicate that it's a refill?
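The suggestion above can be sketched as a small addition to the access-info class itself: the refill fact travels with the request, so the DCache no longer needs a side member to remember it. Names below only loosely mirror the Olympia class.

```cpp
#include <cstdint>

// Hedged sketch: MemoryAccessInfo carries its own "this is a refill" flag,
// set by the DCache when the L2 response arrives.
class MemoryAccessInfo
{
  public:
    void setIsRefill(bool v) { is_refill_ = v; }
    bool isRefill() const { return is_refill_; }

    void setAddr(uint64_t addr) { addr_ = addr; }
    uint64_t getAddr() const { return addr_; }

  private:
    uint64_t addr_ = 0;
    bool is_refill_ = false;
};
```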

core/DCache.cpp Outdated
const auto stage_id = static_cast<uint32_t>(PipelineStage::DATA_READ);
const MemoryAccessInfoPtr & mem_access_info_ptr = cache_pipeline_[stage_id];
ILOG(mem_access_info_ptr << " in read stage");
if (incoming_cache_refill_ == mem_access_info_ptr)
Collaborator

If the MemoryAccessInfo keeps track of whether it's a refill or not, this code can change to:

if(mem_access_info_ptr->isRefill())

core/DCache.cpp Outdated
if (!cache_refill_selected_)
{
ILOG("Arbitration from refill " << memory_access_info_ptr);
memory_access_info_ptr->setCacheState(MemoryAccessInfo::CacheState::MISS);
Collaborator

It's not really a miss, but a loss of arbitration. Does the LSU know to try again next cycle?

@h0lyalg0rithm (Collaborator Author) May 10, 2024

I have replaced it with RELOAD, as I think it's also used in the L2 cache for a similar situation.
The LSU retries using the replay buffer.
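The distinction being discussed can be sketched as follows: RELOAD marks a request that lost arbitration (or is awaiting a refill) rather than a genuine tag miss, so the LSU knows to retry via the replay buffer instead of expecting an MSHR to have been allocated. The enumerator set below is an assumption, not the exact Olympia definition.

```cpp
// Hypothetical cache-state set, distinguishing a true miss from an
// arbitration loss that the LSU must replay.
enum class CacheState
{
    NO_ACCESS,
    HIT,
    MISS,   // tag miss: an MSHR was allocated, request forwarded to L2
    RELOAD  // lost arbitration / awaiting refill: retry via replay buffer
};

inline bool needsReplay(CacheState s) { return s == CacheState::RELOAD; }

inline bool needsMshr(CacheState s) { return s == CacheState::MISS; }
```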

@@ -61,8 +111,8 @@ namespace olympia

sparta::DataInPort<uint32_t> in_l2cache_ack_{&unit_port_set_, "in_l2cache_ack", 1};

Add a comment that this port receives an ack from the next level of memory. If there's any possibility of connecting the DCache to the L3 or BIU directly, then this name becomes confusing; consider changing it if that possibility exists.

sparta::DataInPort<MemoryAccessInfoPtr> in_l2cache_resp_{&unit_port_set_, "in_l2cache_resp", 1};

Add a comment as to what "resp" is: an ack, fill data, or something else.

core/DCache.cpp Outdated
ILOG("Load miss inst to LMQ; block address:0x" << std::hex << block_addr);
(*mshr_it)->setMemRequest(mem_access_info_ptr);
mem_access_info_ptr->setCacheState(MemoryAccessInfo::CacheState::MISS);
out_lsu_lookup_req_.send(mem_access_info_ptr);
@jhsiaovt May 6, 2024

All in the name of debugging and readability:

  • Move out_lsu_lookup_req_.send(...) out of both if statements, so there is only one call to the out port. This greatly simplifies debug.
  • Remove the return statements. This also simplifies debug.
  • Change the second "if" to "else if".
  • Add an "else" to enqueue the load.
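The restructuring suggested above can be sketched as a single-exit if / else if / else chain with one call to the out port. This is an illustrative stand-alone shape, not the Olympia handler; the types and helper names are assumptions.

```cpp
#include <cstdint>
#include <functional>
#include <utility>

enum class CacheState
{
    HIT,
    MISS,
    RELOAD
};

// Hypothetical stand-in for MemoryAccessInfoPtr.
struct Access
{
    uint64_t addr;
    CacheState state;
};

class LookupStage
{
  public:
    explicit LookupStage(std::function<void(const Access &)> send_to_lsu) :
        send_to_lsu_(std::move(send_to_lsu))
    {
    }

    // One if / else if / else chain, no early returns, and a single call to
    // the out port at the end -- one place to set a breakpoint.
    void handle(Access & a, bool tag_hit, bool mshr_available)
    {
        if (tag_hit)
        {
            a.state = CacheState::HIT;
        }
        else if (mshr_available)
        {
            a.state = CacheState::MISS; // MSHR allocated, forward to L2
        }
        else
        {
            a.state = CacheState::RELOAD; // no MSHR free: LSU must replay
        }
        send_to_lsu_(a);
    }

  private:
    std::function<void(const Access &)> send_to_lsu_;
};
```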

@jhsiaovt left a comment

The cache documentation is extremely helpful. If you have time, would you consider adding a diagram or flow chart of the expected communication between the LSU <-> cache <-> L2/L3/BIU?

Thank you for improving the data cache!

@h0lyalg0rithm h0lyalg0rithm force-pushed the non_blocking_cache branch 2 times, most recently from 5480862 to a4cbdf4 Compare May 17, 2024 11:26
@h0lyalg0rithm (Collaborator Author)

@klingaard @jhsiaovt The PR is ready for review; I have added a test case to validate the arbitration logic.

core/DCache.hpp Outdated

sparta::utils::ValidValue<MemoryAccessInfoPtr> l2_mem_access_info_;
sparta::utils::ValidValue<MemoryAccessInfoPtr> lsu_mem_access_info_;
void arbitrate_l2_lsu_req_()
Collaborator

Can you change the function name to match the coding style?

Collaborator Author

My bad, I should have fixed it earlier. I will create a PR to add this linting style to the GitHub Actions.

@aarongchan (Collaborator)

@h0lyalg0rithm are you good to merge this? I need to account for your port changes in my VLSU PR.

@jhsiaovt left a comment

Thank you for the changes and adding a test suite!

@h0lyalg0rithm (Collaborator Author)

> @h0lyalg0rithm are you good to merge this? I need to account for your port changes in my VLSU PR.

@aarongchan Yes, I am ready to merge this.

@h0lyalg0rithm h0lyalg0rithm merged commit 2f6020c into riscv-software-src:master Aug 2, 2024
5 checks passed