update L1_data_cache.md, .sv #109

Closed
wants to merge 6 commits into from
59 changes: 58 additions & 1 deletion Documentation/02_Complex_Module_Functions/03_L1_Data_Cache.md
@@ -1 +1,58 @@
(Note: This document is currently incomplete)
# L1 Data Cache Module Documentation

### Inputs
1. **`clk`**: System clock signal.
2. **`reset`**: Reset signal for initializing the cache.
3. **`request_address`**: A 32-bit input representing the memory address for read or write operations.
4. **`write_data`**: A 32-bit input representing the data to be written to the cache.
5. **`write_enable`**: An input signal to control write operations to the cache.

### Outputs
1. **`response_data`**: A 32-bit output representing the data read from the cache.
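
For reference, the ports above correspond to a module header along these lines (a sketch that mirrors the RTL included later in this PR; the body is omitted):

```systemverilog
module L1_data_cache #(
    parameter ADDRESS_LENGTH = 32,
    parameter CACHE_SIZE     = 16 * 1024,
    parameter BLOCK_SIZE     = ADDRESS_LENGTH,
    parameter ASSOCIATIVITY  = 4
)(
    input             clk,             // system clock
    input             reset,           // initializes LRU counters and dirty bits
    input             write_enable,    // high selects a write, low a read
    input      [31:0] request_address, // address for the read or write
    input      [31:0] write_data,      // data to write into the cache
    output reg [31:0] response_data    // data returned on a read
);
    // ... cache storage and control logic ...
endmodule
```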

## Functionality
The `L1_data_cache` module provides a data caching mechanism that stores frequently accessed data to reduce memory access latency. It operates as follows:

### On Posedge

The L1 Data Cache is a synchronous circuit, operating on a clock signal. On the positive edge (`posedge`) of the clock, the following operations are performed:

- **Initialization**: On reset (`reset` signal), the cache is initialized: the LRU counters are set to their initial (ascending) values and the dirty bits are cleared.

- **Write Operations**:
- When a write operation is enabled (`write_enable`), the module performs the following steps:
1. Determines the set index from the request address (address decoding is sketched after the read steps below).
2. Checks if the requested data is already in the cache.
3. If the data is present in the cache, it updates the cache with the new data, sets the dirty bit, and updates the LRU counter.
4. If the data is not in the cache, it first writes the evicted block back to memory if that block is dirty, then updates the tag, writes the data into the cache, sets the dirty bit, and updates the LRU counter.

- **Read Operations**:
- When a read operation is requested, the module performs the following steps:
1. Determines the set index based on the request address.
2. Searches the cache for the requested data.
3. If the data is found, it updates the response data and updates the LRU counter.
4. If the data is not in the cache (a cache miss), it can fetch the block from main memory and allocate it into the cache.
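
Both the write and read paths begin by decoding the request address into a set index and a tag. Below is a minimal, hypothetical standalone sketch of that decode; the module name is illustrative, but the parameter names and arithmetic mirror the RTL included later in this PR:

```systemverilog
// Hypothetical decode helper; the real RTL does this arithmetic inline in its always block
module cache_addr_decode #(
    parameter ADDRESS_LENGTH = 32,
    parameter CACHE_SIZE     = 16 * 1024,
    parameter BLOCK_SIZE     = ADDRESS_LENGTH,
    parameter ASSOCIATIVITY  = 4
)(
    input  [ADDRESS_LENGTH-1:0] request_address,
    output [31:0]               set_index,   // which set the address maps to
    output [31:0]               tag          // upper address bits compared on lookup
);
    localparam BLOCK_WIDTH = $clog2(BLOCK_SIZE);
    localparam NO_OF_SETS  = CACHE_SIZE / (BLOCK_SIZE * ASSOCIATIVITY);
    localparam INDEX_WIDTH = $clog2(NO_OF_SETS);
    localparam TAG_WIDTH   = ADDRESS_LENGTH - (BLOCK_WIDTH + INDEX_WIDTH);

    // Same arithmetic the RTL uses: strip the block offset, take the set index, keep the tag
    assign set_index = (request_address / BLOCK_SIZE) % NO_OF_SETS;
    assign tag       = request_address[ADDRESS_LENGTH-1 -: TAG_WIDTH];
endmodule
```

The set index selects one of `NO_OF_SETS` sets, and the tag is the value compared against the stored `cache_tags` on a lookup.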


### Registers
The module maintains cache state and metadata in several register arrays (declared roughly as sketched after this list):

- **`cache_data`**: A 3D array storing the data in the cache.
- **`lru_counter`**: A 2D array representing the LRU counters for cache sets and blocks.
- **`cache_tags`**: A 2D array storing the tag information for cache sets and blocks.
- **`dirty_bit`**: A 2D array indicating whether the data in each block is dirty (needs to be written back to memory).
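
Mirroring the RTL in this PR, these arrays can be declared roughly as follows (the data array is shown 2D, one word per block, as in the current RTL, where the 3D per-word form is commented out; widths follow the RTL parameters):

```systemverilog
// State arrays, indexed [set][way]
reg [31:0]                      cache_data  [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1];
reg [TAG_WIDTH-1:0]             cache_tags  [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1];
reg [ASSOCIATIVITY-1:0]         valid       [0:NO_OF_SETS-1];  // one valid bit per way
reg [ASSOCIATIVITY-1:0]         dirty       [0:NO_OF_SETS-1];  // one dirty bit per way
reg [$clog2(ASSOCIATIVITY)-1:0] lru_counter [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1]; // age, 0 = MRU
```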

### Combinational and Sequential Logic

- The L1 Data Cache uses both combinational and sequential logic. Combinational logic is used for address decoding, tag comparison, and hit/miss detection. Sequential logic is used to store cache lines, implement the LRU replacement policy, and track the write-back status of cache lines.
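
As a rough illustration of the hit/miss detection, here is a small, hypothetical combinational checker for a single set (names such as `way_tags` and `way_valid` are illustrative, not taken from the RTL); the RTL in this PR performs the equivalent comparison inside its clocked block:

```systemverilog
// Hypothetical combinational hit checker for one set; at most one way is expected to match
module hit_check #(
    parameter ASSOCIATIVITY = 4,
    parameter TAG_WIDTH     = 20,
    parameter CNT_W         = $clog2(ASSOCIATIVITY)
)(
    input  logic [TAG_WIDTH-1:0] req_tag,
    input  logic [TAG_WIDTH-1:0] way_tags  [ASSOCIATIVITY],
    input  logic                 way_valid [ASSOCIATIVITY],
    output logic                 hit,
    output logic [CNT_W-1:0]     hit_way
);
    always_comb begin
        hit     = 1'b0;
        hit_way = '0;
        // Compare the request tag against every valid way in parallel
        for (int w = 0; w < ASSOCIATIVITY; w++) begin
            if (way_valid[w] && (way_tags[w] == req_tag)) begin
                hit     = 1'b1;
                hit_way = CNT_W'(w);
            end
        end
    end
endmodule
```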


### Cache logic
The cache should use the following logic:

- **`Write allocate`**: On a write miss, the cache allocates a block (fetching it from main memory) and performs the write in the cache, rather than writing around the cache.
- **`Write back`**: On a write hit, the cache updates the data in the cache and sets the dirty bit. The data is written back to main memory only when the block is evicted or the cache is flushed.
- **`Look-through`**: The cache is checked first on every access; main memory is accessed only on a miss, at which point the missing block is fetched into the cache. The processor does not stall during this time, but the instruction that caused the miss is delayed.
- **`LRU`**: The LRU policy evicts the least recently used block from a set when a new block must be allocated (a minimal counter-update sketch follows this list).
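
A minimal, hypothetical sketch of the age-counter form of LRU (consistent with the counters in the RTL of this PR): each way in a set holds a small counter where 0 means most recently used; on an access the used way's counter is cleared and every way that was more recent than it ages by one, so the eviction victim is always the way with the largest counter.

```systemverilog
// Hypothetical per-set LRU counter update (module and signal names are illustrative)
module lru_update #(
    parameter ASSOCIATIVITY = 4,
    parameter CNT_W         = $clog2(ASSOCIATIVITY)
)(
    input  logic [CNT_W-1:0] cnt_in  [ASSOCIATIVITY],  // current age counters, 0 = MRU
    input  logic [CNT_W-1:0] used_way,                 // way accessed this cycle
    output logic [CNT_W-1:0] cnt_out [ASSOCIATIVITY]
);
    always_comb begin
        for (int w = 0; w < ASSOCIATIVITY; w++) begin
            if (w == used_way)
                cnt_out[w] = '0;                 // accessed way becomes most recently used
            else if (cnt_in[w] < cnt_in[used_way])
                cnt_out[w] = cnt_in[w] + 1'b1;   // ways younger than it age by one
            else
                cnt_out[w] = cnt_in[w];          // older ways keep their age
        end
    end
endmodule
```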


142 changes: 142 additions & 0 deletions rtl/L1_Data_Cache.sv
@@ -0,0 +1,142 @@
// Write HIT: Write-back: In a write-back cache, data is written to the cache and only later to the main memory when the cache line is replaced.
// Write MISS: Write allocate: when a write miss occurs, the cache line is loaded into the cache, and then the write operation is performed.
// Look-through: the cache is checked first; main memory is accessed only on a miss.
// LRU (Least Recently Used): the least recently used cache line is selected for replacement.

module L1_data_cache #(

parameter ADDRESS_LENGTH = 32,
parameter CACHE_SIZE = 16 * 1024,
parameter BLOCK_SIZE = ADDRESS_LENGTH,
parameter ASSOCIATIVITY = 4
)(

input clk, reset, write_enable,
input [31:0] request_address, write_data,
output reg [31:0] response_data
);
// Cache Configuration

localparam BLOCK_WIDTH = $clog2(BLOCK_SIZE);
localparam NO_OF_SETS = CACHE_SIZE / (BLOCK_SIZE * ASSOCIATIVITY);
localparam INDEX_WIDTH = $clog2(NO_OF_SETS);
localparam TAG_WIDTH = ADDRESS_LENGTH - (BLOCK_WIDTH + INDEX_WIDTH);
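// With these parameters the request address is interpreted as
// [ tag (TAG_WIDTH bits) | set index (INDEX_WIDTH bits) | block offset (BLOCK_WIDTH bits) ]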

// Cache data arrays, indexed [set][way]
// reg [31:0] cache_data [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1][0:BLOCK_SIZE/4-1];
reg [31:0] cache_data [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1];
reg [TAG_WIDTH-1:0] cache_tags [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1];
reg [ASSOCIATIVITY-1:0] valid [0:NO_OF_SETS-1]; // one valid bit per way
reg [ASSOCIATIVITY-1:0] dirty [0:NO_OF_SETS-1]; // one dirty bit per way
// Per-way age counters: 0 = most recently used, larger = older
reg [$clog2(ASSOCIATIVITY)-1:0] lru_counter [0:NO_OF_SETS-1][0:ASSOCIATIVITY-1];

// Helper function: find the least recently used way in a set.
// Declared automatic so the local variables get a fresh copy on every call.
function automatic integer get_lru_way(input integer set_index);
integer i;
integer lru_way;
begin
lru_way = 0;
// With 0 = most recently used and counters growing with age,
// the LRU way is the one with the largest counter.
for (i = 0; i < ASSOCIATIVITY; i = i + 1) begin
if (lru_counter[set_index][i] > lru_counter[set_index][lru_way])
lru_way = i;
end
return lru_way;
end
endfunction



integer set;
integer way;
integer hit_way;

always @(posedge clk) begin

if (reset) begin
for (set = 0; set < NO_OF_SETS; set = set + 1) begin
for (way = 0; way < ASSOCIATIVITY; way = way + 1) begin
lru_counter[set][way] <= way; // Initialize the LRU counter to ascending values
dirty[set][way] <= 1'b0;
valid[set][way] <= 1'b0;
end
end
end


else if (write_enable) begin
set = (request_address / BLOCK_SIZE) % NO_OF_SETS;
way = get_lru_way(set);
if (valid[set][way] && cache_tags[set][way] == request_address[31:32 - TAG_WIDTH]) begin
// Write hit in the selected way: update data, mark dirty, mark most recently used
cache_data[set][way] <= write_data;
dirty[set][way] <= 1'b1;
lru_counter[set][way] <= 'd0;
end
else begin
// Write miss: the selected way (the LRU way of this set) becomes the victim

// If the victim line is dirty, write it back to main memory first
if (dirty[set][way]) begin

// --->>Write the cache data back to the main memory at the corresponding address

// main_memory[cache_tags[set][way]] <= cache_data[set][way];
dirty[set][way] <= 1'b0; // Mark as not dirty after write-back
end

// Write allocate: install the new tag and data, mark valid, dirty, and most recently used
cache_tags[set][way] <= request_address[31:32 - TAG_WIDTH];
valid[set][way] <= 1'b1;
cache_data[set][way] <= write_data;
dirty[set][way] <= 1'b1;
lru_counter[set][way] <= 'd0;
end

end


else begin
set = (request_address / BLOCK_SIZE) % NO_OF_SETS;
hit_way = -1; // reset every read so a miss is detected correctly

// Find the hit way if it exists
for (way = 0; way < ASSOCIATIVITY; way = way + 1) begin
if (valid[set][way] && cache_tags[set][way] == request_address[31:32 - TAG_WIDTH]) begin
response_data <= cache_data[set][way];
lru_counter[set][way] <= 'd0;
hit_way = way;
end
end

// Handle cache miss
if (hit_way == -1) begin
// Here, you would typically fetch the data from a higher-level memory (e.g., main memory) and update the cache.
// You would also update cache_tags, cache_data, and LRU counters.

// For simplicity, let's assume you have fetched the data from memory and now need to update the cache.

// Set the LRU counter for the way that will receive the new data to 0 (it's the most recently used).
// lru_counter[set][way_to_replace] <= 'd0;

// Update cache_tags with the new tag for the way that received the data.
// cache_tags[set][way_to_replace] <= request_address[31:32 - TAG_WIDTH];

// Update cache_data with the fetched data.
// cache_data[set][way_to_replace] <= fetched_data;

// Now, way_to_replace holds the index of the way that received the new data.
// response_data <= fetched_data;
end

// On a hit, age the ways that were more recently used than the hit way
if (hit_way != -1) begin
for (way = 0; way < ASSOCIATIVITY; way = way + 1) begin
if (way != hit_way && lru_counter[set][way] < lru_counter[set][hit_way]) begin
lru_counter[set][way] <= lru_counter[set][way] + 1;
end
end
end
end


end
endmodule