| Date | Status |
| --- | --- |
| 9 November 2023 | Assigned |
| 17 November 2023 | Due, end of lab |
- describe the purpose and effects of using an on-board two-way set-associative system cache memory
- implement program timing to draw conclusions about program performance
- analyze programmatic use or purposeful disuse of a cache
- summarize the benefits and/or drawbacks of using a cache
- suggest situations in which cache use is not preferable
This program is located in hello_cache/main.c.
We've looked at this program...how many times?! The best part: it keeps getting more interesting. In this case, we're going to learn a few things about where programs are actually stored and how we can begin to understand program performance more intuitively.
This program is located in space_is_the_place/main.c.
Our programs benefit from access to some shared system resources that make fast execution possible. We'll learn more about how one shared resource (the cache) works, and look at a brief example of why it's important and how concepts like temporality and locality work.
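If it helps to see those two ideas in plain C before diving into the exercises, here's a minimal, standalone sketch (not part of the assignment code) contrasting an access pattern with good temporal and spatial locality against a strided one:

```c
#include <stdint.h>

#define N 1024

// Temporal locality: `sum` is reused on every iteration, so it stays resident.
// Spatial locality: `data` is walked in address order, so each cache-line fill
// brings in the next several elements "for free".
uint32_t sum_sequential(const uint32_t data[N]) {
    uint32_t sum = 0;
    for (int i = 0; i < N; i++) {
        sum += data[i];          // consecutive addresses: good spatial locality
    }
    return sum;
}

// The same total work, but with a large stride: almost every access lands on a
// different cache line, so the cache helps far less.
uint32_t sum_strided(const uint32_t data[N]) {
    uint32_t sum = 0;
    for (int stride = 0; stride < 16; stride++) {
        for (int i = stride; i < N; i += 16) {
            sum += data[i];      // jumps 64 bytes at a time: poor spatial locality
        }
    }
    return sum;
}
```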
Complete the programs in the folders described below, and use them to answer the questions in docs/report.md. Steps for each are included in the sections that follow.
As the saying goes, there's something wrong in the state of moonrock. One of our devices appears a little broken. Can you tell us what's wrong?
- Run the program contained in the bad_blink folder; it seems that something is a little slow -- what is it?
- Review the code and respond to the relevant questions in docs/report.md
- After reviewing the code and writing a bit, how would you fix it?
Hint: here, we're thinking largely about temporality; if we're not caching the entire sequence of instructions/data in the for loop, how might that affect a program's ability to function efficiently?
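If you want numbers to back up that intuition, one approach (a sketch only; it assumes the Pico SDK's time_us_64(), and the loop-body function below is a made-up stand-in, not something from the starter code) is to time a single pass of the suspect loop:

```c
#include <stdio.h>
#include <stdint.h>
#include "pico/stdlib.h"   // assumes the Pico SDK build used elsewhere in the course

// Hypothetical helper (not in the starter code): time one pass of whatever
// work the blink loop does, so unusually slow iterations become visible.
static uint64_t time_one_pass(void (*body)(void)) {
    uint64_t start = time_us_64();      // microsecond timestamp from the SDK
    body();                             // e.g. the body of the for loop
    return time_us_64() - start;        // elapsed microseconds
}

// Stand-in for the real loop body in bad_blink.
static void example_body(void) {
    sleep_ms(1);
}

int main(void) {
    stdio_init_all();
    printf("one pass took %llu us\n",
           (unsigned long long)time_one_pass(example_body));
    return 0;
}
```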
A step up from traditional arrays, matrix_mania engages in _two-dimensional arrays_. We're escaping our one-dimensional world and stepping into row-major and column-major territory. There are two functions that average a randomly-generated matrix. They do essentially the same thing, but in slightly different -- and consequential -- ways. Your goal(s):
- time each function
- determine the basic cache hit and miss rates for these functions, and
- add them to the table in docs/report.md
From this example, you should be able to draw some conclusions about why one function is much more efficient and faster than the other.
Hint: this has to do with data locality; is it possible that some of the data is harder to fetch efficiently because elements accessed one after another aren't stored near each other?
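As a point of reference, the difference usually comes down to loop order. This standalone sketch uses made-up names and dimensions (the assignment's two averaging functions presumably differ in a similar way) to show a row-major versus a column-major walk over the same row-major C array:

```c
#include <stdint.h>

#define ROWS 64
#define COLS 64

// Row-major traversal: C stores mat[r][c] with the columns of a row adjacent
// in memory, so the inner loop walks consecutive addresses and gets many
// cache hits per line fill.
double average_row_major(const int32_t mat[ROWS][COLS]) {
    int64_t sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += mat[r][c];
    return (double)sum / (ROWS * COLS);
}

// Column-major traversal of the same array: each access jumps a whole row
// (COLS * 4 bytes), so far more accesses land on a new cache line and the
// miss rate climbs.
double average_col_major(const int32_t mat[ROWS][COLS]) {
    int64_t sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += mat[r][c];
    return (double)sum / (ROWS * COLS);
}
```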
See the Suggestions below to challenge yourself to implement a Hack. As always, you are allowed to develop your own Hack to satisfy this stretch goal. Place the code for the Hack inline with the code in the corresponding file.
In order to receive credit for the Hack, you must fill out the hack.md file located in the docs folder.
Our space_is_the_place sum_array function is pretty standard. But what do we notice about the numbers array? Could we possibly rewrite some portion of it to make it perform just a little better?
Hint: here, we might think about data types; is there a better or worse data type for our numbers array?
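For instance, this sketch (which assumes the values stored in numbers are small enough to fit a narrower type; the array names below are made up, not from the starter code) shows why element width matters -- a smaller type packs more values into each cache line and into the 16K cache overall:

```c
#include <stdint.h>

#define COUNT 1024

// If the values are known to be small, a narrower element type means more
// elements fit per cache line (and in the cache as a whole), so a sequential
// sum touches fewer lines in total.
static uint8_t  numbers_small[COUNT];   // 1 byte per element
static uint32_t numbers_wide[COUNT];    // 4 bytes per element: 4x the lines

uint32_t sum_array_u8(void) {
    uint32_t sum = 0;
    for (int i = 0; i < COUNT; i++)
        sum += numbers_small[i];
    return sum;
}
```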
- Use averageMat_v1 for this hack.
First, let's figure out the maximum number of elements we think we can fit in our matrix. Calculate this for the heap size that's free at the beginning of program execution. getTotalHeap and getFreeHeap are both available to you.
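A rough way to set that calculation up might look like the following (the return types of getTotalHeap and getFreeHeap are assumed here to be byte counts; check the actual declarations in the starter code):

```c
#include <stddef.h>
#include <stdint.h>

// Provided by the assignment; exact signatures are assumed here.
extern uint32_t getTotalHeap(void);
extern uint32_t getFreeHeap(void);

// Bytes occupied by one matrix element (4 for an int32_t, for example) --
// adjust to match the element type your matrix actually uses.
#define ELEM_SIZE sizeof(int32_t)

// Upper bound on how many elements fit in the heap that is free right now.
// Allocator overhead means the real limit will be somewhat lower.
size_t max_matrix_elements(void) {
    uint32_t free_bytes = getFreeHeap();     // heap free at program start
    return free_bytes / ELEM_SIZE;           // elements, ignoring overhead
}
```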
There is a benefit to not caching our information -- we gain space. For example, you might figure out how many more rows and columns we can add to our matrix. Write a func_ptr_t for averageMat_v1 to place it out of cacheable range. Do we gain space? If so, how much? Recall that our cache is 16K.
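One way to approach this is sketched below, under the assumption that we're on an RP2040-style part where the same flash contents are mapped both at a cached XIP base and at an uncached alias; the addresses and the signature of averageMat_v1 here are assumptions, so match them to your hardware and to the real declaration in matrix_mania:

```c
#include <stdint.h>

// Assumed signature for the averaging function; adjust to match matrix_mania.
typedef double (*func_ptr_t)(void);
extern double averageMat_v1(void);

// Sketch only: on the RP2040, flash is visible at a cached XIP base
// (0x10000000) and an uncached alias (0x13000000). Re-basing the function's
// address into the uncached alias runs the same instructions without them
// ever occupying the 16K cache, leaving that space free for other data.
#define XIP_CACHED_BASE    0x10000000u
#define XIP_UNCACHED_BASE  0x13000000u

double call_uncached(void) {
    uintptr_t cached = (uintptr_t)&averageMat_v1;   // Thumb bit stays set
    func_ptr_t uncached =
        (func_ptr_t)(cached - XIP_CACHED_BASE + XIP_UNCACHED_BASE);
    return uncached();                              // executes from uncached flash
}
```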
Based on your system setup (refer to your hello-blinky assignment), you will need to switch out the .vscode folder in each exercise with the last working copy.
See our wiki's entry on "Configuring Assignments" for more information.