- In LambdaScheduler.py, added eviction policies by setting `self.eviction_policy` in the constructor of LambdaScheduler. [Link]
- In LambdaScheduler.py, in the `runInvocation()` method, added code to increment a function's frequency on invocation for the LFU-based policies, add to the LRU cache for the LRU policy, and calculate priority for the DUAL_GREEDY approach. [Link]
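The constructor wiring described above can be sketched roughly as follows. This is an illustrative outline, not the repository's actual code: the class name `MiniScheduler` and the picker bodies are placeholders, and only the `self.eviction_policy` / `EvictionFunc` pattern mirrors the description.

```python
# Hypothetical sketch of dispatching eviction policies by name in the
# constructor. Policy names mirror the ones discussed above; the picker
# signatures and bodies are assumptions, not the repo's exact API.
class MiniScheduler:
    def __init__(self, eviction_policy="RAND"):
        self.eviction_policy = eviction_policy
        pickers = {
            "RAND": self.RandomEvictionPicker,
            "LRU": self.LRUEvictionPicker,
        }
        # EvictionFunc is the function reference the scheduler calls on a miss
        self.EvictionFunc = pickers[eviction_policy]

    def RandomEvictionPicker(self, to_free):
        return []  # placeholder body

    def LRUEvictionPicker(self, to_free):
        return []  # placeholder body

s = MiniScheduler("LRU")
print(s.EvictionFunc == s.LRUEvictionPicker)  # → True
```

Keeping the policy-to-picker mapping in one dictionary makes adding a new policy a one-line change.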
- In Container.py, added `invoke_freq`, `init_time` and `priority` to the Container object. [Link]

```python
class Container:
    state = "COLD"

    def __init__(self, lamdata: LambdaData):
        self.metadata = lamdata
        self.invoke_freq = 1
        self.priority = 0
        self.init_time = lamdata.run_time - lamdata.warm_time
```
- Below the RandomEvictionPicker function, added one policy function for each value in the `self.eviction_policy` set. [Link]
- The example scripts `run_all_20.sh`, `run_all_50.sh` and `run_all_100.sh` make it convenient to run the a, b and c traces for 20, 50 and 100 functions together.
`code/ParallelRunner.py` runs multiple instances of `LambdaScheduler.py`, each of which is a self-contained simulation.
The simulation uses `Container.py` and `LambdaData.py` as data classes to track what goes on inside the simulation.
Inputs to the simulator are a set of functions and a trace of their invocations over a 24-hour period.
The trace is a simple list of `LambdaData` objects and float invocation times that is iterated over.
You shouldn't need to examine these pickle files directly, but you can create custom traces for debugging using the examples in `./code/support/TraceGen.py`.
There are many example scripts in `code/` showing how to run the simulator.
They run a specific trace we have supplied at a number of different memory levels to show how well the policy performed.
Results are then plotted for you and stored in `code/figs`.
Set up debugging at the very bottom of `LambdaScheduler.py`, with a pickle trace file or a custom trace from `TraceGen.py`.
Running the `LambdaScheduler.py` file will run this one trace at a specific memory size, and you can then debug it any way you wish.
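To see the shape of a hand-built debugging trace, here is a minimal sketch. The `namedtuple` below is a stand-in for the repo's `LambdaData` class, and its field names (`kind`, `mem_size`, `run_time`, `warm_time`) are assumptions; the actual class may differ, so check `LambdaData.py` or `TraceGen.py` for the real constructor.

```python
from collections import namedtuple

# Stand-in for the repo's LambdaData class; field names are assumptions.
LambdaData = namedtuple("LambdaData", ["kind", "mem_size", "run_time", "warm_time"])

a = LambdaData("func-a", 128, 1000, 100)
b = LambdaData("func-b", 256, 2000, 200)

# A trace is a list of (LambdaData, invocation_time) pairs, sorted by time.
trace = [(a, 0.0), (b, 0.5), (a, 1.0), (a, 1.5)]

for d, t in trace:
    print(f"t={t}: invoke {d.kind} ({d.mem_size} MB)")
```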
- Eviction API - which functions, their arguments, and what they do:
  - `cache_miss` - creates a new `Container` to run a function that was not pre-warmed; may evict non-running containers if necessary
  - `Eviction` - called by `cache_miss` if not enough memory exists. Calls the custom eviction function `EvictionFunc` to get a list of `Container`s to evict and removes them from the `ContainerPool`
  - `EvictionFunc` - a function reference set in the constructor of `LambdaScheduler` that executes a custom eviction function based on `eviction_policy`
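The miss-then-evict control flow can be sketched like this. Everything here is illustrative: `MiniPool` and the `(name, mem)` tuples stand in for the real scheduler and its `Container` objects, and the real code routes removals and additions through `RemoveFromPool`/`AddToPool` rather than touching the list directly.

```python
import random

# Illustrative control flow for cache_miss -> Eviction -> EvictionFunc.
# ContainerPool is modeled as a plain list; all internals are assumptions.
class MiniPool:
    def __init__(self, mem_capacity):
        self.mem_capacity = mem_capacity
        self.mem_size = 0
        self.ContainerPool = []          # (name, mem) pairs stand in for Containers
        self.EvictionFunc = self.RandomEvictionPicker

    def RandomEvictionPicker(self, to_free):
        # Pick random victims until at least to_free MB would be freed.
        victims, freed = [], 0
        pool = self.ContainerPool[:]
        random.shuffle(pool)
        for c in pool:
            if freed >= to_free:
                break
            victims.append(c)
            freed += c[1]
        return victims

    def Eviction(self, needed):
        for victim in self.EvictionFunc(needed):
            self.ContainerPool.remove(victim)   # RemoveFromPool in the real code
            self.mem_size -= victim[1]

    def cache_miss(self, name, mem):
        if self.mem_size + mem > self.mem_capacity:
            self.Eviction(self.mem_size + mem - self.mem_capacity)
        self.ContainerPool.append((name, mem))  # AddToPool in the real code
        self.mem_size += mem

p = MiniPool(256)
p.cache_miss("a", 128)
p.cache_miss("b", 128)
p.cache_miss("c", 128)   # over capacity, so one victim is evicted first
print(p.mem_size, len(p.ContainerPool))  # → 256 2
```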
- `LambdaData` - the information about a function: unique name, memory usage, and runtime. These are in the trace pickle file, so do not edit this class
- `Container` - represents a function in memory
  - `c.metadata` - a `LambdaData` object with the function information
- `RunningC` - a dictionary mapping each `Container` to a `(launch_time, finish_time)` tuple, holding all those functions that are currently running
- `ContainerPool` - all the `Container` objects active in the system, both running and warm
- `runInvocation` - the entrypoint for the scheduler
- `cleanup_finished` - removes those containers from `RunningC` that have finished running. Called in `runInvocation` before anything else is done
- `RemoveFromPool` - remove a `Container` from `ContainerPool`. You must call this function to ensure bookkeeping is correct
- `AddToPool` - add a `Container` to `ContainerPool`. You must call this function to ensure bookkeeping is correct
- `mem_capacity` - total memory the server has for functions
- `mem_size` - the amount of memory being used by functions
- `eviction_policy` - the eviction policy being used, a string
- `evdict` - accounting of the number of times each function has been evicted
- `capacity_misses` - functions dropped due to insufficient resources
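The bookkeeping fields above fit together roughly as the constructor below. This is a sketch of the state only, not the repository's constructor; the `defaultdict` choice and the class name `SchedulerState` are assumptions made for illustration.

```python
from collections import defaultdict

# Sketch of the scheduler state described above; names mirror the field
# list, but the types (e.g. defaultdict) are assumptions.
class SchedulerState:
    def __init__(self, mem_capacity, eviction_policy="RAND"):
        self.mem_capacity = mem_capacity          # total memory for functions
        self.mem_size = 0                         # memory currently in use
        self.eviction_policy = eviction_policy    # policy name, a string
        self.RunningC = {}                        # Container -> (launch_time, finish_time)
        self.ContainerPool = []                   # all active Containers, running and warm
        self.evdict = defaultdict(int)            # function name -> eviction count
        self.capacity_misses = defaultdict(int)   # dropped invocations per function
```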
An example picker for the `RAND` policy is provided in `RandomEvictionPicker`.
You will need to provide a custom eviction picker function, and in the constructor assign it to `EvictionFunc`.
You should also give it a policy name to go along with it, making it easy to differentiate the `RAND` policy from your new policy.
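As one example of what a custom picker might look like, here is a least-recently-used sketch written as a standalone function. The `(name, mem)` tuples, the `last_used` map, and the exact picker signature are assumptions for illustration; your real picker would live on the scheduler, walk `ContainerPool`, and skip running containers.

```python
def lru_eviction_picker(pool, last_used, to_free):
    """Pick least-recently-used (name, mem) entries until to_free MB are freed.

    A sketch of the picker contract: given how much memory must be freed,
    return the list of victims. Field names here are assumptions.
    """
    victims, freed = [], 0
    # Oldest last-use time first; the real code would skip running containers.
    for c in sorted(pool, key=lambda c: last_used[c[0]]):
        if freed >= to_free:
            break
        victims.append(c)
        freed += c[1]          # c[1] stands in for c.metadata.mem_size
    return victims

pool = [("a", 128), ("b", 256), ("c", 128)]
last_used = {"a": 5.0, "b": 1.0, "c": 3.0}
print(lru_eviction_picker(pool, last_used, 300))  # → [('b', 256), ('c', 128)]
```

In the constructor you would then map a new policy string, say `"LRU"`, to this picker when assigning `EvictionFunc`.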
Your other hook is `runInvocation`.
This function will allow you to track each request on arrival, know if an invocation was cold or warm, etc.
If you're looking to do prediction or general bookkeeping on each request, do it here.
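For instance, the kind of per-request bookkeeping one might accumulate inside `runInvocation` could look like the sketch below. None of these names come from the repository; `RequestTracker` and its fields are hypothetical, and in practice you would update such state directly on the scheduler object.

```python
from collections import defaultdict

# Hypothetical per-request bookkeeping of the kind one might update at the
# top of runInvocation(); all names here are illustrative assumptions.
class RequestTracker:
    def __init__(self):
        self.invocations = defaultdict(int)   # per-function invocation counts
        self.cold_starts = defaultdict(int)   # per-function cold-start counts
        self.last_seen = {}                   # function name -> last arrival time

    def record(self, name, t, was_cold):
        self.invocations[name] += 1
        if was_cold:
            self.cold_starts[name] += 1
        gap = t - self.last_seen.get(name, t)  # inter-arrival gap, 0 on first call
        self.last_seen[name] = t
        return gap

tr = RequestTracker()
tr.record("f", 0.0, True)
print(tr.record("f", 2.5, False))  # → 2.5
```

Inter-arrival gaps like the one returned here are a common input to keep-alive or priority predictions.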