I'm using nodecache-as-promised to cache some values that I would otherwise need to fetch from an expensive remote. In order to share the cache between my instances, I'm using the redis persistence middleware.
My understanding at the time was that the in-memory cache would stay at a fixed size, while the shared cache in Redis would be evicted only based on TTL. This appears not to be true: values get deleted from Redis whenever the in-memory LRU cache evicts a value due to its size limit.
The reason I'm thinking about this is that my machines have less available memory than I have space in Redis. In Redis I could store maybe 500,000 items, whereas on my application servers I would only want to fit maybe 100,000. If a value is requested that is not in the local in-memory cache, I'd be fine taking the hit of checking the Redis cache and then updating my local in-memory cache accordingly, as that's still much cheaper than fetching the value from the expensive remote.
Basically the idea is that the in-memory cache would contain a slice of the most recent part of the persistent cache.
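To make the idea concrete, here is a rough sketch of the lookup order I have in mind: memory first, then Redis, then the remote, with local eviction never touching Redis. All the names here (`Lru`, `redisGet`, `fetchRemote`, the `Map`-backed stand-ins) are illustrative assumptions, not the library's actual API or middleware internals.

```javascript
// Minimal LRU built on Map insertion order (stand-in for the real in-memory cache).
class Lru {
  constructor(maxSize) { this.maxSize = maxSize; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as most recently used
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict locally only -- deliberately do NOT delete from Redis here.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

// Fake stand-ins for Redis and the expensive remote, for illustration only.
const redis = new Map();
async function redisGet(key) { return redis.get(key); }
async function redisSet(key, value) { redis.set(key, value); }
async function fetchRemote(key) { return `remote:${key}`; }

const localLru = new Lru(100000);

// Proposed lookup order: local memory -> shared Redis -> expensive remote.
async function get(key) {
  const local = localLru.get(key);
  if (local !== undefined) return local;

  const shared = await redisGet(key); // cheap compared to the remote
  if (shared !== undefined) {
    localLru.set(key, shared); // warm the local "slice" of the shared cache
    return shared;
  }

  const value = await fetchRemote(key);
  await redisSet(key, value); // TTL handling omitted for brevity
  localLru.set(key, value);
  return value;
}
```

With this shape, Redis entries would only ever expire by TTL, and the local LRU just holds the hottest subset.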
Does this make sense? I would consider developing this, but you probably have more experience with this, so maybe there are things I'm not thinking about.