
Cache filters #3150

Open
szuecs opened this issue Jul 10, 2024 · 2 comments

Comments

@szuecs
Member

szuecs commented Jul 10, 2024

Is your feature request related to a problem? Please describe.

Cache API responses for a defined amount of time.

Describe the solution you would like

I imagine cache("lru", "1m") caching the response for a given request for 1 minute and responding with the same headers and body as the original response. It would use "lru" as the eviction policy.
It should work somewhat like more mature cache servers such as Varnish.

I think we should support multiple cache eviction algorithms #956
Later we could also add a groupcache shared across the whole skipper fleet, so we get a huge combined in-memory cache and use something like consistent hashing to find the location of the data #628.

Describe alternatives you've considered (optional)

A CDN or static blob storage (e.g. S3), but neither is dynamic.
Varnish has nice caching capabilities.

@AlexanderYastrebov
Member

We have Redis integration, so maybe we can implement it using Redis commands or a script (think of a redis filter).

@szuecs
Member Author

szuecs commented Jul 11, 2024

Yes, maybe, but this issue does not say anything about implementation specifics, so it could be Redis but does not have to be.
In our case it likely makes sense to use the memory of the skipper-ingress nodes, which are big compared to the Redis nodes and do not really use much memory. Technically it makes more sense to do this in skipper code than in Redis via Lua scripts or commands. Of course Redis already has a lot of algorithms, like expiration, that cry out for reuse.
If you asked me, I would try to stay as close to Varnish as possible (which does not mean Redis or no Redis), because it is likely the best OSS cache server available, and it has been around long enough that we can learn from it.
