
Performance issue #271

Open
Napolitain opened this issue Sep 10, 2024 · 2 comments

Comments


Napolitain commented Sep 10, 2024

Comparing functools.cache and cashews with the in-memory cache:

  • functools takes 500 ns to return (sync function)
  • cashews takes 75000 ns to return (async function)

For reference, this is what I used for cashews:

def function_sync(...):
    slow_stuff

@cache(key="completion:{prompt}:{system_prompt}:{completion_mode}", ttl="1d")
async def function(...):
    return await asyncio.to_thread(function_sync, ...)

and this for functools (unbounded, no LRU eviction):

@cache
def function_sync(...):
    slow_stuff

My goal was mostly to be able to do asyncio.gather(f1, f2, ...)

Do you know if the timings can be improved? It is not that important to me for now, as ns and µs don't really matter in my kind of apps.
But I see you mention FastAPI, which definitely should support the fastest possible caching.
I assume the slower timings are mostly caused by the expiration algorithms, right?
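The gather-friendly setup described above can also be sketched without cashews, using only functools.cache plus asyncio.to_thread (the function names here are illustrative, not from the project):

```python
import asyncio
import functools
import time


@functools.cache
def function_sync(x):
    # stand-in for the slow work being cached
    time.sleep(0.01)
    return x * 2


async def function(x):
    # run the (cached) sync function in a worker thread so calls can overlap
    return await asyncio.to_thread(function_sync, x)


async def main():
    # the concurrent fan-out the comment above is after
    return await asyncio.gather(function(1), function(2), function(3))


results = asyncio.run(main())
print(results)  # -> [2, 4, 6]
```

The trade-off is that functools.cache gives no TTL or invalidation; it only solves the "run sync work concurrently" part.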

@Krukov
Owner

Krukov commented Sep 14, 2024

Hello,

I think it is normal that cashews is slower than functools.cache (which is an LRU cache, by the way), because there are big differences between them. functools.cache doesn't have:

  • a TTL
  • execution control: with cashews you can disable the cache with one line
  • middlewares
  • key control
  • different backends
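The TTL point alone explains some of the gap. A minimal sketch of a TTL-aware memoizer (a hypothetical helper, not cashews' actual implementation) shows the extra bookkeeping every hit pays compared to functools.cache:

```python
import functools
import time


def ttl_cache(ttl_seconds):
    """Memoize, but expire entries after ttl_seconds (illustrative only)."""
    def decorator(fn):
        store = {}  # args -> (value, expiry timestamp)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now < hit[1]:
                return hit[0]  # fresh entry: cache hit
            # miss or expired: recompute and stamp a new expiry
            value = fn(*args)
            store[args] = (value, now + ttl_seconds)
            return value

        return wrapper
    return decorator


@ttl_cache(ttl_seconds=60)
def slow_square(x):
    time.sleep(0.01)  # stand-in for slow work
    return x * x
```

Even this toy version does a clock read and an expiry comparison on every hit, on top of the dict lookup that is all functools.cache needs.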

The main goal of cashews is to provide a better interface for using a cache in a project, and the focus is NOT on the in-memory backend but on Redis. FastAPI is just the name of a framework: it is based on Starlette, and plain Starlette is probably faster. So cashews, as a web-framework-agnostic cache library, aims to be fast in comparison with frameworks offering similar functionality.

For better performance I can suggest:

  1. do not use Python
  2. stay with functools.cache
  3. use the low-level API:
import time
import asyncio
from decimal import Decimal
from functools import cache as fcache

from cashews.backends.memory import Memory

cache = Memory()
READ_COUNT = 100


async def func(a):
    key = f"k:{a}"
    cached = await cache.get(key)
    if cached is not None:
        return cached
    await asyncio.sleep(0.1)
    await cache.set(key, Decimal("10"), expire=100)
    return Decimal("10")


@fcache
def ffunc(a):
    time.sleep(0.1)
    return Decimal("10")


async def measure_cashews():
    # await cache.clear()

    start = time.perf_counter()
    await func(10)
    write = time.perf_counter()
    print("cashews: write",  (write-start) - 0.1)
    for _ in range(READ_COUNT):
        await func(10)
    print("cashews: done",  (time.perf_counter()-write))


def measure_functools():
    start = time.perf_counter()
    ffunc(10)
    write = time.perf_counter()
    print("functools: write", (write - start) - 0.1)

    for _ in range(READ_COUNT):
        ffunc(10)
    print("functools: done", (time.perf_counter() - write))


asyncio.run(measure_cashews())
measure_functools()
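To put the numbers in perspective, part of the gap is fixed coroutine overhead rather than anything cashews does. A small stdlib sketch measures the cost of merely awaiting an empty coroutine, which is a lower bound for any async cache call:

```python
import asyncio
import time


async def noop():
    # empty coroutine: a floor on what any awaited cache call can cost
    return None


async def main():
    n = 100_000
    start = time.perf_counter()
    for _ in range(n):
        await noop()
    # average nanoseconds per awaited call
    return (time.perf_counter() - start) / n * 1e9


per_call_ns = asyncio.run(main())
print(f"bare await overhead: {per_call_ns:.0f} ns")
```

A sync dict lookup never pays this per-call event-loop cost, so an async cache can't match functools.cache on raw hit latency regardless of its expiration logic.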

@Krukov
Owner

Krukov commented Oct 4, 2024

JFYI, I dug deeper into why the performance is so bad. I found a few issues that I quickly fixed (here), and a few more that could be fixed but would require a lot of changes. I plan to improve performance in the next major version.
