Warm up runpod workers #90
Comments
I'm not an expert, but one potential workaround could be to run a Python script that preloads the model into RAM before starting ComfyUI. Note that if you're using RunPod serverless, what you refer to as a "new worker" is essentially a worker that has had your Docker image pulled onto it. Nothing else is actually loaded onto the machine, and the "new worker" isn't online yet; it's still offline. As a result, preloading anything beyond the image isn't possible. You might want to set an "active worker" that always runs and keeps the model loaded.
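For the "preload into RAM before starting ComfyUI" idea, here's a minimal sketch. It just reads the checkpoint files so the OS page cache holds them before ComfyUI starts; the model directory and the ComfyUI launch command are assumptions, not something defined by this repo.

```python
# warm_cache.py -- hypothetical sketch: pull model files into the OS page cache
# before launching ComfyUI, so the first load reads from RAM instead of disk.
# The MODEL_DIR path and the launch command below are assumptions; adjust them
# to match your own image layout and entrypoint.
import pathlib
import subprocess

MODEL_DIR = pathlib.Path("/comfyui/models/checkpoints")  # assumed location

def preload(path: pathlib.Path, chunk_size: int = 64 * 1024 * 1024) -> None:
    """Read the file in large chunks so the kernel caches its pages."""
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass

for model in MODEL_DIR.glob("*.safetensors"):
    print(f"Preloading {model.name} ({model.stat().st_size / 1e9:.1f} GB)")
    preload(model)

# Start ComfyUI afterwards (command is an assumption; match your own setup).
subprocess.run(["python", "/comfyui/main.py", "--listen", "0.0.0.0"], check=True)
```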
Hey, I'm interested in doing some warmup exercise as well! Maybe some kind of external pings to the endpoint? But there's no guarantee they'll hit a "cold" worker 🤔 Also, just wanted to ask: how are you guys getting the models into the container? I tried adding them in the Dockerfile, but the build just takes extremely long (1-2+ hours!), so I ended up using a network volume, but that restricts my workers to one DC and limits availability...
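For the external-ping idea, a minimal sketch under stated assumptions: a small script that periodically posts a tiny job to the endpoint's `runsync` route. The endpoint ID, API key handling, and the `{"warmup": true}` input flag (which your handler would need to short-circuit on) are placeholders, and, as noted above, Runpod decides which worker receives the request, so there's no guarantee the ping lands on a cold one.

```python
# ping_endpoint.py -- hypothetical "keep warm" ping loop.
# Assumes the RunPod serverless REST API shape (POST /v2/<endpoint_id>/runsync
# with a Bearer token); ENDPOINT_ID, API_KEY, interval, and the warm-up payload
# are all placeholders to replace with your own values.
import os
import time
import requests

ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]   # assumed env var
API_KEY = os.environ["RUNPOD_API_KEY"]           # assumed env var
URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
PING_INTERVAL_SECONDS = 300  # every 5 minutes; tune to your traffic pattern

while True:
    try:
        resp = requests.post(
            URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"input": {"warmup": True}},  # handler should return early on this flag
            timeout=120,
        )
        print(f"Ping status: {resp.status_code}")
    except requests.RequestException as exc:
        print(f"Ping failed: {exc}")
    time.sleep(PING_INTERVAL_SECONDS)
```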
I have resorted to using active workers as I couldn't figure out a solution. Since I'm only using SD 1.5 I can get away with a low-VRAM GPU, plus there's a 30% discount for active workers. Yeah, I bake the models into the Docker image; it takes a while to build, but then I believe it's faster than running off a network volume.
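For reference, one common way to bake models into the image is a download script executed at build time (e.g., from a `RUN` step in the Dockerfile) so the weights land in an image layer. The sketch below is hypothetical; the URLs and the ComfyUI checkpoint path are placeholders.

```python
# download_models.py -- hypothetical build-time script, invoked from a
# `RUN python download_models.py` step so the checkpoints are baked into
# the image. The URL and target path are placeholders.
import pathlib
import urllib.request

MODELS = {
    # "filename": "download URL"  (placeholders -- use your own model sources)
    "sd_v1-5.safetensors": "https://example.com/path/to/sd_v1-5.safetensors",
}
TARGET_DIR = pathlib.Path("/comfyui/models/checkpoints")  # assumed ComfyUI layout

TARGET_DIR.mkdir(parents=True, exist_ok=True)
for name, url in MODELS.items():
    dest = TARGET_DIR / name
    if dest.exists():
        continue
    print(f"Downloading {name} ...")
    urllib.request.urlretrieve(url, str(dest))
    print(f"Saved {dest} ({dest.stat().st_size / 1e9:.1f} GB)")
```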
ComfyUI on Runpod Serverless is a pretty dumb idea. ComfyUI takes several minutes to boot up, which makes it unsuitable for any serverless architecture.
What is the point of using serverless if your workers are always active? Runpod Serverless pricing is more expensive than their normal on-demand and spot instances. Raising the price and then giving a 30% discount. 👎
Hi guys! Did you find a solution that's easy to set up? Needing to send 1-2 requests after X time to warm up my workers is not suitable for my app.
I believe Runpod are working on a solution; hopefully it should be out soon.
Your approach to serverless seems a bit incorrect. The principle of serverless is to provide an on-demand solution for requests and request surges. There's no need to scale your GPUs manually: serverless "spins up" machines for you when the number of requests increases. However, if you don't have traffic, you can't "warm up" your worker, because serverless workers (and GPUs) are shared among all users. This means the model you want preloaded on your worker may be replaced (i.e., unloaded) by another user's workload when your instance receives no requests for some time. In that case your GPU goes into throttle mode, meaning it is being used by another user. The only solutions to this problem are:
It works for me, but maybe it isn't optimal. I have several active workers since I have constant requests throughout the day; this ensures 90% of my customers receive a fast response. Then, when there's a surge, my idle workers boot up. Obviously there's a delay while the worker loads the models, but it isn't too bad. However, Runpod say they will deploy a 'priority flashboot' that will preload models etc. once a worker is assigned, so any idle assigned workers should run ComfyUI instantly.
Good news, I hope they implement this feature soon.
Problem: when a new worker gets assigned, the first run always takes significantly longer because the models need to be loaded into GPU memory (around 45 seconds in my case). Subsequent runs are much faster since the models have been cached (15 seconds in my case). Runpod is pretty good at caching, but workers do come and go, and occasionally I'll have to wait a long time for my image if I'm dealing with a new worker.
It would be great if, every time a worker gets assigned, it automatically ran a warm-up workflow that caches the models. That way, all API calls would be rapid.
I just can't figure out how to trigger the workflow once the docker image has been pulled.
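One possible angle, sketched below under stated assumptions: the RunPod serverless handler script only starts once the image has been pulled and the container boots, so code placed before `runpod.serverless.start(...)` effectively runs once per assigned worker. The sketch waits for the local ComfyUI server, queues a small warm-up workflow against ComfyUI's `/prompt` endpoint so the checkpoints get loaded, and only then registers the real handler. The `warmup_workflow.json` file, the port, and the handler body are assumptions, and (as noted in the comments above) whether the models stay resident while the worker idles depends on Runpod's caching/flashboot behaviour.

```python
# handler.py -- hypothetical sketch: run a warm-up workflow once, when the
# worker process starts (i.e., after the image has been pulled and the
# container boots), before registering the real handler with the RunPod SDK.
# Assumes ComfyUI is already running locally on port 8188 and that
# warmup_workflow.json is a small workflow that loads your checkpoints.
import json
import time
import requests
import runpod

COMFY_URL = "http://127.0.0.1:8188"

def wait_for_comfy(timeout: int = 300) -> None:
    """Poll ComfyUI until its HTTP API responds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            requests.get(COMFY_URL, timeout=2)
            return
        except requests.RequestException:
            time.sleep(1)
    raise RuntimeError("ComfyUI did not come up in time")

def run_warmup() -> None:
    """Queue a tiny workflow so the models get loaded into GPU memory."""
    with open("warmup_workflow.json") as f:  # assumed file baked into the image
        workflow = json.load(f)
    requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow}, timeout=30)

def handler(job):
    # ... your normal job logic here ...
    return {"status": "ok"}

wait_for_comfy()
run_warmup()  # runs once per worker start, before the first real job arrives
runpod.serverless.start({"handler": handler})
```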