Replies: 3 comments 1 reply
-
Ok.
-
Added #298 to track this.
-
Are you still interested in taking a different mounting strategy? If so, can you provide more details on what it would solve for you over the existing implementation?
-
I'm trying to use kubeai to set up a number of LLMs on our Kubernetes cluster. I would like the models stored on an NFS file system that has a fast connection to the nodes, so it would be well suited as model storage.

I've tried setting `cacheProfile: standard-filestore` in the model configs, but this generates a single 10 Gi volume claim, which is never enough to hold the models. I would really like to avoid re-downloading the models or hosting them on the individual node drives.
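For reference, this is roughly what I'm using (a sketch; the storage class name follows the kubeai Filestore caching example and would be swapped for our NFS-backed RWX class):

```yaml
# Helm values (sketch): defines the cache profile the models reference.
cacheProfiles:
  standard-filestore:
    sharedFilesystem:
      storageClassName: "standard-rwx"  # placeholder: our NFS-backed RWX StorageClass
```

and in each Model manifest (the model name, URL, and resource profile here are placeholders):

```yaml
apiVersion: kubeai.org/v1
kind: Model
metadata:
  name: llama-3.1-8b-instruct   # placeholder
spec:
  features: [TextGeneration]
  url: hf://meta-llama/Llama-3.1-8B-Instruct   # placeholder
  engine: VLLM
  cacheProfile: standard-filestore   # points at the shared cache profile above
  resourceProfile: nvidia-gpu-l4:1   # placeholder
```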
When I ran vLLM myself, I simply had a PVC mounted at `/container-home/` on the vLLM container, set `HF_HOME` on the containers to `/container-home/huggingface`, and set `VLLM_CONFIG_ROOT` to `/container-home/config`. This allowed different containers to access the same model files and avoided unnecessary storage on the local machines. Is something like this possible with kubeai, and if so, is there an example for it? A sketch of my old setup follows below.
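For concreteness, the old setup looked roughly like this (a sketch; the deployment name, image tag, and PVC name are placeholders, and the PVC was an NFS-backed RWX claim):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm   # placeholder
spec:
  replicas: 1
  selector:
    matchLabels: { app: vllm }
  template:
    metadata:
      labels: { app: vllm }
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest   # placeholder tag
          env:
            # Both caches live on the shared volume, so every
            # replica reuses the same downloaded model files.
            - name: HF_HOME
              value: /container-home/huggingface
            - name: VLLM_CONFIG_ROOT
              value: /container-home/config
          volumeMounts:
            - name: container-home
              mountPath: /container-home
      volumes:
        - name: container-home
          persistentVolumeClaim:
            claimName: vllm-home   # placeholder: NFS-backed RWX PVC
```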