Llama CPP as encoder #426
fabiomatricardi started this conversation in Ideas
Hi,
I really like the repo!
My idea/suggestion is to also support GGUF versions of the encoder/embedding models. llama-cpp-python already ships full classes for embedding models such as all-MiniLM-L6-v2...
Is this feature already available?
It would be great not to need ~2 GB of PyTorch dependencies in a venv every time. Something along these lines is what I have in mind:
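A minimal sketch of the usage I'm thinking of, assuming a local GGUF conversion of all-MiniLM-L6-v2 exists (the file name and path below are just examples, not an official artifact):

```python
from llama_cpp import Llama

# Load a GGUF embedding model; embedding=True enables the embedding endpoint.
encoder = Llama(
    model_path="./all-MiniLM-L6-v2.Q8_0.gguf",  # hypothetical local path
    embedding=True,
    verbose=False,
)

# embed() accepts a string or a list of strings and returns the embedding vector(s).
vectors = encoder.embed(["Hello world", "GGUF embeddings without PyTorch"])
print(len(vectors), len(vectors[0]))
```

This would keep the whole pipeline on llama.cpp, with no torch/transformers install needed just for the encoder.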
thanks