Replies: 1 comment
-
Hello @praj-cs! Do you mean fine-tuning with ExtractThinker? That's not possible yet :) But using fine-tuned models that you already have, yes! Are you using Ollama? https://enoch3712.github.io/ExtractThinker/examples/local-processing/ The LLM component uses LiteLLM under the hood, so if you are using something else, make sure you check the docs.
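Since the model string is handed to LiteLLM, a fine-tuned vision model served locally through Ollama can be plugged in by name. A minimal sketch, assuming `extract_thinker` is installed, Ollama is running on its default port, and the fine-tuned model is registered under the hypothetical name `my-finetuned-llava`; the contract fields and file path are illustrative:

```python
# Sketch: structured extraction with a fine-tuned vision model served by Ollama.
# Assumptions: extract_thinker is installed, Ollama is running locally, and a
# fine-tuned model exists under the hypothetical name "my-finetuned-llava".
from extract_thinker import Extractor, Contract


class InvoiceContract(Contract):
    # Illustrative fields; define whatever your documents contain.
    invoice_number: str
    total_amount: float


extractor = Extractor()
# LiteLLM routes "ollama/<model>" requests to the local Ollama server,
# so any fine-tuned model you have pulled or created there can be used.
extractor.load_llm("ollama/my-finetuned-llava")

result = extractor.extract("invoice.png", InvoiceContract)
print(result.invoice_number, result.total_amount)
```

The same pattern applies to other providers: swap the model string for whatever LiteLLM expects for your backend (see the LiteLLM provider docs linked above).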
-
Hi, how can I use a fine-tuned LLM vision model for the extraction?