
Multiple Model Support #4

Open
notasquid1938 opened this issue Sep 22, 2024 · 1 comment

Comments

@notasquid1938

This is a great set of scripts! I was wondering if there is a way to modify config.toml to list multiple models for benchmarking, so the script doesn't have to be rerun for every new model.

@kth8

kth8 commented Oct 10, 2024

My solution so far has been to use a while loop, for example:

curl -s https://ollama.com/library/llama3.2 | awk -F'["/]' '/700/ && $4 ~ /:/ && $4 ~ /q/ && $4 !~ /base|text|fp16|q4_0|q4_1|q5_0|q5_1/ { print $4 }' > models_list.txt
while read -r line; do ollama pull "$line"; pipenv run python run_openai.py --model "$line"; done < models_list.txt
