@highdealist

Interactive Ollama batch tester that streams responses—including "thinking" blocks when supported—across either a custom model list or all locally available models. Features live output per model, automatic fallback to all local models if the target models are missing, and optional multi-response synthesis (commented out). Runs in a REPL loop until the user exits with "X". Designed for rapid comparative evaluation of local LLMs.
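The core loop described above can be sketched roughly as follows, assuming the official `ollama` Python client. The `pick_models` and `stream_model` helper names, the target model list, and the prompt wording are illustrative, not taken from the actual script:

```python
def pick_models(requested, available):
    """Automatic fallback: keep the requested models that are present locally;
    if none of them are installed, fall back to every available model."""
    present = [m for m in requested if m in available]
    return present or list(available)


def stream_model(model, prompt):
    """Stream one model's reply chunk by chunk, printing as it arrives."""
    import ollama  # imported here so the pure helper above has no dependency

    messages = [{"role": "user", "content": prompt}]
    for chunk in ollama.chat(model=model, messages=messages, stream=True):
        # Chunks expose the partial reply under message.content; models that
        # support "thinking" may also stream a separate thinking field.
        print(chunk["message"]["content"], end="", flush=True)
    print()


if __name__ == "__main__":
    import ollama

    requested = ["llama3.2", "qwen3"]  # hypothetical target list
    available = [m["model"] for m in ollama.list()["models"]]
    models = pick_models(requested, available)

    # REPL loop: compare every model on each prompt until the user types "X".
    while True:
        prompt = input("\nPrompt (X to exit): ").strip()
        if prompt.upper() == "X":
            break
        for model in models:
            print(f"\n=== {model} ===")
            stream_model(model, prompt)
```

Keeping the fallback logic in a small pure function makes it easy to test without a running Ollama server; the network-touching code is confined to `stream_model` and the `__main__` block.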
