Running with local ollama example
tijszwinkels committed Jul 10, 2024
1 parent 0f0b5b0 commit b7d3887
Showing 1 changed file with 18 additions and 1 deletion.
19 changes: 18 additions & 1 deletion README.md
@@ -141,7 +141,24 @@ export OPENAI_BASE_URL="https://your-api-provider.com/v1"
export OPENAI_API_KEY="your-api-key-here"
```

This way, any 3rd party API can be used, including local models.
This way, any 3rd party API can be used, such as OpenRouter, Groq, or local models.
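
For instance, a minimal sketch of pointing the bot at a hosted provider such as OpenRouter (this assumes OpenRouter's OpenAI-compatible endpoint at `https://openrouter.ai/api/v1`; substitute your own key):

```
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_API_KEY=your-openrouter-key-here
```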

### Ollama

For example, to run the bot using Ollama:

1. Set up the environment:

```
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama
```

2. Run the bot command:

```
python bot.py --model llama3 --reference-models llama3 --reference-models mistral
```
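
Note: this assumes an Ollama server is running locally (by default on port 11434) and that the referenced models have already been pulled, for example:

```
ollama pull llama3
ollama pull mistral
```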

## Evaluation

