Demo function-calling app for the YouTube video.
Watch the video 👇
Create a virtualenv and install the dependencies.
This step is not required if you are running in Docker.
make setup
Make sure you have Ollama installed and running on your machine.
By default, the app uses the mistral-nemo model, but you can use Llama 3.1 or Llama 3.2 instead.
Download these models before running the application, and update app.py to change the model if necessary.
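As a rough sketch of how function calling works with the Ollama Python client: you describe your tools as JSON schemas, pass them with the chat request, and dispatch any tool calls the model returns. The `get_current_weather` tool, its schema, and the dispatch helper below are hypothetical illustrations, not code from this repo.

```python
import json

# Hypothetical tool the model may decide to call.
def get_current_weather(city: str) -> str:
    return json.dumps({"city": city, "temperature_c": 21})

# JSON schema describing the tool, in the shape the Ollama chat API expects.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# With Ollama running locally, the request would look roughly like:
#   import ollama
#   response = ollama.chat(
#       model="mistral-nemo",
#       messages=[{"role": "user", "content": "Weather in Oslo?"}],
#       tools=tools,
#   )
# The app then executes whichever tool the model asked for:
def dispatch(tool_call):
    available = {"get_current_weather": get_current_weather}
    fn = available[tool_call["function"]["name"]]
    return fn(**tool_call["function"]["arguments"])

print(dispatch({"function": {"name": "get_current_weather",
                             "arguments": {"city": "Oslo"}}}))
```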
make run
make run-docker
⚠️ Does not work on Linux 🐧
The application running inside the container uses the special DNS name host.docker.internal
to communicate with Ollama running on the host machine.
However, this DNS name is not resolvable on Linux.
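A common workaround on Linux (assuming Docker 20.10 or newer) is to map the name to the host gateway yourself when starting the container; the image name below is a hypothetical placeholder, not the one this repo builds:

```shell
# Make host.docker.internal resolve to the host's gateway inside the container.
docker run --add-host=host.docker.internal:host-gateway demo-app
```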
Check for linting rule violations:
make check
Auto-fix linting violations:
make fix
make
# OR
make help