𝑓 Function Calling Demo Application

A demo function-calling app for the YouTube video.

Watch the video 👇

🔨 Setting up locally

Create virtualenv and install dependencies.

This step is not required if you are running in Docker.

make setup
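Roughly the manual equivalent of `make setup` (an assumption — check the Makefile for the exact commands and the requirements file name):

```shell
# Create a virtualenv and install dependencies (assumed steps;
# the Makefile is the source of truth).
python3 -m venv .venv
. .venv/bin/activate
[ -f requirements.txt ] && pip install -r requirements.txt
```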

⚡️ Running the application

Make sure you have Ollama installed and running on your machine.

By default, the app uses the mistral-nemo model, but you can switch to Llama 3.1 or Llama 3.2.

Download these models before running the application. Update app.py to change the model if necessary.
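The core function-calling loop works by giving the model a JSON schema of available tools and routing the tool calls it emits back to real Python functions. A minimal sketch of that pattern (the tool name, schema, and dispatcher here are illustrative, not taken from app.py):

```python
import json

# Hypothetical tool -- the real tools live in app.py.
def get_current_weather(city: str) -> str:
    """Return a (fake) weather report as a JSON string."""
    return json.dumps({"city": city, "temp_c": 21})

# Tool schema the model sees; the LLM picks a tool and fills in arguments.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

AVAILABLE = {"get_current_weather": get_current_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching Python function."""
    fn = AVAILABLE[tool_call["function"]["name"]]
    args = tool_call["function"]["arguments"]
    return fn(**args)
```

With the `ollama` Python client you would pass `tools=TOOLS` to `ollama.chat(...)`, run each returned tool call through `dispatch`, and feed the result back to the model — the actual wiring in app.py may differ.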

Running locally

make run

Running in a container

make run-docker
⚠️ Does not work with Linux 🐧

The application running inside the container uses the special DNS name host.docker.internal to reach Ollama running on the host machine.

However, this DNS name is not resolvable on Linux by default.
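On Linux you can map the name yourself with Docker's `--add-host` flag and the `host-gateway` value (a workaround sketch, not part of the repo's Makefile; the image name is a placeholder):

```shell
# Make host.docker.internal resolve to the host on Linux
# (requires Docker 20.10+; <image> is a placeholder).
docker run --rm -it --add-host=host.docker.internal:host-gateway <image>
```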

✨ Linters and Formatters

Check for linting rule violations:

make check

Auto-fix linting violations:

make fix

🤸‍♀️ Getting Help

make

# OR

make help