This project demonstrates function-calling with Python and Ollama, utilizing the Africa's Talking API to send airtime and messages to phone numbers using natural language prompts. Ollama + LLM w/ functions + Natural language = User Interface for non-coders.


Exploring function calling 🗣️ 🤖 with Python and ollama 🦙

Function-calling with Python and ollama. We are going to use the Africa's Talking API to send airtime and messages to a phone number using natural language, thus creating a generative AI agent. Here are examples of prompts you can use to send airtime to a phone number:

  • Send airtime to xxxxxxxxx2046 and xxxxxxxxx3524 with an amount of 10 in currency KES
  • Send a message to xxxxxxxxx2046 and xxxxxxxxx3524 with a message "Hello, how are you?", using the username "username".

NB: The phone numbers are placeholders for actual phone numbers. You need some VRAM to run this project; you can rent a GPU, for example on runpod.io (see the section below). We recommend 400MB-8GB of VRAM for this project. It can also run on CPU, although smaller models are recommended in that case.

Mistral 7B, Llama 3.2 3B/1B, Qwen 2.5 0.5B/1.5B, Nemotron-Mini 4B, and Llama 3.1 8B are the recommended models for this project.

Ensure Ollama is installed on your laptop/server and running before running this project. You can install Ollama from https://ollama.com.
Learn more about tool calling at the Berkeley Function-Calling Leaderboard: https://gorilla.cs.berkeley.edu/leaderboard.html
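
To make the flow concrete, here is a minimal sketch of how a prompt like the ones above becomes a structured function call, assuming a recent version of the ollama Python package. The send_airtime helper and its parameters here are illustrative placeholders; the project's real tool definitions live in utils/function_call.py.

import ollama

def send_airtime(phone_number: str, amount: str, currency_code: str) -> str:
    # Illustrative stand-in: the real tool wraps the Africa's Talking API.
    return f"Would send {amount} {currency_code} to {phone_number}"

# OpenAI-style tool schema the model uses to decide when and how to call the function.
tools = [
    {
        "type": "function",
        "function": {
            "name": "send_airtime",
            "description": "Send airtime to a phone number",
            "parameters": {
                "type": "object",
                "properties": {
                    "phone_number": {"type": "string"},
                    "amount": {"type": "string"},
                    "currency_code": {"type": "string"},
                },
                "required": ["phone_number", "amount", "currency_code"],
            },
        },
    }
]

response = ollama.chat(
    model="qwen2.5:0.5b",
    messages=[
        {
            "role": "user",
            "content": "Send airtime to xxxxxxxxx2046 with an amount of 10 in currency KES",
        }
    ],
    tools=tools,
)

# Instead of free text, the model returns structured tool calls we can dispatch.
for call in response.message.tool_calls or []:
    if call.function.name == "send_airtime":
        print(send_airtime(**call.function.arguments))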

File structure

.
├── Dockerfile - template to run the project in one shot.
├── docker-compose.yml - use the codecarbon project and gradio dashboard.
├── app.py - the function_call.py using Gradio as the user interface.
├── Makefile - This file contains the commands to run the project.
├── README.md - This file contains the project documentation. This is the file you are currently reading.
├── requirements.txt - This file contains the dependencies for the project.
├── summary.png - How function calling works with a diagram.
└── utils - This directory contains the utility files for the project.
    ├── __init__.py - This file initializes the utils directory as a package.
    ├── function_call.py - This file contains the code to call a function using LLMs.
    └── communication_apis.py - This file contains the code to do with communication APIs & experiments.

Installation

The project uses Python 3.12. To install the project, follow the steps below:

  • Clone the repository
git clone https://github.com/Shuyib/tool_calling_api.git
  • Change directory to the project directory
cd tool_calling_api
  • Create a virtual environment
python3 -m venv venv
  • Activate the virtual environment
source venv/bin/activate
  • Confirm that the Makefile steps work as expected (dry run)
make -n
  • Install the dependencies
make install
  • Run the project
make run

Long way to run the project

  • Change directory to the utils directory
cd utils
  • Run the function_call.py file
python function_call.py
  • Or run the Gradio UI instead
python ../app.py

Run in Docker

To run the project in Docker, follow the steps below:

NB: You'll need to have deployed Ollama elsewhere, for example on runpod.io or another server. Edit the app.py file to point to that Ollama server. You can use the OpenAI SDK to interact with the Ollama server, as in the sketch below.
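
For reference, here is a minimal sketch of pointing the OpenAI SDK at a remote Ollama server, assuming Ollama's OpenAI-compatible /v1 endpoint; the host below is a placeholder for wherever you deployed it.

from openai import OpenAI

client = OpenAI(
    base_url="http://your-ollama-host:11434/v1",  # placeholder: your deployed Ollama server
    api_key="ollama",  # required by the SDK, ignored by Ollama
)

response = client.chat.completions.create(
    model="qwen2.5:0.5b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)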

  • Linting dockerfile
make docker_run_test
  • Build the Docker image
make docker_build
  • Run the Docker image
make docker_run

Run in runpod.io

Make an account if you haven't already. Once that's settled, follow these steps:

  • Click on Deploy under Pods.
  • Select the cheapest pod option to deploy, for example an RTX 2000 Ada.
  • This will create a jupyter lab instance.
  • Follow the Installation steps above in the terminal provided, up to and including make install.
  • Run this command to install Ollama, serve it, and redirect its output to a log file:
curl -fsSL https://ollama.com/install.sh | sh && ollama serve > ollama.log 2>&1 &
  • Install your preferred model in the same terminal.
ollama run qwen2.5:0.5b
  • Export your credentials
export AT_API_KEY=yourapikey
  • Continue running the installation steps in the terminal.
  • Send your first message and airtime with an LLM. 🌠 (A quick check that the model responds is sketched below.)
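
Before sending anything, you can confirm the pod's Ollama server and the pulled model respond. A minimal sketch, assuming the ollama Python package was installed by make install:

import os
import ollama

# AT_API_KEY was exported in the previous step; fail early if it is missing.
assert os.environ.get("AT_API_KEY"), "export AT_API_KEY before sending airtime"

# Quick check that the model pulled above responds on this pod.
reply = ollama.chat(
    model="qwen2.5:0.5b",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
)
print(reply.message.content)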

Read more about setting up Ollama and serverless options: https://blog.runpod.io/run-llama-3-1-405b-with-ollama-a-step-by-step-guide/ & https://blog.runpod.io/run-llama-3-1-with-vllm-on-runpod-serverless/

Usage

This project uses LLMs together with the Africa's Talking API to send airtime and messages to phone numbers using natural language. Here are examples of prompts you can use (a sketch of the underlying tools follows the examples):

  • Send airtime to xxxxxxxxxx046 and xxxxxxxxxx524 with an amount of 10 in currency KES.
  • Send a message to xxxxxxxxxx046 and xxxxxxxxxx524 with a message "Hello, how are you?", using the username "username".
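
For orientation, here is a minimal sketch of the airtime and messaging tools behind those prompts, assuming the africastalking Python SDK; the function names and signatures here are illustrative, and the project's real implementations live in utils/communication_apis.py and utils/function_call.py.

import os
import africastalking

# Credentials: the username from the prompt examples and the exported AT_API_KEY.
africastalking.initialize(username="username", api_key=os.environ["AT_API_KEY"])

def send_airtime(phone_number: str, amount: str, currency_code: str) -> dict:
    # Send `amount` of `currency_code` airtime to `phone_number`.
    airtime = africastalking.Airtime
    return airtime.send(phone_number=phone_number, amount=amount, currency_code=currency_code)

def send_message(phone_number: str, message: str) -> dict:
    # Send an SMS `message` to `phone_number`.
    sms = africastalking.SMS
    return sms.send(message, [phone_number])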

Process Summary

See summary.png for a diagram of how the function-calling flow works.

Use cases

  • Non-Technical User Interfaces: Simplifies the process for non-coders to interact with APIs, making it easier for them to send airtime and messages without needing to understand the underlying code.
  • Customer Support Automation: Enables customer support teams to quickly send airtime or messages to clients using natural language commands, improving efficiency and response times.
  • Marketing Campaigns: Facilitates the automation of promotional messages and airtime rewards to customers, enhancing engagement and retention.
  • Emergency Notifications: Allows rapid dissemination of urgent alerts and notifications to a large number of recipients using simple prompts.
  • Educational Tools: Provides a practical example for teaching how to integrate APIs with natural language processing, which can be beneficial for coding bootcamps and workshops.

Contributing

Contributions are welcome. If you would like to contribute to the project, you can fork the repository, create a new branch, make your changes and then create a pull request.

License

License information.
