AgentForce: A Production-Ready Framework for Building AI Agents
Documentation | Examples | Quick Start
AgentForce is a powerful, open-source framework designed for building production-ready AI agents. It simplifies the integration of various LLMs and tools, enabling you to create sophisticated AI applications with minimal setup.
- 🤖 Multiple LLM Support: OpenAI, Cohere Command (R/R+), and LlamaCPP integration
- 🛠️ Tool Integration: Seamlessly add capabilities like web search, weather data, and image analysis
- 👁️ Multi-modal Support: Process both text and images in your AI workflows
- 🚀 Production Ready: Built with scalability and reliability in mind
- 📦 Easy to Extend: Simple API for adding custom tools and LLM providers
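Since the quick-start examples below bind plain Python functions (such as get_current_weather) as tools, a custom tool can be sketched the same way. Note this is an assumption about the tool contract, not a verified part of the AgentForce API; the function name and the binding call are illustrative:

```python
# Hypothetical custom tool: an ordinary Python function with a
# descriptive docstring, mirroring how the built-in tools are bound.
from datetime import datetime, timezone

def get_current_utc_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

# Binding would then mirror the built-in tools (assumed contract):
# llm.bind_tools([get_current_weather, get_current_utc_time])
```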
# Install from PyPI (recommended)
pip install agentforce
# Install latest from GitHub
pip install git+https://github.com/gradsflow/agentforce.git@main
# Development installation
git clone https://github.com/gradsflow/agentforce.git
cd agentforce
pip install -e .
Here's a simple example using AgentForce with weather data:
from agentforce.llms import LlamaCppChatCompletion
from agentforce.tools import get_current_weather
from agentforce.tool_executor import need_tool_use
# Initialize LLM with weather tool
llm = LlamaCppChatCompletion.from_default_llm(n_ctx=0)
llm.bind_tools([get_current_weather])
# Create a simple query
messages = [
    {"role": "user", "content": "How is the weather in London today?"}
]
# Get response and handle tool usage
output = llm.chat_completion(messages)
if need_tool_use(output):
    tool_results = llm.run_tools(output)
    updated_messages = messages + tool_results
    updated_messages.append({
        "role": "user",
        "content": "Summarize the weather information."
    })
    output = llm.chat_completion(updated_messages)
print(output.choices[0].message.content)
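The check-then-rerun pattern above generalizes to a small agent loop that keeps executing tools until the model stops requesting them. A minimal sketch, reusing only the chat_completion / run_tools / need_tool_use calls shown in this example (the max_steps guard is a hypothetical addition, not part of the AgentForce API):

```python
def run_agent(llm, messages, need_tool_use, max_steps=5):
    """Ask the model, execute any requested tools, and feed the
    results back until no further tool call is requested.

    max_steps bounds the loop so a model that keeps requesting
    tools cannot run forever.
    """
    output = llm.chat_completion(messages)
    for _ in range(max_steps):
        if not need_tool_use(output):
            break
        messages = messages + llm.run_tools(output)
        output = llm.chat_completion(messages)
    return output
```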
Create agents that can understand and process both text and images:
from agentforce.llms import LlamaCppChatCompletion
from agentforce.tools import wikipedia_search, google_search, image_inspector
# Initialize LLM with multiple tools
llm = LlamaCppChatCompletion.from_default_llm(n_ctx=0)
llm.bind_tools([google_search, wikipedia_search, image_inspector])
# Process image and generate response
image_url = "https://example.com/image.jpg"
messages = [
    {"role": "system", "content": "You are a helpful assistant that can analyze images."},
    {"role": "user", "content": f"What can you tell me about this image? {image_url}"}
]
output = llm.chat_completion(messages)
tool_results = llm.run_tools(output)
final_output = llm.chat_completion(messages + tool_results)
print(final_output.choices[0].message.content)
For detailed documentation and advanced usage examples, visit our Documentation.
We welcome contributions of all kinds! Whether it's:
- 📝 Improving documentation
- 🐛 Bug fixes
- ✨ New features
- 🔧 Tool integrations
Check out our Contributing Guidelines to get started.
We are committed to fostering an open and welcoming environment. Please read our Code of Conduct.
Built with ❤️ using PyCharm
Special thanks to JetBrains for their support!
This project is licensed under the Apache License.