Scale your AI applications with orchestrated autonomous agents
PilottAI is a Python framework for building autonomous multi-agent systems with advanced orchestration capabilities, offering enterprise-ready features for scalable AI applications.
🤖 Hierarchical Agent System
- Manager and worker agent hierarchies
- Intelligent task routing
- Context-aware processing

🚀 Production Ready
- Asynchronous processing
- Dynamic scaling
- Load balancing
- Fault tolerance
- Comprehensive logging

🧠 Advanced Memory
- Semantic storage
- Task history tracking
- Context preservation
- Knowledge retrieval

🔌 Integrations
- Multiple LLM providers (OpenAI, Anthropic, Google)
- Document processing
- WebSocket support
- Custom tool integration
pip install pilott
from pilott import Serve
from pilott.core import AgentConfig, AgentRole, LLMConfig
# Configure LLM
llm_config = LLMConfig(
    model_name="gpt-4",
    provider="openai",
    api_key="your-api-key"
)

# Setup agent configuration
config = AgentConfig(
    role="processor",
    role_type=AgentRole.WORKER,
    goal="Process documents efficiently",
    description="Document processing worker",
    max_queue_size=100
)
async def main():
    # Initialize system
    pilott = Serve(name="DocumentProcessor")
    try:
        # Start system
        await pilott.start()

        # Add agent
        agent = await pilott.add_agent(
            agent_type="processor",
            config=config,
            llm_config=llm_config
        )

        # Process document
        result = await pilott.execute_task({
            "type": "process_document",
            "file_path": "document.pdf"
        })
        print(f"Processing result: {result}")
    finally:
        await pilott.stop()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
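Because execute_task is awaitable, several documents can be processed concurrently with standard asyncio tooling. The snippet below is a minimal sketch that builds on the quick start above; the helper name and file paths are placeholders.

import asyncio

async def process_many(pilott):
    # Fan out one task per document (paths are placeholders)
    tasks = [
        pilott.execute_task({"type": "process_document", "file_path": path})
        for path in ["report.pdf", "invoice.pdf", "notes.pdf"]
    ]
    # Results are returned in the same order as the task list
    return await asyncio.gather(*tasks)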
Visit our documentation for:
- Detailed guides
- API reference
- Examples
- Best practices
📄 Document Processing
# Process PDF documents
result = await pilott.execute_task({
    "type": "process_pdf",
    "file_path": "document.pdf"
})

🤖 AI Agents
# Create specialized agents
researcher = await pilott.add_agent(
    agent_type="researcher",
    config=researcher_config
)

🔄 Task Orchestration
# Orchestrate complex workflows
task_result = await manager_agent.execute_task({
    "type": "complex_workflow",
    "steps": ["extract", "analyze", "summarize"]
})
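The manager_agent used above can be created with the same add_agent call as the worker in the quick start. The sketch below is illustrative only: the "manager" role string, agent_type value, and the AgentRole member to use for a manager are assumptions, so check the AgentConfig/AgentRole reference for the exact values.

# Hypothetical manager setup (role values are assumptions, not confirmed API)
manager_config = AgentConfig(
    role="manager",
    role_type=AgentRole.WORKER,  # substitute the manager/orchestrator member defined by AgentRole
    goal="Route document tasks to worker agents",
    description="Document workflow manager"
)

manager_agent = await pilott.add_agent(
    agent_type="manager",
    config=manager_config,
    llm_config=llm_config
)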
# Store and retrieve context
await agent.enhanced_memory.store_semantic(
    text="Important information",
    metadata={"type": "research"}
)
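store_semantic writes to the agent's semantic store; reading it back for knowledge retrieval would look roughly like the sketch below. The search_semantic name and its parameters are assumptions used for illustration, not a confirmed part of the enhanced_memory API.

# Hypothetical retrieval call (method name and parameters are assumptions)
matches = await agent.enhanced_memory.search_semantic(
    query="Important information",
    metadata_filter={"type": "research"}
)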
# Configure load balancing
config = LoadBalancerConfig(
    check_interval=30,
    overload_threshold=0.8
)
# Configure fault tolerance
config = FaultToleranceConfig(
    recovery_attempts=3,
    heartbeat_timeout=60
)
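The two configs above are shown in isolation; how they attach to the serving layer is not spelled out here, so the keyword argument names in the sketch below are assumptions for illustration only.

# Hypothetical wiring of both configs into Serve (keyword names are assumptions)
lb_config = LoadBalancerConfig(check_interval=30, overload_threshold=0.8)
ft_config = FaultToleranceConfig(recovery_attempts=3, heartbeat_timeout=60)

pilott = Serve(
    name="DocumentProcessor",
    load_balancer_config=lb_config,    # assumed keyword; see the Serve reference
    fault_tolerance_config=ft_config   # assumed keyword; see the Serve reference
)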
pilott/
├── core/            # Core framework components
├── agents/          # Agent implementations
├── memory/          # Memory management
├── orchestration/   # System orchestration
├── tools/           # Tool integrations
└── utils/           # Utility functions
We welcome contributions! See our Contributing Guide for details on:
- Development setup
- Coding standards
- Pull request process
- 📚 Documentation
- 💬 Discord
- 🐛 GitHub Issues
- 📧 Email Support
PilottAI is MIT licensed. See LICENSE for details.