A powerful and simple REST API service built with FastAPI that provides conversational AI capabilities using OpenAI's GPT-4o-mini model through LangGraph. This service enables seamless integration of AI chat functionality into web applications, mobile apps, and other services.
- 🤖 Intelligent Chat: Powered by OpenAI's GPT-4o-mini for high-quality conversations
- 💬 Session Management: Maintain conversation context across multiple requests
- 🏥 Health Monitoring: Built-in health check endpoints for monitoring
- 🛡️ Robust Error Handling: Comprehensive error handling with standardized responses
- 📚 Auto Documentation: FastAPI automatic OpenAPI documentation at `/docs`
- 🔧 Environment Configuration: Fully configurable via environment variables
- ⚡ High Performance: Built with FastAPI for optimal performance
- 🧹 Auto Cleanup: Automatic cleanup of expired sessions
- 🔒 Secure: Environment-based API key management
- Python 3.8 or higher
- OpenAI API key
- pip package manager
# If you have the project files, navigate to the project directory
cd chatbot_api
pip install -r requirements.txt
The project includes a `.env` file with your OpenAI API key. You can modify other settings:
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o-mini
OPENAI_TEMPERATURE=0
SESSION_TIMEOUT_HOURS=1
HOST=0.0.0.0
PORT=8000
DEBUG=false
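As a sketch of how these variables might be consumed at startup (illustrative only — the actual service may load its configuration differently, e.g. via pydantic-settings), a stdlib-only loader could look like:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Server settings mirroring the .env variables documented above."""
    openai_api_key: str
    openai_model: str = "gpt-4o-mini"
    openai_temperature: float = 0.0
    session_timeout_hours: int = 1
    host: str = "0.0.0.0"
    port: int = 8000
    debug: bool = False


def load_settings() -> Settings:
    """Read settings from the environment, applying the documented defaults."""
    return Settings(
        openai_api_key=os.environ["OPENAI_API_KEY"],  # required; raises KeyError if missing
        openai_model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        openai_temperature=float(os.getenv("OPENAI_TEMPERATURE", "0")),
        session_timeout_hours=int(os.getenv("SESSION_TIMEOUT_HOURS", "1")),
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", "8000")),
        debug=os.getenv("DEBUG", "false").lower() == "true",
    )
```

Numeric values arrive as strings from the environment, so they are converted explicitly; `DEBUG` accepts any casing of `true`.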
Option 1: Using the main script (Recommended)
python main.py
Option 2: Using uvicorn directly
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
Once the server starts, you should see:
🚀 Starting LangGraph Chatbot API Service...
- Host: 0.0.0.0
- Port: 8000
- Debug: False
- Docs: http://0.0.0.0:8000/docs
✅ Chatbot API Service initialized successfully
- Model: gpt-4o-mini
- Temperature: 0.0
- Session timeout: 1 hours
- API Base URL: http://localhost:8000
- Interactive Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/api/health
- API Info: http://localhost:8000/
http://localhost:8000
No authentication required for basic usage. API key is configured server-side.
POST /api/chat
Send a message to the AI chatbot and receive an intelligent response.
{
"message": "Hello, how are you?",
"session_id": "optional-session-id"
}
- `message` (string, required): User message to send to the chatbot (1-4000 characters)
- `session_id` (string, optional): Session ID to continue an existing conversation
{
"response": "I'm doing well, thank you! How can I help you today?",
"session_id": "generated-or-provided-session-id",
"timestamp": "2024-01-15T10:30:00Z"
}
curl -X POST http://localhost:8000/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "Explain quantum computing in simple terms"}'
POST /api/sessions
Create a new chat session for maintaining conversation context.
{}
{
"session_id": "unique-session-identifier",
"created_at": "2024-01-15T10:30:00Z"
}
curl -X POST http://localhost:8000/api/sessions
GET /api/sessions/{session_id}
Retrieve the complete conversation history for a specific session.
- `session_id` (string, required): The session identifier
{
"session_id": "session-identifier",
"messages": [
{
"role": "user",
"content": "Hello",
"timestamp": "2024-01-15T10:30:00Z"
},
{
"role": "assistant",
"content": "Hi! How can I help you?",
"timestamp": "2024-01-15T10:30:01Z"
}
],
"created_at": "2024-01-15T10:30:00Z"
}
curl http://localhost:8000/api/sessions/your-session-id-here
GET /api/health
Check the health and status of the API service.
{
"status": "healthy",
"timestamp": "2024-01-15T10:30:00Z",
"version": "1.0.0"
}
curl http://localhost:8000/api/health
GET /
Get basic information about the API service.
{
"message": "LangGraph Chatbot API Service",
"version": "1.0.0",
"docs": "/docs",
"health": "/api/health",
"timestamp": "2024-01-15T10:30:00Z"
}
| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | - | Your OpenAI API key (required) |
| `OPENAI_MODEL` | `gpt-4o-mini` | OpenAI model to use |
| `OPENAI_TEMPERATURE` | `0` | Model temperature (0-2) |
| `SESSION_TIMEOUT_HOURS` | `1` | Session timeout in hours |
| `HOST` | `0.0.0.0` | Server host address |
| `PORT` | `8000` | Server port |
| `DEBUG` | `false` | Enable debug mode |
- Storage: In-memory storage for simplicity
- Timeout: Configurable session timeout (default: 1 hour)
- Cleanup: Automatic cleanup every 30 minutes
- Concurrency: Thread-safe for multiple concurrent users
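A minimal illustration of this pattern — a lock-guarded dict with lazy expiry and a periodic `cleanup()` pass. This is a sketch of the behavior described above, not the service's actual `session_manager.py`:

```python
import threading
import time
import uuid
from typing import Optional


class SessionStore:
    """Thread-safe in-memory session store with timeout-based expiry (illustrative)."""

    def __init__(self, timeout_seconds: float = 3600.0):
        self._lock = threading.Lock()
        self._sessions = {}  # session_id -> {"messages": [...], "last_seen": float}
        self._timeout = timeout_seconds

    def create(self) -> str:
        """Register a new session and return its identifier."""
        session_id = str(uuid.uuid4())
        with self._lock:
            self._sessions[session_id] = {"messages": [], "last_seen": time.monotonic()}
        return session_id

    def get(self, session_id: str) -> Optional[dict]:
        """Return the session, refreshing its timestamp; None if missing or expired."""
        with self._lock:
            session = self._sessions.get(session_id)
            if session is None:
                return None
            if time.monotonic() - session["last_seen"] > self._timeout:
                del self._sessions[session_id]  # lazily drop an expired session
                return None
            session["last_seen"] = time.monotonic()  # touch on access
            return session

    def cleanup(self) -> int:
        """Remove all expired sessions; return how many were dropped."""
        now = time.monotonic()
        with self._lock:
            expired = [sid for sid, s in self._sessions.items()
                       if now - s["last_seen"] > self._timeout]
            for sid in expired:
                del self._sessions[sid]
            return len(expired)
```

In the real service, `cleanup()` would be invoked from a background task on the 30-minute schedule noted above.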
import requests
import json
base_url = "http://localhost:8000"
# Create a new session
session_response = requests.post(f"{base_url}/api/sessions")
session_id = session_response.json()["session_id"]
print(f"Created session: {session_id}")
# Start a conversation
chat_response = requests.post(f"{base_url}/api/chat", json={
"message": "What is machine learning?",
"session_id": session_id
})
response_data = chat_response.json()
print(f"Bot: {response_data['response']}")
# Continue the conversation
follow_up = requests.post(f"{base_url}/api/chat", json={
"message": "Can you give me a simple example?",
"session_id": session_id
})
print(f"Bot: {follow_up.json()['response']}")
# Get conversation history
history = requests.get(f"{base_url}/api/sessions/{session_id}")
messages = history.json()["messages"]
print(f"Conversation has {len(messages)} messages")
const axios = require('axios');
const baseURL = 'http://localhost:8000';
async function chatWithBot() {
try {
// Create session
const sessionResponse = await axios.post(`${baseURL}/api/sessions`);
const sessionId = sessionResponse.data.session_id;
console.log(`Created session: ${sessionId}`);
// Send message
const chatResponse = await axios.post(`${baseURL}/api/chat`, {
message: "Tell me a joke about programming",
session_id: sessionId
});
console.log(`Bot: ${chatResponse.data.response}`);
// Get session history
const historyResponse = await axios.get(`${baseURL}/api/sessions/${sessionId}`);
console.log(`Messages in session: ${historyResponse.data.messages.length}`);
} catch (error) {
console.error('Error:', error.response?.data || error.message);
}
}
chatWithBot();
# Health check
curl http://localhost:8000/api/health
# Create session
SESSION_ID=$(curl -s -X POST http://localhost:8000/api/sessions | jq -r '.session_id')
echo "Session ID: $SESSION_ID"
# Send message
curl -X POST http://localhost:8000/api/chat \
-H "Content-Type: application/json" \
-d "{\"message\": \"Explain APIs in simple terms\", \"session_id\": \"$SESSION_ID\"}"
# Get session history
curl http://localhost:8000/api/sessions/$SESSION_ID
All API errors follow a standardized format for consistent error handling:
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": "Additional error context (optional)"
},
"timestamp": "2024-01-15T10:30:00Z"
}
| Status Code | Error Code | Description |
|---|---|---|
| 400 | `HTTP_400` | Bad Request - Invalid input format or validation error |
| 404 | `HTTP_404` | Not Found - Session not found or expired |
| 500 | `HTTP_500` | Internal Server Error - Server or OpenAI API issues |
| 503 | `HTTP_503` | Service Unavailable - Service health check failed |
{
"error": {
"code": "HTTP_404",
"message": "Session abc123 not found or expired",
"details": null
},
"timestamp": "2024-01-15T10:30:00Z"
}
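Because the envelope is standardized, client code can branch on it uniformly. A hypothetical helper (names are illustrative, not part of the service) that extracts a readable message and flags the retryable server-side codes might look like:

```python
# Transient server-side failures that a client may reasonably retry.
RETRYABLE_CODES = {"HTTP_500", "HTTP_503"}


def classify_error(payload):
    """Turn a standardized error payload into (readable text, is_retryable)."""
    error = payload.get("error", {})
    code = error.get("code", "UNKNOWN")
    message = error.get("message", "no message provided")
    details = error.get("details")
    text = "[%s] %s" % (code, message)
    if details:
        text += " (%s)" % details
    return text, code in RETRYABLE_CODES
```

For example, a 503 response would classify as retryable, while a 404 (expired session) should instead prompt the client to create a new session.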
```
chatbot_api/
├── main.py             # FastAPI application and server configuration
├── endpoints.py        # API route handlers and business logic
├── models.py           # Pydantic models for request/response validation
├── session_manager.py  # In-memory session storage and management
├── chatbot_service.py  # LangGraph chatbot wrapper service
├── requirements.txt    # Python dependencies
├── .env                # Environment variables
├── README.md           # This documentation
└── chatbot.ipynb       # Original notebook implementation
```
- `main.py`: FastAPI application setup, lifecycle management, and global exception handling
- `endpoints.py`: All API endpoint definitions with request/response handling
- `models.py`: Pydantic models for data validation and serialization
- `session_manager.py`: Thread-safe in-memory session storage with automatic cleanup
- `chatbot_service.py`: LangGraph integration and OpenAI API wrapper
# Enable debug mode for auto-reload and detailed error messages
DEBUG=true python main.py
- Interactive Documentation: Visit http://localhost:8000/docs
- Manual Testing: Use curl, Postman, or any HTTP client
- Health Check: Verify service status at http://localhost:8000/api/health
- Define new Pydantic models in `models.py`
- Add endpoint logic to `endpoints.py`
- Update the router in `main.py` if needed
- Test using the interactive documentation
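As an illustration of the first step, here is a hypothetical request model for an invented `/api/summarize` endpoint, written as a plain dataclass stand-in so it runs without dependencies; in the real project it would subclass `pydantic.BaseModel` in `models.py` (the endpoint name and fields are made up for the example):

```python
from dataclasses import dataclass


@dataclass
class SummarizeRequest:
    """Stand-in for a hypothetical Pydantic request model (illustrative only)."""
    text: str
    max_sentences: int = 3

    def __post_init__(self):
        # Mirror the 1-4000 character rule the chat endpoint documents.
        if not 1 <= len(self.text) <= 4000:
            raise ValueError("text must be 1-4000 characters")
        if self.max_sentences < 1:
            raise ValueError("max_sentences must be at least 1")
```

With Pydantic, the same constraints would be declared via `Field(min_length=1, max_length=4000)` and `Field(ge=1)`, and FastAPI would return the 400-style validation errors automatically.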
- Environment Variables: Use proper environment variable management
- Process Management: Use process managers like PM2, systemd, or Docker
- Reverse Proxy: Use nginx or Apache as a reverse proxy
- HTTPS: Enable SSL/TLS encryption
- Monitoring: Implement logging and monitoring solutions
- Session Storage: Consider Redis or database for persistent session storage
- Authentication: Implement API key authentication for production use
- Rate Limiting: Add rate limiting to prevent abuse
- Logging: Implement comprehensive logging
- Error Monitoring: Use services like Sentry for error tracking
- Load Balancing: Use load balancers for high availability
- CORS: Configure CORS settings appropriately
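For the rate-limiting item, a per-client token bucket is one common approach. The sketch below is illustrative only; production deployments more often reach for middleware such as slowapi or a shared Redis counter so limits survive restarts and apply across workers:

```python
import time


class TokenBucket:
    """Per-client token bucket: `capacity` burst, refilled at `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A server would keep one bucket per API key or client IP and return `429 Too Many Requests` when `allow()` is false.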
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "main.py"]
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is open source and available under the MIT License.
- OpenAI API Key Error: Ensure your API key is correctly set in the `.env` file
- Port Already in Use: Change the PORT in `.env` or kill the process using the port
- Module Not Found: Run `pip install -r requirements.txt` to install dependencies
- Session Not Found: Sessions expire after 1 hour by default
- JSON Serialization Error: Ensure all datetime objects are properly serialized
- Check the interactive documentation at `/docs`
- Review the health check endpoint at `/api/health`
- Enable debug mode for detailed error messages
- Check the server logs for error details
Happy Coding! 🎉
Built with ❤️ using FastAPI, LangGraph, and OpenAI GPT-4o-mini.