LangGraph Chatbot API Service

A powerful and simple REST API service built with FastAPI that provides conversational AI capabilities using OpenAI's GPT-4o-mini model through LangGraph. This service enables seamless integration of AI chat functionality into web applications, mobile apps, and other services.

🚀 Features

  • 🤖 Intelligent Chat: Powered by OpenAI's GPT-4o-mini for high-quality conversations
  • 💬 Session Management: Maintain conversation context across multiple requests
  • 🏥 Health Monitoring: Built-in health check endpoints for monitoring
  • 🛡️ Robust Error Handling: Comprehensive error handling with standardized responses
  • 📚 Auto Documentation: FastAPI automatic OpenAPI documentation at /docs
  • 🔧 Environment Configuration: Fully configurable via environment variables
  • ⚡ High Performance: Built with FastAPI for optimal performance
  • 🧹 Auto Cleanup: Automatic cleanup of expired sessions
  • 🔒 Secure: Environment-based API key management

📋 Prerequisites

  • Python 3.8 or higher
  • OpenAI API key
  • pip package manager

πŸ› οΈ Installation & Setup

1. Clone or Download the Project

# If you have the project files, navigate to the project directory
cd chatbot_api

2. Install Dependencies

pip install -r requirements.txt

3. Configure Environment Variables

The project is configured through a .env file in the project root. Set your OpenAI API key there and adjust any of the other settings as needed:

OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o-mini
OPENAI_TEMPERATURE=0
SESSION_TIMEOUT_HOURS=1
HOST=0.0.0.0
PORT=8000
DEBUG=false

4. Run the Server

Option 1: Using the main script (Recommended)

python main.py

Option 2: Using uvicorn directly

uvicorn main:app --host 0.0.0.0 --port 8000 --reload

5. Verify Installation

Once the server starts, you should see:

🚀 Starting LangGraph Chatbot API Service...
   - Host: 0.0.0.0
   - Port: 8000
   - Debug: False
   - Docs: http://0.0.0.0:8000/docs
✅ Chatbot API Service initialized successfully
   - Model: gpt-4o-mini
   - Temperature: 0.0
   - Session timeout: 1 hours

6. Access the API

Open http://localhost:8000/docs in your browser for the interactive API documentation, or call http://localhost:8000/api/health to confirm the service is responding.

📖 API Documentation

Base URL

http://localhost:8000

Authentication

No authentication is required for basic usage; the OpenAI API key is configured server-side.


1. 💬 Chat Endpoint

POST /api/chat

Send a message to the AI chatbot and receive an intelligent response.

Request Body

{
  "message": "Hello, how are you?",
  "session_id": "optional-session-id"
}

Parameters

  • message (string, required): User message to send to the chatbot (1-4000 characters)
  • session_id (string, optional): Session ID to continue an existing conversation

Response

{
  "response": "I'm doing well, thank you! How can I help you today?",
  "session_id": "generated-or-provided-session-id",
  "timestamp": "2024-01-15T10:30:00Z"
}

Example Usage

curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing in simple terms"}'

2. 🆕 Create Session

POST /api/sessions

Create a new chat session for maintaining conversation context.

Request Body

{}

Response

{
  "session_id": "unique-session-identifier",
  "created_at": "2024-01-15T10:30:00Z"
}

Example Usage

curl -X POST http://localhost:8000/api/sessions

3. 📜 Get Session History

GET /api/sessions/{session_id}

Retrieve the complete conversation history for a specific session.

Parameters

  • session_id (string, required): The session identifier

Response

{
  "session_id": "session-identifier",
  "messages": [
    {
      "role": "user",
      "content": "Hello",
      "timestamp": "2024-01-15T10:30:00Z"
    },
    {
      "role": "assistant",
      "content": "Hi! How can I help you?",
      "timestamp": "2024-01-15T10:30:01Z"
    }
  ],
  "created_at": "2024-01-15T10:30:00Z"
}

Example Usage

curl http://localhost:8000/api/sessions/your-session-id-here

4. πŸ₯ Health Check

GET /api/health

Check the health and status of the API service.

Response

{
  "status": "healthy",
  "timestamp": "2024-01-15T10:30:00Z",
  "version": "1.0.0"
}

Example Usage

curl http://localhost:8000/api/health

5. ℹ️ API Information

GET /

Get basic information about the API service.

Response

{
  "message": "LangGraph Chatbot API Service",
  "version": "1.0.0",
  "docs": "/docs",
  "health": "/api/health",
  "timestamp": "2024-01-15T10:30:00Z"
}

🔧 Configuration Options

Environment Variables

Variable                Default       Description
OPENAI_API_KEY          -             Your OpenAI API key (required)
OPENAI_MODEL            gpt-4o-mini   OpenAI model to use
OPENAI_TEMPERATURE      0             Model temperature (0-2)
SESSION_TIMEOUT_HOURS   1             Session timeout in hours
HOST                    0.0.0.0       Server host address
PORT                    8000          Server port
DEBUG                   false         Enable debug mode
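
As a point of reference, these settings could be loaded with python-dotenv roughly as follows. This is a sketch only; the variable names simply mirror the .env keys above and are not the project's actual configuration code.

import os

from dotenv import load_dotenv

# Read the .env file from the project root
load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")                        # required
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0"))
SESSION_TIMEOUT_HOURS = float(os.getenv("SESSION_TIMEOUT_HOURS", "1"))
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8000"))
DEBUG = os.getenv("DEBUG", "false").lower() == "true"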

Session Management

  • Storage: In-memory storage for simplicity
  • Timeout: Configurable session timeout (default: 1 hour)
  • Cleanup: Automatic cleanup every 30 minutes
  • Concurrency: Thread-safe for multiple concurrent users
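
The sketch below shows one way such a store can be built: a dictionary guarded by a lock, an idle timeout, and a cleanup method that a background task can call periodically. It is illustrative only; the class and method names are assumptions, not the contents of session_manager.py.

import threading
import uuid
from datetime import datetime, timedelta, timezone


class InMemorySessionStore:
    """Illustrative thread-safe in-memory session store with idle timeout."""

    def __init__(self, timeout_hours: float = 1.0):
        self._sessions = {}            # session_id -> {"created_at", "last_used", "messages"}
        self._lock = threading.Lock()  # guards access from concurrent requests
        self._timeout = timedelta(hours=timeout_hours)

    def create(self) -> str:
        session_id = str(uuid.uuid4())
        now = datetime.now(timezone.utc)
        with self._lock:
            self._sessions[session_id] = {"created_at": now, "last_used": now, "messages": []}
        return session_id

    def get(self, session_id: str):
        now = datetime.now(timezone.utc)
        with self._lock:
            session = self._sessions.get(session_id)
            if session is None or now - session["last_used"] > self._timeout:
                # Expired sessions are treated the same as missing ones.
                self._sessions.pop(session_id, None)
                return None
            session["last_used"] = now
            return session

    def cleanup_expired(self) -> int:
        """Drop sessions idle longer than the timeout; return how many were removed."""
        cutoff = datetime.now(timezone.utc) - self._timeout
        with self._lock:
            expired = [sid for sid, s in self._sessions.items() if s["last_used"] < cutoff]
            for sid in expired:
                del self._sessions[sid]
        return len(expired)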

💡 Usage Examples

Python Example

import requests

base_url = "http://localhost:8000"

# Create a new session
session_response = requests.post(f"{base_url}/api/sessions")
session_id = session_response.json()["session_id"]
print(f"Created session: {session_id}")

# Start a conversation
chat_response = requests.post(f"{base_url}/api/chat", json={
    "message": "What is machine learning?",
    "session_id": session_id
})

response_data = chat_response.json()
print(f"Bot: {response_data['response']}")

# Continue the conversation
follow_up = requests.post(f"{base_url}/api/chat", json={
    "message": "Can you give me a simple example?",
    "session_id": session_id
})

print(f"Bot: {follow_up.json()['response']}")

# Get conversation history
history = requests.get(f"{base_url}/api/sessions/{session_id}")
messages = history.json()["messages"]
print(f"Conversation has {len(messages)} messages")

JavaScript/Node.js Example

const axios = require('axios');

const baseURL = 'http://localhost:8000';

async function chatWithBot() {
  try {
    // Create session
    const sessionResponse = await axios.post(`${baseURL}/api/sessions`);
    const sessionId = sessionResponse.data.session_id;
    console.log(`Created session: ${sessionId}`);

    // Send message
    const chatResponse = await axios.post(`${baseURL}/api/chat`, {
      message: "Tell me a joke about programming",
      session_id: sessionId
    });

    console.log(`Bot: ${chatResponse.data.response}`);

    // Get session history
    const historyResponse = await axios.get(`${baseURL}/api/sessions/${sessionId}`);
    console.log(`Messages in session: ${historyResponse.data.messages.length}`);

  } catch (error) {
    console.error('Error:', error.response?.data || error.message);
  }
}

chatWithBot();

curl Examples

# Health check
curl http://localhost:8000/api/health

# Create session
SESSION_ID=$(curl -s -X POST http://localhost:8000/api/sessions | jq -r '.session_id')
echo "Session ID: $SESSION_ID"

# Send message
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d "{\"message\": \"Explain APIs in simple terms\", \"session_id\": \"$SESSION_ID\"}"

# Get session history
curl http://localhost:8000/api/sessions/$SESSION_ID

🚨 Error Handling

All API errors follow a standardized format for consistent error handling:

{
  "error": {
    "code": "ERROR_CODE",
    "message": "Human-readable error message",
    "details": "Additional error context (optional)"
  },
  "timestamp": "2024-01-15T10:30:00Z"
}

Common Error Codes

Status Code   Error Code   Description
400           HTTP_400     Bad Request - Invalid input format or validation error
404           HTTP_404     Not Found - Session not found or expired
500           HTTP_500     Internal Server Error - Server or OpenAI API issues
503           HTTP_503     Service Unavailable - Service health check failed

Example Error Response

{
  "error": {
    "code": "HTTP_404",
    "message": "Session abc123 not found or expired",
    "details": null
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
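
Responses in this format can be produced by a global FastAPI exception handler along the following lines. This is a sketch of the general technique, not the project's actual handler in main.py.

from datetime import datetime, timezone

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from starlette.exceptions import HTTPException as StarletteHTTPException

app = FastAPI()

@app.exception_handler(StarletteHTTPException)
async def http_exception_handler(request: Request, exc: StarletteHTTPException):
    # Wrap every HTTP error in the documented envelope.
    return JSONResponse(
        status_code=exc.status_code,
        content={
            "error": {
                "code": f"HTTP_{exc.status_code}",
                "message": str(exc.detail),
                "details": None,
            },
            "timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        },
    )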

πŸ—οΈ Project Structure

chatbot_api/
├── main.py              # FastAPI application and server configuration
├── endpoints.py         # API route handlers and business logic
├── models.py            # Pydantic models for request/response validation
├── session_manager.py   # In-memory session storage and management
├── chatbot_service.py   # LangGraph chatbot wrapper service
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables
├── README.md            # This documentation
└── chatbot.ipynb        # Original notebook implementation

Component Overview

  • main.py: FastAPI application setup, lifecycle management, and global exception handling
  • endpoints.py: All API endpoint definitions with request/response handling
  • models.py: Pydantic models for data validation and serialization
  • session_manager.py: Thread-safe in-memory session storage with automatic cleanup
  • chatbot_service.py: LangGraph integration and OpenAI API wrapper
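
For orientation, a minimal LangGraph chat graph of the kind chatbot_service.py wraps might look like the sketch below. The node name and graph layout are assumptions; only the model name and temperature come from the documented defaults.

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END

# Model settings mirror the defaults documented above.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def call_model(state: MessagesState):
    # Send the accumulated conversation to the model and append its reply.
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node("chat", call_model)
builder.add_edge(START, "chat")
builder.add_edge("chat", END)
graph = builder.compile()

result = graph.invoke({"messages": [("user", "Hello!")]})
print(result["messages"][-1].content)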

🔧 Development

Running in Development Mode

# Enable debug mode for auto-reload and detailed error messages
DEBUG=true python main.py

Testing the API

  1. Interactive Documentation: Visit http://localhost:8000/docs
  2. Manual Testing: Use curl, Postman, or any HTTP client
  3. Health Check: Verify service status at http://localhost:8000/api/health
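
If you prefer automated checks, a small smoke test can be written with FastAPI's TestClient, assuming pytest and httpx are installed (they are not listed in requirements.txt). The test below is a sketch, and the chat test calls the live OpenAI API unless the service is mocked.

from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_health():
    response = client.get("/api/health")
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"

def test_chat_roundtrip():
    # Exercises the real chatbot service, so a valid OpenAI key must be configured.
    response = client.post("/api/chat", json={"message": "Hello"})
    assert response.status_code == 200
    body = response.json()
    assert "response" in body and "session_id" in body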

Adding New Features

  1. Define new Pydantic models in models.py
  2. Add endpoint logic to endpoints.py
  3. Update the router in main.py if needed
  4. Test using the interactive documentation
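
As a worked example of that flow, a hypothetical "delete session" feature might add a model and route like the following. The model, route path, and router shown here are illustrative only and do not exist in the project.

# models.py - hypothetical new response model
from pydantic import BaseModel

class DeleteSessionResponse(BaseModel):
    session_id: str
    deleted: bool

# endpoints.py - hypothetical new route handler
from fastapi import APIRouter, HTTPException

router = APIRouter()

@router.delete("/api/sessions/{session_id}", response_model=DeleteSessionResponse)
async def delete_session(session_id: str):
    # Replace this stub with a call into the real session manager.
    removed = True
    if not removed:
        raise HTTPException(status_code=404, detail=f"Session {session_id} not found or expired")
    return DeleteSessionResponse(session_id=session_id, deleted=removed)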

🚀 Production Deployment

Recommended Production Setup

  1. Environment Variables: Use proper environment variable management
  2. Process Management: Use process managers like PM2, systemd, or Docker
  3. Reverse Proxy: Use nginx or Apache as a reverse proxy
  4. HTTPS: Enable SSL/TLS encryption
  5. Monitoring: Implement logging and monitoring solutions

Production Considerations

  • Session Storage: Consider Redis or database for persistent session storage
  • Authentication: Implement API key authentication for production use (see the sketch after this list)
  • Rate Limiting: Add rate limiting to prevent abuse
  • Logging: Implement comprehensive logging
  • Error Monitoring: Use services like Sentry for error tracking
  • Load Balancing: Use load balancers for high availability
  • CORS: Configure CORS settings appropriately
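
For the authentication point above, one common approach is an API-key check implemented as a FastAPI dependency, sketched below. The X-API-Key header and SERVICE_API_KEY environment variable are assumptions, not part of this project.

import os

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

# Header name and SERVICE_API_KEY are illustrative choices.
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)

async def require_api_key(api_key: str = Security(api_key_header)) -> str:
    expected = os.getenv("SERVICE_API_KEY")
    if not api_key or api_key != expected:
        raise HTTPException(status_code=401, detail="Invalid or missing API key")
    return api_key

# Applying the dependency at application level protects every route.
app = FastAPI(dependencies=[Depends(require_api_key)])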

Docker Deployment (Optional)

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["python", "main.py"]
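
With a Dockerfile like the one above, the image can be built and run as follows (the image name ai-chatbot-api is just an example):

docker build -t ai-chatbot-api .
docker run --env-file .env -p 8000:8000 ai-chatbot-api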

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

πŸ“ License

This project is open source and available under the MIT License.

🆘 Troubleshooting

Common Issues

  1. OpenAI API Key Error: Ensure your API key is correctly set in the .env file
  2. Port Already in Use: Change the PORT in .env or kill the process using the port
  3. Module Not Found: Run pip install -r requirements.txt to install dependencies
  4. Session Not Found: Sessions expire after 1 hour by default
  5. JSON Serialization Error: Ensure all datetime objects are properly serialized

Getting Help

  • Check the interactive documentation at /docs
  • Review the health check endpoint at /api/health
  • Enable debug mode for detailed error messages
  • Check the server logs for error details

Happy Coding! 🎉

Built with ❀️ using FastAPI, LangGraph, and OpenAI GPT-4o-mini.
