
AI Powered Request Handler Tool

A Python backend service that simplifies user interaction with OpenAI's API

Developed with the software and tools below:

  • Language: Python
  • Framework: FastAPI
  • Database: PostgreSQL
  • API: OpenAI

📑 Table of Contents

  • πŸ“ Overview
  • πŸ“¦ Features
  • πŸ“‚ Structure
  • πŸ’» Installation
  • πŸ—οΈ Usage
  • 🌐 Hosting
  • πŸ“„ License
  • πŸ‘ Authors

πŸ“ Overview

This repository contains the backend code for the AI Powered Request Handler Tool, a Python service that acts as a user-friendly intermediary between developers and OpenAI's API. It simplifies complex AI interactions, making advanced language processing accessible to a wider audience. This MVP addresses the growing need for straightforward AI integration, giving developers and users a powerful yet intuitive interface for leveraging OpenAI's capabilities.

The tool's core value proposition lies in its ability to streamline the process of sending requests to OpenAI's API and receiving processed responses. This eliminates the complexities of direct API interactions and allows users to focus on the core functionalities of their applications.

📦 Features

| Feature | Description |
|---------|-------------|
| ⚙️ Architecture | The MVP uses a serverless architecture: a Python backend deployed on a cloud platform (e.g., AWS Lambda, Azure Functions), triggered by user requests through either a command-line interface or API calls. |
| 📄 Documentation | This README provides a comprehensive overview of the MVP, its dependencies, and usage instructions. |
| 🔗 Dependencies | The codebase relies on external libraries and packages such as FastAPI, uvicorn, pydantic, psycopg2-binary, python-dotenv, openai, sqlalchemy, requests, pytest, docker, docker-compose, prometheus_client, gunicorn, and sentry-sdk. |
| 🧩 Modularity | The code is organized into modules by functionality (routers, models, schemas, services, utils, tests), promoting reusability and maintainability. |
| 🧪 Testing | Unit tests cover key modules such as openai_service and db_service to verify functionality and correctness. |
| ⚡️ Performance | The backend is optimized for efficient request processing and response handling, including caching frequently used API calls and minimizing unnecessary API requests. |
| 🔐 Security | Authentication and authorization measures protect API keys and user data, and all API interactions use secure protocols (HTTPS). |
| 🔀 Version Control | Uses Git for version control, with a startup.sh script for containerized deployment. |
| 🔌 Integrations | Integrates with OpenAI's API and a PostgreSQL database, and uses Python libraries for HTTP requests, JSON handling, and logging. |
| 📶 Scalability | The architecture is designed to handle increasing user load and evolving OpenAI API features, with provisions for load balancing, horizontal scaling, and efficient database management. |

📂 Structure

├── main.py                    # Main application entry point
├── routers
│   ├── requests.py            # API endpoint for handling user requests
│   └── settings.py            # API endpoint for managing user settings
├── models
│   ├── request.py             # Database model for user requests
│   └── settings.py            # Database model for user settings
├── schemas
│   ├── request_schema.py      # Pydantic schema for validating user requests
│   └── settings_schema.py     # Pydantic schema for validating user settings
├── services
│   ├── openai_service.py      # Service for interacting with the OpenAI API
│   └── db_service.py          # Service for interacting with the database
├── utils
│   ├── logger.py              # Logging utility for the application
│   ├── exceptions.py          # Custom exception classes for error handling
│   └── config.py              # Configuration utility for loading environment variables
└── tests
    └── unit
        ├── test_openai_service.py  # Unit tests for the openai_service module
        └── test_db_service.py      # Unit tests for the db_service module

💻 Installation

🔧 Prerequisites

  • Python 3.9+
  • PostgreSQL 15+
  • Docker and Docker Compose

🚀 Setup Instructions

  1. Clone the repository:

    git clone https://github.com/coslynx/AI-Powered-Request-Handler-Tool.git
    cd AI-Powered-Request-Handler-Tool
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up environment variables:

    • Create a .env file in the project root.
    • Add the following environment variables:
      OPENAI_API_KEY=YOUR_API_KEY
      DATABASE_URL=postgresql://user:password@host:port/database
      
  4. Start the database (if necessary):

    docker-compose up -d db
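The repository lists python-dotenv among its dependencies for loading the `.env` file created in step 3. As a rough, stdlib-only sketch of what that loading step does (the function name is hypothetical, and real python-dotenv handles quoting and interpolation that this sketch skips):

```python
import os


def load_dotenv_minimal(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file and export them
    into os.environ. Blank lines and '#' comments are skipped."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")  # split on the first '='
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values
```

After loading, the application can read `os.environ["OPENAI_API_KEY"]` and `os.environ["DATABASE_URL"]` as usual.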

πŸ—οΈ Usage

πŸƒβ€β™‚οΈ Running the MVP

  1. Start the application:

    docker-compose up
  2. Access the application:

    The API listens on the port mapped in docker-compose.yml (FastAPI served by uvicorn defaults to 8000); FastAPI's interactive API documentation is available at the /docs path.

🌐 Hosting

🚀 Deployment Instructions

  1. Build the Docker image:

    docker build -t ai-request-handler .
  2. Deploy the container to a cloud platform (e.g., AWS ECS, Google Kubernetes Engine):

    • Configure the cloud platform with necessary resources (e.g., database, load balancer).
    • Create a deployment configuration for the ai-request-handler image.
  3. Configure environment variables (similar to the .env file) on the cloud platform.

  4. Deploy the application.

🔑 Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key.
  • DATABASE_URL: The connection string to your PostgreSQL database.
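The DATABASE_URL value follows the standard `postgresql://user:password@host:port/database` form, so its components can be split with Python's `urllib.parse` if individual fields are needed; the helper below is a hypothetical illustration, not part of the repository:

```python
from urllib.parse import urlparse


def parse_database_url(url: str) -> dict:
    """Split a postgresql:// connection string into its components."""
    parts = urlparse(url)
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),  # path holds "/<database>"
    }
```

In practice SQLAlchemy accepts the full URL directly, so such parsing is only needed for tooling or diagnostics.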

📜 API Documentation

πŸ” Endpoints

  • POST /requests: Sends a request to the OpenAI API.

    • Request Body:
      {
        "prompt": "Write a short story about a dog and a cat",
        "model": "text-davinci-003",
        "temperature": 0.7
      }
    • Response Body:
      {
        "status": "success",
        "response": "Once upon a time, there was a dog named..." 
      }
  • GET /settings: Retrieves user settings.

    • Response Body:
      {
        "api_key": "YOUR_API_KEY",
        "preferred_model": "text-davinci-003" 
      }
  • PUT /settings: Updates user settings.

    • Request Body:
      {
        "api_key": "NEW_API_KEY",
        "preferred_model": "text-curie-001" 
      }
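A client can assemble and sanity-check the POST /requests body before sending it. The sketch below mirrors the field names from the example above; the validation ranges and the helper name are assumptions for illustration, not documented by the service:

```python
import json


def build_completion_request(prompt: str,
                             model: str = "text-davinci-003",
                             temperature: float = 0.7) -> str:
    """Serialize the JSON body expected by POST /requests.
    The temperature bounds here are an assumption (OpenAI-style 0.0-2.0)."""
    if not prompt:
        raise ValueError("prompt must be non-empty")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return json.dumps({
        "prompt": prompt,
        "model": model,
        "temperature": temperature,
    })
```

The resulting string could then be posted with, e.g., `requests.post(url, data=body, headers={"Content-Type": "application/json"})` against the deployed service.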

📜 License & Attribution

📄 License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

🤖 AI-Generated MVP

This MVP was entirely generated using artificial intelligence through CosLynx.com.

No human was directly involved in the coding process of the repository: AI-Powered-Request-Handler-Tool

📞 Contact

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!