- Overview
- Features
- Structure
- Installation
- Usage
- Hosting
- License
- Authors
This repository contains the backend code for the AI-Powered Request Handler Tool, a Python service that acts as a user-friendly intermediary between developers and OpenAI's API. It simplifies complex AI interactions, making advanced language processing accessible to a wider audience. This MVP addresses the growing need for straightforward AI integration, giving developers and users a powerful yet intuitive interface for leveraging OpenAI's capabilities.
The tool's core value proposition lies in its ability to streamline the process of sending requests to OpenAI's API and receiving processed responses. This eliminates the complexities of direct API interactions and allows users to focus on the core functionalities of their applications.
| Feature | Description |
|---|---|
| Architecture | The MVP uses a serverless architecture with a Python backend deployed on a cloud platform (e.g., AWS Lambda, Azure Functions), triggered by user requests through either a command-line interface or API calls. |
| Documentation | This README provides a comprehensive overview of the MVP, its dependencies, and usage instructions. |
| Dependencies | The codebase relies on external libraries and packages such as `FastAPI`, `uvicorn`, `pydantic`, `psycopg2-binary`, `python-dotenv`, `openai`, `sqlalchemy`, `requests`, `pytest`, `docker`, `docker-compose`, `prometheus_client`, `gunicorn`, and `sentry-sdk`. |
| Modularity | The code is organized into modules by functionality (routers, models, schemas, services, utils, tests), promoting reusability and maintainability. |
| Testing | Unit tests are included for key modules such as `openai_service` and `db_service` to ensure functionality and correctness. |
| Performance | The backend is optimized for efficient request processing and response handling, including caching frequently used API calls and minimizing unnecessary API requests. |
| Security | Authentication and authorization measures protect API keys and user data, and all API interactions use HTTPS. |
| Version Control | Uses Git for version control, with a `startup.sh` script for containerized deployment. |
| Integrations | The MVP integrates with OpenAI's API and a PostgreSQL database, and uses Python libraries for HTTP requests, JSON handling, and logging. |
| Scalability | The architecture is designed to handle increasing user load and evolving OpenAI API features, with provisions for load balancing, horizontal scaling, and efficient database management. |
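The performance feature above mentions caching frequently used API calls. A minimal stdlib sketch of that idea using `functools.lru_cache` (the function name and placeholder response below are illustrative, not the repository's actual code):

```python
from functools import lru_cache

# Hypothetical sketch: cache completions keyed on (prompt, model, temperature)
# so an identical request never triggers a second upstream API call.
@lru_cache(maxsize=256)
def cached_completion(prompt: str, model: str, temperature: float) -> str:
    # In the real service this would call the OpenAI API; here we return a
    # placeholder string so the caching behaviour itself can be demonstrated.
    return f"[{model} @ {temperature}] response to: {prompt}"

first = cached_completion("Hello", "text-davinci-003", 0.7)
second = cached_completion("Hello", "text-davinci-003", 0.7)  # served from cache
print(cached_completion.cache_info())
```

Because arguments must be hashable for `lru_cache`, a real implementation would also need a cache-invalidation policy (e.g., TTL) so stale completions are eventually refreshed.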
```
├── main.py                         # Main application entry point
├── routers
│   ├── requests.py                 # API endpoint for handling user requests
│   └── settings.py                 # API endpoint for managing user settings
├── models
│   ├── request.py                  # Database model for user requests
│   └── settings.py                 # Database model for user settings
├── schemas
│   ├── request_schema.py           # Pydantic schema for validating user requests
│   └── settings_schema.py          # Pydantic schema for validating user settings
├── services
│   ├── openai_service.py           # Service for interacting with the OpenAI API
│   └── db_service.py               # Service for interacting with the database
├── utils
│   ├── logger.py                   # Logging utility for the application
│   ├── exceptions.py               # Custom exception classes for error handling
│   └── config.py                   # Configuration utility for loading environment variables
└── tests
    └── unit
        ├── test_openai_service.py  # Unit tests for the openai_service module
        └── test_db_service.py      # Unit tests for the db_service module
```
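The structure above lists `utils/exceptions.py` for custom error handling. A minimal sketch of what such a hierarchy could look like (the class names are illustrative assumptions, not the repository's actual definitions):

```python
from typing import Optional

class RequestHandlerError(Exception):
    """Hypothetical base class for all application-specific errors."""

class OpenAIServiceError(RequestHandlerError):
    """Raised when a call to the OpenAI API fails."""
    def __init__(self, message: str, status_code: Optional[int] = None):
        super().__init__(message)
        self.status_code = status_code

class DatabaseError(RequestHandlerError):
    """Raised when a database operation fails."""

# A caller can handle every application error with a single except clause:
try:
    raise OpenAIServiceError("rate limit exceeded", status_code=429)
except RequestHandlerError as exc:
    print(type(exc).__name__, exc)
```

A shared base class keeps router-level error handling uniform: one exception handler can map any `RequestHandlerError` subclass to an appropriate HTTP response.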
- Python 3.9+
- PostgreSQL 15+
- Docker 5.0.0+
1. Clone the repository:
   ```bash
   git clone https://github.com/coslynx/AI-Powered-Request-Handler-Tool.git
   cd AI-Powered-Request-Handler-Tool
   ```
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Set up environment variables:
   - Create a `.env` file in the project root.
   - Add the following environment variables:
     ```
     OPENAI_API_KEY=YOUR_API_KEY
     DATABASE_URL=postgresql://user:password@host:port/database
     ```
4. Start the database (if necessary):
   ```bash
   docker-compose up -d db
   ```
5. Start the application:
   ```bash
   docker-compose up
   ```
6. Access the application:
   - API documentation: http://localhost:8000/docs
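The structure section lists `utils/config.py` as the module that loads these environment variables. A stdlib-only sketch of how that loading might look (the `Config` class and fallback value are illustrative; the actual module reads the `.env` file via `python-dotenv`):

```python
import os

class Config:
    """Hypothetical configuration loader mirroring the variables in .env."""
    def __init__(self) -> None:
        # Fail fast if the required API key is missing.
        self.openai_api_key = os.environ["OPENAI_API_KEY"]
        # Fall back to a local connection string for development runs.
        self.database_url = os.environ.get(
            "DATABASE_URL",
            "postgresql://user:password@localhost:5432/database",
        )

os.environ["OPENAI_API_KEY"] = "sk-test"  # stand-in value for this demo only
config = Config()
print(config.openai_api_key)
```

Reading required variables with `os.environ[...]` (rather than `.get`) makes a missing key raise `KeyError` at startup instead of failing later, mid-request.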
1. Build the Docker image:
   ```bash
   docker build -t ai-request-handler .
   ```
2. Deploy the container to a cloud platform (e.g., AWS ECS, Google Kubernetes Engine):
   - Configure the cloud platform with the necessary resources (e.g., database, load balancer).
   - Create a deployment configuration for the `ai-request-handler` image.
3. Configure environment variables (mirroring the `.env` file) on the cloud platform.
4. Deploy the application.

Required environment variables:

- `OPENAI_API_KEY`: Your OpenAI API key.
- `DATABASE_URL`: The connection string to your PostgreSQL database.
- `POST /requests`: Sends a request to the OpenAI API.
  - Request body:
    ```json
    {
      "prompt": "Write a short story about a dog and a cat",
      "model": "text-davinci-003",
      "temperature": 0.7
    }
    ```
  - Response body:
    ```json
    {
      "status": "success",
      "response": "Once upon a time, there was a dog named..."
    }
    ```
- `GET /settings`: Retrieves user settings.
  - Response body:
    ```json
    {
      "api_key": "YOUR_API_KEY",
      "preferred_model": "text-davinci-003"
    }
    ```
- `PUT /settings`: Updates user settings.
  - Request body:
    ```json
    {
      "api_key": "NEW_API_KEY",
      "preferred_model": "text-curie-001"
    }
    ```
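The request bodies above are validated by the pydantic schemas in `schemas/request_schema.py`. A stdlib stand-in showing the same validation idea with a `dataclass` (field names mirror the request body; the class and its bounds are illustrative assumptions, not the project's actual schema):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RequestPayload:
    """Hypothetical stand-in for the pydantic request schema."""
    prompt: str
    model: str = "text-davinci-003"
    temperature: float = 0.7

    def __post_init__(self) -> None:
        # Reject obviously invalid inputs before they reach the API.
        if not self.prompt:
            raise ValueError("prompt must be non-empty")
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be between 0.0 and 2.0")

payload = RequestPayload(prompt="Write a short story about a dog and a cat")
body = json.dumps(asdict(payload))  # JSON body for POST /requests
print(body)
```

In the real service, pydantic performs this validation automatically when FastAPI parses the request body, returning a 422 response for invalid input.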
This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.
This MVP was generated entirely by artificial intelligence through CosLynx.com. No human was directly involved in the coding of the repository AI-Powered-Request-Handler-Tool.
For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:
- Website: CosLynx.com
- Twitter: @CosLynxAI
Create Your Custom MVP in Minutes With CosLynxAI!