Serve OpenAI GPT models locally for integration with frontend development
- Register an OpenAI developer account and get the `API_KEY`
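The OpenAI Python client reads the key from the `OPENAI_API_KEY` environment variable by default; as a minimal sketch (passing the key explicitly is optional and shown here only to make the dependency visible), the client can also be constructed like this:

```python
import os
from openai import OpenAI

# Equivalent to OpenAI() when OPENAI_API_KEY is already exported
# in your shell; fails loudly if the variable is missing.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```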
- Use the given sample Chat Completion function to generate answers for your prompts

```python
from openai import OpenAI

client = OpenAI()

def generate_answer(prompt):
    response = client.chat.completions.create(
        model="GPT_MODEL",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content
```
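Replace `GPT_MODEL` with a model your account can access (for example `gpt-4o-mini`; that name is only an illustration). A quick check that the function works:

```python
# Prints the model's reply to a simple test prompt.
print(generate_answer("Explain what a REST API is in one sentence."))
```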
- Create your own config file in JSON format using the `configs_sample.json` file
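The keys in the config depend on `configs_sample.json`, which is not reproduced here; as a minimal sketch (the `config.json` file name is an assumption for illustration), loading your filled-in copy might look like:

```python
import json

# Load the filled-in config; use whatever file name and keys
# configs_sample.json actually defines.
with open("config.json") as f:
    config = json.load(f)
print("Loaded config keys:", list(config.keys()))
```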
- Set up your Python env
```bash
python3 -m venv YOUR_ENV_NAME
```
- Activate the env on macOS and Ubuntu

```bash
source YOUR_ENV_NAME/bin/activate
```
- Install the required libraries

```bash
pip install -r requirements.txt
```
- Run the command in the terminal to start the server

```bash
uvicorn serve:app --host 0.0.0.0 --port YOUR_DESIGNATED_PORT
```
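`serve.py` itself is not shown in this section. As a rough sketch of what a compatible app could look like (FastAPI, the `/generate` route, and the request model below are assumptions, not the project's actual implementation), the sample Chat Completion call can be wrapped in an ASGI app that uvicorn can serve:

```python
# serve.py -- hypothetical sketch; the real serve.py may differ.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()
app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

@app.post("/generate")
def generate(req: PromptRequest):
    # Same Chat Completions call as the sample function above.
    response = client.chat.completions.create(
        model="GPT_MODEL",  # replace with a real model name
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"answer": response.choices[0].message.content}
```

With an app like this running, a frontend would POST JSON such as `{"prompt": "..."}` to `http://localhost:YOUR_DESIGNATED_PORT/generate`.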