biomchen/openai-gpt-serving-api
OpenAI GPT Model Serving API

Serve OpenAI GPT models locally for integration with frontend development.


Requirements

  1. Register an OpenAI developer account and get your API_KEY
  2. Use the sample Chat Completion function below to generate answers for your prompts
    from openai import OpenAI
    
    # The client reads the OPENAI_API_KEY environment variable by default
    client = OpenAI()
    
    def generate_answer(prompt):
        response = client.chat.completions.create(
            model="GPT_MODEL",  # placeholder: replace with your model name
            messages=[
                {"role": "user", "content": prompt}
            ]
        )
        # Return the text of the first (and only) completion choice
        return response.choices[0].message.content
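The `messages` argument passed to `chat.completions.create` is a list of role/content dictionaries, and multi-turn conversations are expressed by appending earlier turns before the new prompt. A minimal sketch of assembling such a list (the helper name and structure are illustrative, not part of this repo):

```python
def build_messages(history, prompt, system=None):
    """Assemble a Chat Completion `messages` list.

    history: list of (user_text, assistant_text) pairs from earlier turns.
    prompt: the new user prompt to append last.
    system: optional system instruction placed first.
    """
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": prompt})
    return messages

# Example: one prior exchange plus a new question
msgs = build_messages(
    [("Hi", "Hello! How can I help?")],
    "What is FastAPI?",
    system="You are a helpful assistant.",
)
```

The resulting list can be passed directly as the `messages` argument in `generate_answer` above if you extend it to accept conversation history.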

Deployment

  1. Create your own config file in JSON format using the configs_sample.json file
  2. Set up your Python env
    python3 -m venv YOUR_ENV_NAME
  3. Activate the env on macOS and Ubuntu
    source YOUR_ENV_NAME/bin/activate
  4. pip install the required libraries
    pip install -r requirements.txt
  5. Run the command in the terminal
    uvicorn serve:app --host 0.0.0.0 --port YOUR_DESIGNATED_PORT
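The contents of configs_sample.json are not reproduced here; as one illustration, a config for this kind of serving app would typically carry the API key and model name. The keys below are assumptions for the sketch, so check configs_sample.json for the actual field names:

```json
{
    "API_KEY": "YOUR_OPENAI_API_KEY",
    "GPT_MODEL": "YOUR_MODEL_NAME"
}
```

Keep this file out of version control (e.g. via .gitignore), since it contains your API key.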
