中文文档 (Chinese documentation)

⬆️Features

  1. One-stop packaging of deep learning images
  • Just write a configuration file and an inference file
  • No need to manually install mainstream machine learning libraries such as PyTorch, TensorFlow, or Transformers
  • No need to configure the GPU environment (CUDA, cuDNN, etc.)
  • At runtime, the image serves a high-performance API for easy invocation
  2. Push images for cloud hosting
  3. Generate a local debugging GUI and API documentation

📚Requirements

  • Python 3.7+
  • Docker. SwanAPI uses Docker to create containers for your models; before you can run SwanAPI, you need to install Docker.

🔧Install

```bash
pip install swanapi -i https://pypi.org/simple
```

🚀How it works

Define the Docker environment your model runs in with swan.yaml:

```yaml
build:
  gpu: false
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "numpy"
    - "onnxruntime"
    - "opencv-python"
predict:
  port: 8000
```

If you are in China, you can add `python_source: "cn"` under `build` to install `python_packages` from the Tsinghua mirror.
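For example, the `build` section above would then look like this (a sketch; only the added line differs):

```yaml
build:
  gpu: false
  python_source: "cn"   # install python_packages from the Tsinghua PyPI mirror
  python_version: "3.10"
  python_packages:
    - "numpy"
```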

In `predict.py`, define how your model performs inference; SwanAPI uses a code style similar to Gradio's.

```python
import cv2
from swanapi import SwanInference

# Example task: convert the input image to grayscale.
def predict(image):
    result_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return result_image

if __name__ == "__main__":
    api = SwanInference()
    api.inference(predict,
                  inputs=['image'],
                  outputs=['image'],
                  description="a simple test")
    api.launch()
```

  • 2023.7.8: v0.1.3 supports three input/output types: text, image, and number

💻Server

Now you can directly turn the model into a prediction service.

```console
$ python predict.py
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
```

You can also build a deep learning inference image with just one command.

```console
$ swanapi build -t my-dl-model
--> Building Docker Image...
--> Building Docker Finish.
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
```
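Once the image is built, you can also start it later with plain Docker (a sketch; the tag matches the build command above, and the port comes from `predict.port` in `swan.yaml`):

```shell
docker run --rm -p 8000:8000 my-dl-model
```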

Request your model's inference service (taking an image-to-image task as an example):

  • Python

```python
import requests

url = "http://127.0.0.1:8000/predictions/"
files = [('image', ('test.jpg', open('./test.jpg', 'rb'), 'image/jpeg'))]
response = requests.post(url, files=files)
print(response.text)
```
  • cURL

```shell
curl --location 'http://127.0.0.1:8000/predictions/' \
--form 'image=@"./test.jpg"'
```

The returned JSON has the form `{"content": "<base64-encoded image>"}`.

Decode the base64 string to recover the image file:

```python
import base64

import cv2
import numpy as np

# `response` is the requests.Response returned by the call above.
image_base64 = response.json()['content']

# base64 string -> raw bytes -> numpy buffer -> OpenCV image.
nparr = np.frombuffer(base64.b64decode(image_base64), np.uint8)
img_restore = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
```
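The base64 round trip itself can be sanity-checked without a running server (a minimal sketch; the byte string below is a stand-in for real encoded-image data):

```python
import base64

# Stand-in for the bytes of an encoded image file (e.g. a JPEG).
raw = b"\xff\xd8\xff\xe0-fake-jpeg-bytes"

# What the service conceptually does before building {"content": ...}.
payload = base64.b64encode(raw).decode("ascii")

# What the client does with response.json()["content"].
restored = base64.b64decode(payload)
assert restored == raw
```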