OpenAI-compatible API for image editing using Qwen-Image-Edit model.
- OpenAI-compatible endpoints for seamless integration with OpenWebUI/OpenRouter
- GPU-accelerated image editing using Qwen-Image-Edit
- Simple REST API with multipart form data support
- Docker support with CUDA
- NVIDIA GPU with CUDA support
- Python 3.10+
- 16GB+ GPU memory recommended
- Install dependencies:
pip install -r requirements.txt
pip install git+https://github.com/huggingface/diffusers
- Run the server:
python run.py
The API will be available at http://localhost:8000
Note: On first run, the model (~15GB) will be downloaded from Hugging Face.
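Once the server is up, you can confirm the model has finished loading via the health endpoint. A minimal check with the requests library (the exact response fields are whatever /v1/health returns):

import requests

# Check server status and model loading state; the first run can take a while
# because the ~15GB model is still downloading.
resp = requests.get("http://localhost:8000/v1/health", timeout=10)
print(resp.status_code, resp.json())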
- Build and run with Docker Compose:
docker-compose up -d
The API will be accessible at http://localhost:200
Note: On first run, the model (~15GB) will be downloaded to ./model-cache/
directory. Subsequent restarts will use the cached model.
The image is automatically built and published to GitHub Container Registry when code is pushed.
- Deploy directly to Kubernetes:
kubectl apply -f k8s-deployment.yaml
- Log in to GitHub Container Registry:
echo $GITHUB_TOKEN | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin
- Build and push the image:
docker build -t ghcr.io/aihpi/recreate-goods-edit-api:latest .
docker push ghcr.io/aihpi/recreate-goods-edit-api:latest
- Deploy to Kubernetes:
kubectl apply -f k8s-deployment.yaml
The service will be accessible on port 200 within the cluster (ClusterIP). Use kubectl port-forward or an Ingress for external access.
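For quick local testing, a port-forward is enough (the service name below is a placeholder; use the name actually defined in k8s-deployment.yaml):

kubectl port-forward svc/<service-name> 200:200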
Note: The init container will download the model to a persistent volume on first deployment. The model is shared across pod restarts.
POST /v1/images/edits
Send a multipart form with:
- image: Image file to edit
- prompt: Edit instruction (e.g., "Change the background to sunset")
- model: (optional) Model ID, defaults to "qwen-image-edit"
Returns base64-encoded edited image in OpenAI format.
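The same request can be made from Python. A minimal sketch with requests, assuming a local run on port 8000 (use 200 for Docker/Kubernetes) and an OpenAI-style response with a data[0].b64_json field; adjust if the actual payload differs:

import base64
import requests

# Upload the image plus an edit instruction as multipart form data
with open("input.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8000/v1/images/edits",
        files={"image": f},
        data={"prompt": "Change the background to sunset", "model": "qwen-image-edit"},
        timeout=300,
    )
response.raise_for_status()

# Decode the base64-encoded result and save it to disk
b64_image = response.json()["data"][0]["b64_json"]
with open("edited.png", "wb") as out:
    out.write(base64.b64decode(b64_image))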
GET /v1/models
Returns available models.
GET /v1/health
Returns server status and model loading state.
- Start the API server
- In OpenWebUI settings, add a custom OpenAI API endpoint:
  - URL: http://localhost:200/v1 (Docker) or http://localhost:8000/v1 (local)
  - Model: qwen-image-edit
# Docker/Kubernetes (port 200)
curl -X POST "http://localhost:200/v1/images/edits" \
-H "Content-Type: multipart/form-data" \
-F "image=@input.jpg" \
-F "prompt=Make the sky purple" \
-F "model=qwen-image-edit"
Create a .env file if you want to change the default config (see .env.example):
- HOST: Server host (default: 0.0.0.0)
- PORT: Server port (default: 8000)
- DEVICE: cuda or cpu (auto-detected)
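For example, a .env that spells out the documented defaults (DEVICE is auto-detected if omitted; set it only to force a device):

# .env
HOST=0.0.0.0
PORT=8000
DEVICE=cuda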
Qwen-Image-Edit supports:
- Semantic image editing
- Text rendering and modification
- Style transfer
- Object manipulation
- Background changes
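For a rough sense of how these map to prompts, a few illustrative examples (hypothetical prompts, not taken from the model documentation):

# Illustrative edit prompts per capability (hypothetical examples)
example_prompts = {
    "semantic editing": "Replace the bicycle with a red scooter",
    "text rendering": 'Change the storefront sign to read "OPEN 24/7"',
    "style transfer": "Repaint the photo in a watercolor style",
    "object manipulation": "Move the coffee cup to the left edge of the table",
    "background change": "Change the background to sunset",
}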
If you run into CUDA out-of-memory errors, reduce the input image size or ensure you have 16GB+ GPU memory.
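One quick way to shrink the input before sending it is a resize with Pillow; a sketch, assuming Pillow is installed and a 1024px longest side is acceptable for your use case:

from PIL import Image

# Downscale so the longest side is at most 1024 px (keeps aspect ratio, resizes in place)
img = Image.open("input.jpg")
img.thumbnail((1024, 1024))
img.save("input_small.jpg", quality=95)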
If the model fails to load, make sure you have the latest diffusers installed:
pip install git+https://github.com/huggingface/diffusers
If the GPU is not being used, check your NVIDIA drivers and PyTorch CUDA installation:
import torch
print(torch.cuda.is_available())  # should print True if PyTorch can see the GPU