
Commit f821064

v0.4.1.0-NetworkVolume
Update release compatible with Fooocus-API v0.4.1.0

Changelog:
- Fixed error returns #27 - RunPod now returns {"delayTime": 0, "error": "message", "executionTime": 0, "id": "runpod-job-id", "status": "FAILED"} on handler errors (wrong params, images, etc.).
- Updated containers and software versions.
- The new NSFW filter is tested and working. Try it with "advanced_params": {"black_out_nsfw": true}.

See also the [Fooocus-API changelog](https://github.com/mrhan1993/Fooocus-API/releases) to find out what's new in the API code, and the [Fooocus changelog](https://github.com/lllyasviel/Fooocus/releases) to see what's new in the included Fooocus version.

Breaking changes:
- This release introduces two new models, sdxl_hyper_lora and nsfw-checker (~2GB), so the higher $0.12/hr CPU pod has to be used for the network installation, as the total is now more than 20GB. Standalone is unaffected.
- The CUDA version has been updated to 12.1. To prevent unexpected errors, we recommend setting "Allowed CUDA Versions" in your Advanced RunPod endpoint settings to 12.1 and higher.
1 parent 0e3179f commit f821064
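Below is a minimal client-side sketch of the two points above: enabling the new NSFW filter via advanced_params and handling the new FAILED error format. The endpoint ID, API key, and all payload field names other than "advanced_params": {"black_out_nsfw": true} are illustrative placeholders, not taken from this repo; see docs/request_examples.js for the worker's exact request format.

```python
import requests

# Placeholders -- replace with your own RunPod serverless endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-runpod-api-key"

# Illustrative payload: the wrapper keys ("api_name", "params") are assumptions;
# only "advanced_params": {"black_out_nsfw": True} comes from this release note.
payload = {
    "input": {
        "api_name": "txt2img",
        "params": {
            "prompt": "a cozy cabin in the woods",
            "advanced_params": {"black_out_nsfw": True},
        },
    }
}

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,
)
job = resp.json()

# As of this release, handler errors come back as a FAILED job with an "error" field:
# {"delayTime": 0, "error": "message", "executionTime": 0, "id": "runpod-job-id", "status": "FAILED"}
if job.get("status") == "FAILED":
    print("Job failed:", job.get("error"))
else:
    print("Job status:", job.get("status"))
```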

7 files changed (+58, -49 lines)

Dockerfile_NetworkEndpoint

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # Base image
-FROM runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
+FROM runpod/pytorch:2.2.1-py3.10-cuda12.1.1-devel-ubuntu22.04
 
 ENV DEBIAN_FRONTEND=noninteractive \
     PIP_PREFER_BINARY=1 \

Dockerfile_NetworkSetup

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM alpine:3.19.1
+FROM alpine:3.20.1
 RUN apk add --no-cache git curl
 
 COPY builder/clone.sh /clone.sh

README.md

Lines changed: 7 additions & 6 deletions
@@ -1,14 +1,14 @@
 ![github-header](https://github.com/qodeindustries/Quinn-AI/assets/66263283/bf8149b2-cdc3-4a59-96fb-1d272221ef70)
-![Static Badge](https://img.shields.io/badge/API_version-0.4.0.6-blue) ![Static Badge](https://img.shields.io/badge/Fooocus_version-2.3.1-blue) ![Static Badge](https://img.shields.io/badge/API_coverage-100%25-vividgreen) ![Static Badge](https://img.shields.io/badge/API_tests-passed-vividgreen)
+![Static Badge](https://img.shields.io/badge/API_version-0.4.1.0-blue) ![Static Badge](https://img.shields.io/badge/Fooocus_version-2.4.1-blue) ![Static Badge](https://img.shields.io/badge/API_coverage-100%25-vividgreen) ![Static Badge](https://img.shields.io/badge/API_tests-passed-vividgreen)
 
 [Fooocus-API](https://github.com/mrhan1993/Fooocus-API) RunPod serverless worker implementation
 ___
-Repository consists of two branches:
+The repository consists of two branches:
 [NetworkVolume](https://github.com/davefojtik/RunPod-Fooocus-API/tree/NetworkVolume) and [Standalone](https://github.com/davefojtik/RunPod-Fooocus-API/tree/Standalone)
 
-The **NetworkVolume** expects you to install and prepare your own instance on the RunPod network volume, or to use our `3wad/runpod-fooocus-api:0.4.0.6-networksetup` to do so. This is ideal if you want to change models, loras or other contents on the fly, let your users upload them, or persist generated image files right on the server. The downside of this solution is slower starts because everything has to be loaded over the datacenter's network. See [network-guide](https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/docs/network-guide.md) for step-by-step instructions.
+The **NetworkVolume** expects you to install and prepare your own instance on the RunPod network volume, or to use our `3wad/runpod-fooocus-api:0.4.1.0-networksetup` to do so. This is ideal if you want to change models, loras or other contents on the fly, let your users upload them, or persist generated image files right on the server. The downside of this solution is slower starts because everything has to be loaded over the data centre's network. See [network-guide](https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/docs/network-guide.md) for step-by-step instructions.
 
-The **Standalone** branch is a ready-to-use docker image with all the files and models already baked and installed into it. You can still customize it to use your own content, but it can't be changed without rebuilding and redeploying the image. This is ideal if you want the fastest, cheapest possible endpoint for long-term usage without the need for frequent changes of its contents. See [standalone-guide](https://github.com/davefojtik/RunPod-Fooocus-API/blob/Standalone/docs/standalone-guide.md) or simply use `3wad/runpod-fooocus-api:0.4.0.6-standalone` as the image for a quick deploy with the default Juggernaut V8 on your RunPod serverless endpoint.
+The **Standalone** branch is a ready-to-use docker image with all the files and models already baked and installed into it. You can still customize it to use your own content, but it can't be changed without rebuilding and redeploying the image. This is ideal if you want the fastest, cheapest possible endpoint for long-term usage without the need for frequent changes in its contents. See [standalone-guide](https://github.com/davefojtik/RunPod-Fooocus-API/blob/Standalone/docs/standalone-guide.md) or simply use `3wad/runpod-fooocus-api:0.4.1.0-standalone` as the image for a quick deploy with the default Juggernaut V8 on your RunPod serverless endpoint.
 
 All prebuilt images can be found here: https://hub.docker.com/r/3wad/runpod-fooocus-api
 
@@ -19,6 +19,7 @@ All prebuilt images can be found here: https://hub.docker.com/r/3wad/runpod-fooo
 Feel free to make pull requests, fixes, improvements and suggestions to the code. Any cooperation on keeping this repo up-to-date and free of bugs is highly welcomed.
 
 ## Updates
-We're not always on the latest version automatically, as there can be breaking changes or major bugs. The updates are being made only after thorough tests by our community of Discord users generating images with the AI agent using this repo as it's tool. And only if we see that the new version performs better and stable.
+We're not always on the latest version automatically, as there can be breaking changes or major bugs. The updates are being made only after thorough tests by our community of Discord users generating images with the AI agent using this repo as its tool. And only if we see that the new version performs better and more stable.
 ___
-> *Disclaimer: This repo is in no way affiliated with RunPod Inc. All logos and names are owned by the authors. This is an unofficial community implementation*
+> [!NOTE]
+> *This repo is in no way affiliated with RunPod Inc. All logos and names are owned by the authors. This is an unofficial community implementation*

builder/requirements.txt

Lines changed: 2 additions & 2 deletions
@@ -6,13 +6,13 @@ requests-toolbelt==1.0.0
 runpod==1.6.2
 
 # Pulled from https://github.com/konieshadow/Fooocus-API/blob/main/requirements.txt
-torchsde==0.2.5
+torchsde==0.2.6
 einops==0.4.1
 transformers==4.30.2
 safetensors==0.3.1
 accelerate==0.21.0
 pyyaml==6.0
-Pillow==9.2.0
+Pillow==9.4.0
 scipy==1.9.3
 tqdm==4.64.1
 psutil==5.9.5

docs/network-guide.md

Lines changed: 4 additions & 4 deletions
@@ -1,12 +1,12 @@
 ## How to prepare Network Volume
 - [**Create RunPod network volume:**](https://www.runpod.io/console/user/storage)
-20GB is just enough for the generic Foocus with Juggernaut and **all** controlnet models (you can save some space by not downloading those you don't plan to use - by modifying the [script](https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/src/networksetup.sh) and making your own setup image). You can increase its size any time if you need additional models, loras etc. But unfortunately, it cannot be reduced back without creating a new one.
-- [**Create a custom CPU Pod Template:**](https://www.runpod.io/console/user/templates) use the `3wad/runpod-fooocus-api:0.4.0.6-networksetup` image, 20GB disk size and mount path `/workspace`. *Note: 20GB is max for CPU pods, if you need more space, you'll have to use GPU pod even for the installation*
+23GB is just enough for the generic Foocus with Juggernaut and **all** controlnet models (you can save some space by not downloading those you don't plan to use - by modifying the [script](https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/src/networksetup.sh) and making your own setup image). You can increase its size any time if you need additional models, loras etc. But unfortunately, it cannot be reduced back without creating a new one.
+- [**Create a custom CPU Pod Template:**](https://www.runpod.io/console/user/templates) use the `3wad/runpod-fooocus-api:0.4.1.0-networksetup` image, 22GB disk size and mount path `/workspace`. *Note: 20GB is max for the cheapest CPU pod, if you need more space, you'll have to use the higher 0,12$/hr tier*
 - [**Run a CPU pod:**](https://www.runpod.io/console/pods) with **network volume** and runpod-fooocus-api template you've just created. Any CPU pod will do, the installation is just download-intensive. After a while, you should see a "Setup complete!" message in Pod logs. After that, you can terminate the pod and move to the serverless setup steps.
 ---
-- Now you can use our premade image: `3wad/runpod-fooocus-api:0.4.0.6-networkendpoint` and skip the next step OR create your custom docker image from this repo that will run on the actual serverless API. Feel free to adjust the code to your needs.
+- Now you can use our premade image: `3wad/runpod-fooocus-api:0.4.1.0-networkendpoint` and skip the next step OR create your custom docker image from this repo that will run on the actual serverless API. Feel free to adjust the code to your needs.
 - *If you built your own image, upload it to the Docker Hub.*
 - [**Create a custom Serverless Pod Template:**](https://www.runpod.io/console/user/templates) using the Docker Hub image you've just uploaded (or our premade one).
-- [**Create a new Serverless API Endpoint:**](https://www.runpod.io/console/serverless) Make sure to choose your (or our) Docker Hub image and not the `3wad/runpod-fooocus-api:0.4.0.6-networksetup` from the step 2. In Advanced settings choose your created network volume.
+- [**Create a new Serverless API Endpoint:**](https://www.runpod.io/console/serverless) Make sure to choose your (or our) Docker Hub image and not the `3wad/runpod-fooocus-api:0.4.1.0-networksetup` from the step 2. In Advanced settings choose your created network volume.
 - Other settings are your choice, but I personally found that using 4090/L4 GPUs + Flashboot is the most cost-effective one.
 - That's it! See the [request_examples](https://github.com/davefojtik/RunPod-Fooocus-API/blob/NetworkVolume/docs/request_examples.js) for how to make requests to this endpoint from your app.

src/handler.py

Lines changed: 12 additions & 8 deletions
@@ -97,8 +97,9 @@ def process_img(value):
             else:
                 input_imgs[key] = process_img(params[key])
     except Exception as e:
-        print("Image conversion task failed: ", e)
-        return e
+        error_message = str(e)
+        print("Image conversion task failed: ", error_message)
+        return {"error": error_message}
 
 ''' ----------------------------
 Send requests to the Fooocus-API
@@ -131,8 +132,9 @@ def process_img(value):
             headers=headers,
             timeout=config["timeout"])
     except Exception as e:
-        print("multipart/form-data task failed: ", e)
-        return e
+        error_message = str(e)
+        print("multipart/form-data task failed: ", error_message)
+        return {"error": error_message}
     else: # If the final request should be application/json. Send the original request data
         # Convert the processed binary image back to url-safe-base64
         for key, value in input_imgs.items():
@@ -170,8 +172,9 @@ def preview_stream(jsn, event):
                 return
             time.sleep(int(event["input"].get('preview_interval', 1)))
     except Exception as e:
-        print("async preview task failed: ", e)
-        return e
+        error_message = str(e)
+        print("async preview task failed: ", error_message)
+        return {"error": error_message}
 
 def clearOutput():
     try:
@@ -181,8 +184,9 @@ def clearOutput():
         shutil.rmtree('/workspace/repositories/Fooocus/outputs')
         os.makedirs('/workspace/repositories/Fooocus/outputs')
     except Exception as e:
-        print("clear outputs task failed: ", e)
-        return e
+        error_message = str(e)
+        print("clear outputs task failed: ", error_message)
+        return {"error": error_message}
 
 def inpaint_preset(params):
     option = params.get("inpaint_preset")
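The change applied across handler.py is the same in every hunk: the caught exception is converted to a string and wrapped in a dict, so the worker returns a JSON-serializable {"error": ...} payload that RunPod can report as a FAILED job instead of an unserializable exception object. A self-contained sketch of that pattern (the function and its parameters are illustrative, not copied from handler.py):

```python
def example_task(params):
    """Illustrative stand-in for one of the handler's sub-tasks (not taken from handler.py)."""
    try:
        # Any failure here (bad params, missing image, request timeout, ...)
        # is converted to a plain string so the returned dict stays JSON-serializable.
        width = int(params["width"])
        return {"result": width * 2}
    except Exception as e:
        error_message = str(e)
        print("example task failed: ", error_message)
        return {"error": error_message}


print(example_task({"width": "512"}))  # -> {'result': 1024}
print(example_task({}))                # -> {'error': "'width'"}
```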
