From c3b18736c24f44cde46088dac490729a38ce767f Mon Sep 17 00:00:00 2001
From: Vikramjeet Singh <72499426+VikramxD@users.noreply.github.com>
Date: Wed, 8 Jan 2025 13:36:14 +0530
Subject: [PATCH] Update README.md

---
 README.md | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index be1d1cb..358f5fb 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ python scripts/download_weights.py
 Daifuku can serve models individually or combine them behind a single endpoint:
-Mochi-Only Server
+Mochi Server

 ```bash
 python api/mochi_serve.py
@@ -85,7 +85,7 @@ python api/mochi_serve.py
-LTX-Only Server
+LTX Server

 ```bash
 python api/ltx_serve.py
@@ -93,13 +93,23 @@ python api/ltx_serve.py
 ```
+
+Allegro Server
+
+```bash
+python api/allegro_serve.py
+# Endpoint: http://127.0.0.1:8000/api/v1/video/allegro
+```
+
+
 Combined Server

 ```bash
 python api/serve.py
 # Endpoint: http://127.0.0.1:8000/predict
-# Must supply "model_name": "mochi" or "model_name": "ltx" in the request payload.
+# Must supply "model_name" ("mochi", "ltx", or "allegro") in the request payload.
 ```
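
For reference, the combined server takes a JSON request body carrying `model_name`; a minimal payload might look like the sketch below (only `model_name` is documented in this patch — the `prompt` field is an assumed example, not confirmed here):

```json
{
  "model_name": "mochi",
  "prompt": "A serene sunset over the ocean"
}
```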