diff --git a/README.md b/README.md
index be1d1cb..358f5fb 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ python scripts/download_weights.py
 Daifuku can serve models individually or combine them behind a single endpoint:
-Mochi-Only Server
+Mochi Server
 
 ```bash
 python api/mochi_serve.py
@@ -85,7 +85,7 @@ python api/mochi_serve.py
-LTX-Only Server
+LTX Server
 
 ```bash
 python api/ltx_serve.py
@@ -93,13 +93,23 @@ python api/ltx_serve.py
 ```
+
+Allegro Server
+
+```bash
+python api/allegro_serve.py
+# Endpoint: http://127.0.0.1:8000/api/v1/video/allegro
+```
+
+
+
 Combined Server
 
 ```bash
 python api/serve.py
 # Endpoint: http://127.0.0.1:8000/predict
-# Must supply "model_name": "mochi" or "model_name": "ltx" in the request payload.
+# Must supply "model_name" ("mochi", "ltx", or "allegro") in the request payload.
 ```
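Since the combined `/predict` endpoint dispatches on `"model_name"`, the README could also show how a client builds the request body. A minimal sketch, assuming JSON payloads; only `"model_name"` is documented, so the `"prompt"` field and the `ALLOWED` set's exact contents are assumptions:

```python
import json

# Model names assumed from the per-model sections above.
ALLOWED = {"mochi", "ltx", "allegro"}

def build_payload(model_name: str, prompt: str) -> dict:
    """Validate the model name and assemble the request payload."""
    if model_name not in ALLOWED:
        raise ValueError(f"unknown model_name: {model_name!r}")
    return {"model_name": model_name, "prompt": prompt}

# Serialized body to POST to http://127.0.0.1:8000/predict
body = json.dumps(build_payload("mochi", "a koi pond at dawn"))
```

Validating the name client-side fails fast instead of waiting on a server-side error for a typo like `"moch"`.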