From f8f10ed27c9d8ac86e9e4420bd1617117800b68f Mon Sep 17 00:00:00 2001 From: Justin Hayes Date: Sun, 25 Feb 2024 18:06:36 -0500 Subject: [PATCH 1/3] Add tutorials --- docs/tutorial/index.md | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/docs/tutorial/index.md b/docs/tutorial/index.md index a94a836d..38157466 100644 --- a/docs/tutorial/index.md +++ b/docs/tutorial/index.md @@ -10,3 +10,43 @@ title: "📝 Tutorial" # **Seeking Contributors!** ::: + +## How to Add a New Model to LiteLLM + +LiteLLM supports a variety of APIs, both OpenAI-compatible and others. To integrate a new API model, follow these instructions: + +1. Go to the Settings > Models > LiteLLM model management interface. +2. In 'Simple' mode, you will only see the option to enter a **Model**. +3. For additional configuration options, click on the 'Simple' toggle to switch to 'Advanced' mode. Here you can enter: + - **Model Name**: The name of the model as you want it to appear in the models list. + - **API Base URL**: The base URL for your API provider. This field can usually be left blank unless your provider specifies a custom endpoint URL. + - **API Key**: Your unique API key. Enter the key provided by your API provider. + - **API RPM**: The allowed requests per minute for your API. Enter the value appropriate to your API plan. + +4. After entering all the required information, click the '+' button to add the new model to LiteLLM. + +For more information on the specific providers and advanced settings, consult the [LiteLLM Providers Documentation](https://litellm.vercel.app/docs/providers). + +## Image Generation with AUTOMATIC1111 API + +Open WebUI now supports image generation through the AUTOMATIC1111 API. To set this up, follow these steps: + +### Initial Setup + +1. Ensure that you have AUTOMATIC1111 installed. +2.
Launch the WebUI with additional flags to enable API access: + ``` + ./webui.sh --api --listen + ``` + For Docker installations, use the `--listen` flag to allow connections outside of localhost. + +### Configuring Open WebUI + +1. In Open WebUI, navigate to Settings > Images. +2. In the API URL field, enter the address where AUTOMATIC1111's API is accessible, following this format: + ``` + http://<address>:7860 + ``` + If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, replace `<address>` with `host.docker.internal`. + +Please note that, as of now, only the AUTOMATIC1111 API is supported for image generation within Open WebUI. \ No newline at end of file From 63c0b0f7b65e0aa0c6913bc2d6e180f196d75e50 Mon Sep 17 00:00:00 2001 From: Justin Hayes Date: Sun, 25 Feb 2024 18:11:10 -0500 Subject: [PATCH 2/3] Reorganize --- docs/tutorial/images.md | 23 +++++++++++++++++++++++ docs/tutorial/index.md | 40 ---------------------------------------- docs/tutorial/litellm.md | 15 +++++++++++++++ 3 files changed, 38 insertions(+), 40 deletions(-) create mode 100644 docs/tutorial/images.md create mode 100644 docs/tutorial/litellm.md diff --git a/docs/tutorial/images.md b/docs/tutorial/images.md new file mode 100644 index 00000000..a5059c39 --- /dev/null +++ b/docs/tutorial/images.md @@ -0,0 +1,23 @@ +## Image Generation + +Open WebUI now supports image generation through the AUTOMATIC1111 API. To set this up, follow these steps: + +### Initial Setup + +1. Ensure that you have AUTOMATIC1111 installed. +2.
In the API URL field, enter the address where AUTOMATIC1111's API is accessible, following this format: + ``` + http://<address>:7860 + ``` + If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, replace `<address>` with `host.docker.internal`. + +Please note that, as of now, only the AUTOMATIC1111 API is supported for image generation within Open WebUI. \ No newline at end of file diff --git a/docs/tutorial/index.md b/docs/tutorial/index.md index 38157466..a94a836d 100644 --- a/docs/tutorial/index.md +++ b/docs/tutorial/index.md @@ -10,43 +10,3 @@ title: "📝 Tutorial" # **Seeking Contributors!** ::: - -## How to Add a New Model to LiteLLM - -LiteLLM supports a variety of APIs, both OpenAI-compatible and others. To integrate a new API model, follow these instructions: - -1. Go to the Settings > Models > LiteLLM model management interface. -2. In 'Simple' mode, you will only see the option to enter a **Model**. -3. For additional configuration options, click on the 'Simple' toggle to switch to 'Advanced' mode. Here you can enter: - - **Model Name**: The name of the model as you want it to appear in the models list. - - **API Base URL**: The base URL for your API provider. This field can usually be left blank unless your provider specifies a custom endpoint URL. - - **API Key**: Your unique API key. Enter the key provided by your API provider. - - **API RPM**: The allowed requests per minute for your API. Enter the value appropriate to your API plan. - -4. After entering all the required information, click the '+' button to add the new model to LiteLLM. - -For more information on the specific providers and advanced settings, consult the [LiteLLM Providers Documentation](https://litellm.vercel.app/docs/providers). - -## Image Generation with AUTOMATIC1111 API - -Open WebUI now supports image generation through the AUTOMATIC1111 API. To set this up, follow these steps: - -### Initial Setup - -1.
Ensure that you have AUTOMATIC1111 installed. -2. Launch the WebUI with additional flags to enable API access: - ``` - ./webui.sh --api --listen - ``` - For Docker installations, use the `--listen` flag to allow connections outside of localhost. - -### Configuring Open WebUI - -1. In Open WebUI, navigate to Settings > Images. -2. In the API URL field, enter the address where AUTOMATIC1111's API is accessible, following this format: - ``` - http://<address>:7860 - ``` - If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, replace `<address>` with `host.docker.internal`. - -Please note that, as of now, only the AUTOMATIC1111 API is supported for image generation within Open WebUI. \ No newline at end of file diff --git a/docs/tutorial/litellm.md b/docs/tutorial/litellm.md new file mode 100644 index 00000000..5c2d6372 --- /dev/null +++ b/docs/tutorial/litellm.md @@ -0,0 +1,15 @@ +# LiteLLM Config + +LiteLLM supports a variety of APIs, both OpenAI-compatible and others. To integrate a new API model, follow these instructions: + +1. Go to the Settings > Models > LiteLLM model management interface. +2. In 'Simple' mode, you will only see the option to enter a **Model**. +3. For additional configuration options, click on the 'Simple' toggle to switch to 'Advanced' mode. Here you can enter: + - **Model Name**: The name of the model as you want it to appear in the models list. + - **API Base URL**: The base URL for your API provider. This field can usually be left blank unless your provider specifies a custom endpoint URL. + - **API Key**: Your unique API key. Enter the key provided by your API provider. + - **API RPM**: The allowed requests per minute for your API. Enter the value appropriate to your API plan. + +4. After entering all the required information, click the '+' button to add the new model to LiteLLM.
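For reference, the 'Advanced' fields above correspond to an entry in a LiteLLM proxy `config.yaml`. The sketch below uses LiteLLM's documented `model_list` format; the model name, base URL, and key are placeholders, not values from this guide:

```yaml
model_list:
  - model_name: my-model                    # Model Name shown in the models list
    litellm_params:
      model: openai/gpt-3.5-turbo           # provider/model identifier
      api_base: https://api.example.com/v1  # API Base URL (often optional)
      api_key: sk-placeholder-key           # API Key from your provider
      rpm: 60                               # API RPM allowed by your plan
```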
+ +For more information on the specific providers and advanced settings, consult the [LiteLLM Providers Documentation](https://litellm.vercel.app/docs/providers). \ No newline at end of file From d1d89b180801f08c132f35be968e015f3865529f Mon Sep 17 00:00:00 2001 From: Justin Hayes Date: Sun, 25 Feb 2024 18:24:57 -0500 Subject: [PATCH 3/3] Docker `localhost` FAQ --- docs/faq.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/docs/faq.md b/docs/faq.md index 0c8f4a24..00274aeb 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -9,4 +9,12 @@ title: "📋 FAQ" **A:** We require you to sign up to become the admin user for enhanced security. This ensures that if the Open WebUI is ever exposed to external access, your data remains secure. It's important to note that everything is kept local. We do not collect your data. When you sign up, all information stays within your server and never leaves your device. Your privacy and security are our top priorities, ensuring that your data remains under your control at all times. +**Q: Why can't my Docker container connect to services on the host using `localhost`?** + +**A:** Inside a Docker container, `localhost` refers to the container itself, not the host machine. To connect from your container to services running on the host, use the DNS name `host.docker.internal` instead of `localhost`. Docker resolves this name to the host machine, so services running there are reachable from inside the container. + +**Q: How do I make my host's services accessible to Docker containers?** + +**A:** To make services running on the host accessible to Docker containers, configure them to listen on all network interfaces by binding to the address `0.0.0.0` instead of `127.0.0.1`, which accepts connections from the local machine only.
This configuration allows the services to accept connections from any IP address, including Docker containers. It's important to be aware of the security implications of this setup, especially when operating in environments with potential external access. Implementing appropriate security measures, such as firewalls and authentication, can help mitigate risks. + If you have any further questions or concerns, please don't hesitate to reach out! 🛡️
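The difference between the two bind addresses in the FAQ above can be seen directly with a few lines of Python (a self-contained sketch; port `0` simply asks the OS for any free port):

```python
import socket

def bound_address(host: str) -> str:
    """Bind a TCP socket to the given interface and return the bound address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0: let the OS pick any free port
    addr = s.getsockname()[0]
    s.close()
    return addr

# A socket bound to 127.0.0.1 is reachable only from the host itself;
# one bound to 0.0.0.0 accepts connections on every interface,
# including Docker's bridge network.
print(bound_address("127.0.0.1"))  # 127.0.0.1
print(bound_address("0.0.0.0"))    # 0.0.0.0
```

Note also that on Linux, `host.docker.internal` is not defined inside containers by default; recent Docker releases let you opt in with `docker run --add-host=host.docker.internal:host-gateway`.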