fern/docs/pages/quickstart/quickstart.mdx (+15 lines changed: 15 additions & 0 deletions)
HF_TOKEN=<your_hf_token> docker-compose --profile llamacpp-cpu up
```

Replace `<your_hf_token>` with your actual Hugging Face token.
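As an alternative to prefixing every command with the token, `docker-compose` also loads variables from a `.env` file in the same directory as the compose file. A minimal sketch (the token value stays the placeholder `<your_hf_token>`; substitute your real token):

```sh
# Optional: store the token once in a .env file next to docker-compose.yaml;
# docker-compose reads it automatically on every invocation.
echo 'HF_TOKEN=<your_hf_token>' > .env
cat .env
# The profile can then be started without the inline prefix:
#   docker-compose --profile llamacpp-cpu up
```

Keep `.env` out of version control, since it holds a credential.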
#### 2. LlamaCPP CUDA

**Description:**
This profile runs the Private-GPT services locally using `llama-cpp` and Hugging Face models.
**Requirements:**
A **Hugging Face Token (HF_TOKEN)** is required for accessing Hugging Face models. Obtain your token following [this guide](/installation/getting-started/troubleshooting#downloading-gated-and-private-models).

**Run:**
Start the services with your Hugging Face token using pre-built images:

```sh
HF_TOKEN=<your_hf_token> docker-compose --profile llamacpp-cuda up
```

Replace `<your_hf_token>` with your actual Hugging Face token.
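Because this profile needs GPU access inside the container, it can help to confirm the NVIDIA driver is present before bringing the stack up. A minimal sketch (note that running CUDA workloads in Docker additionally requires the NVIDIA Container Toolkit, which this check does not cover):

```sh
# Check whether the NVIDIA driver is installed; nvidia-smi ships with it.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_status="available"
else
  gpu_status="missing"
fi
echo "NVIDIA driver: $gpu_status"
```

If the driver is missing, `docker-compose --profile llamacpp-cuda up` will start but the services will not see a GPU.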
## Building Locally
If you prefer to build Docker images locally, which is useful when making changes to the codebase or the Dockerfiles, follow these steps: