readme edited (#42)
* readme edited

* resolved commit

---------

Co-authored-by: Aram Leblebjian <aramleblebjian@Arams-MacBook-Pro.local>
aramo2000 and Aram Leblebjian authored May 11, 2024
1 parent c30e355 commit 2ba4d99
Showing 1 changed file with 12 additions and 6 deletions.
18 changes: 12 additions & 6 deletions README.md
@@ -1,8 +1,14 @@
# LLM Service

Run your LLM service locally

# Steps to Run the LLM Service Locally

* First, download the GGUF model file `llama-2-13b-chat.Q5_K_S.gguf` from https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF (see the shell sketch after this list)
* Move the downloaded model to `model/llama-2-13b-chat.Q5_K_S.gguf`
* To develop the frontend locally:
1. Navigate to the frontend directory: `cd frontend`
2. Install pnpm: `npm install -g pnpm`
3. Install the dependencies: `pnpm install`
4. Start the application in dev mode: `pnpm dev`
5. Create a production build: `pnpm build`
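
Taken together, the local setup might look like the following shell sketch. The Hugging Face repository and filename come from the steps above; the use of `wget` and the exact `resolve/main` download path are assumptions, so adjust them to your environment.

```sh
# Download the quantized model into model/ (the exact download path is an assumption)
mkdir -p model
wget -O model/llama-2-13b-chat.Q5_K_S.gguf \
  https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/resolve/main/llama-2-13b-chat.Q5_K_S.gguf

# Set up and run the frontend in dev mode
cd frontend
npm install -g pnpm
pnpm install
pnpm dev        # or: pnpm build  for a production build
```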

# To run the backend
* Run the Docker containers: `docker-compose up -d`
* After making sure that the database and the frontend are running, run `LlmServiceApplication` (see the sketch below)
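
A minimal sketch of the backend startup, assuming `LlmServiceApplication` is a Spring Boot application launched with the Maven wrapper; if the project uses Gradle or you start it from an IDE, substitute the equivalent command.

```sh
# Start the supporting containers (database, frontend) in the background
docker-compose up -d
docker-compose ps    # confirm the containers are up before continuing

# Launch the backend; the Maven wrapper / Spring Boot setup is an assumption
./mvnw spring-boot:run
```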
