
Commit

Fix readme typo (#12)
erjanmx authored Aug 8, 2023
1 parent a275f9c commit 2075cd3
Showing 1 changed file with 2 additions and 2 deletions.
README.md (4 changes: 2 additions & 2 deletions)
@@ -75,7 +75,7 @@ cmake --build . --config Release
 
 #### Option 1: Download pre-quantized MiniGPT4 model
 
-Pre-quantized models are avaliable on Hugging Face ~ [7B](https://huggingface.co/datasets/maknee/minigpt4-7b-ggml/tree/main) or [13B](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/tree/main).
+Pre-quantized models are available on Hugging Face ~ [7B](https://huggingface.co/datasets/maknee/minigpt4-7b-ggml/tree/main) or [13B](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/tree/main).
 
 Recommended for reliable results, but slow inference speed: [minigpt4-13B-f16.bin](https://huggingface.co/datasets/maknee/minigpt4-13b-ggml/blob/main/minigpt4-13B-f16.bin)
@@ -129,7 +129,7 @@ python convert.py ~/Downloads/pretrained_minigpt4.pth --outtype f16
 
 #### Option 1: Download pre-quantized vicuna-v0 model
 
-Pre-quantized models are avaliable on [Hugging Face](https://huggingface.co/datasets/maknee/ggml-vicuna-v0-quantized/tree/main)
+Pre-quantized models are available on [Hugging Face](https://huggingface.co/datasets/maknee/ggml-vicuna-v0-quantized/tree/main)
 
 Recommended for reliable results and decent inference speed: [ggml-vicuna-13B-v0-q5_k.bin](https://huggingface.co/datasets/maknee/ggml-vicuna-v0-quantized/blob/main/ggml-vicuna-13B-v0-q5_k.bin)
