From 234f2ac2d39066fb3e356f274edf0d15dc5abc03 Mon Sep 17 00:00:00 2001
From: caprizone6
Date: Wed, 14 Aug 2024 19:19:22 -0500
Subject: [PATCH] update readme with llama3

---
 installation/linux/README.md   | 5 ++---
 installation/macOS/README.md   | 5 ++---
 installation/windows/README.md | 5 ++---
 3 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/installation/linux/README.md b/installation/linux/README.md
index e697750..601d555 100644
--- a/installation/linux/README.md
+++ b/installation/linux/README.md
@@ -63,9 +63,9 @@ We will install these two tools in the following steps:
 You can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 
 > [!CAUTION]
-> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama2.
+> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama3.
 
-Once you start Ollama, you have to pull Lllama2 model by running following command:
+Once you start Ollama, you have to pull the Llama3 model by running the following command:
 
 ```
 ollama pull llama3
@@ -85,7 +85,6 @@ The output should look like this:
 Total: 9.975889s
 ```
 
-If you get error, you can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 If it's taking more than a minute to run, your laptop might not have enough resources to run Ollama.
 
 ## MyGPT installation
diff --git a/installation/macOS/README.md b/installation/macOS/README.md
index 47527fc..e326340 100644
--- a/installation/macOS/README.md
+++ b/installation/macOS/README.md
@@ -63,9 +63,9 @@ We will install these required tools in the following steps:
 You can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 
 > [!CAUTION]
-> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama2.
+> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama3.
 
-Once you start Ollama, you have to pull Lllama2 model by running following command:
+Once you start Ollama, you have to pull the Llama3 model by running the following command:
 
 ```
 ollama pull llama3
@@ -85,7 +85,6 @@ The output should look like this:
 Total: 9.975889s
 ```
 
-If you get error, you can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 If it's taking more than a minute to run, your laptop might not have enough resources to run Ollama.
 
 ## MyGPT installation
diff --git a/installation/windows/README.md b/installation/windows/README.md
index 61ea776..3c34733 100644
--- a/installation/windows/README.md
+++ b/installation/windows/README.md
@@ -78,9 +78,9 @@ We will install these two tools in the following steps:
 You can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 
 > [!CAUTION]
-> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama2.
+> After installing Ollama, close any open Terminal/Command Prompt before you pull Llama3.
 
-Once you start Ollama, you have to pull Lllama2 model by running following command:
+Once you start Ollama, you have to pull the Llama3 model by running the following command:
 
 ```
 ollama pull llama3
@@ -100,7 +100,6 @@ The output should look like this:
 Total: 9.975889s
 ```
 
-If you get error, you can check if Ollama is running by visiting http://localhost:11434/ in your default browser.
 If it's taking more than a minute to run, your laptop might not have enough resources to run Ollama.
 
 ## MyGPT installation