Not able to install the llama3.2 model #514
Unanswered
PriyeshGit asked this question in Q&A
Replies: 1 comment
Ask this in the Ollama GitHub repo; this has nothing to do with Perplexica.
The original question:
After installing Ollama on my PC, when I try to install the llama3.2 model I always get the same error. The post-install message says:
Welcome to Ollama!
Run your first model:
ollama run llama3.2
PS C:\Windows\System32> ollama run llama3.2
Error: something went wrong, please see the ollama server logs for details
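One way to get more detail on an error like this is to enable Ollama's debug logging (the OLLAMA_DEBUG setting visible in the server config below) before retrying. A minimal sketch in PowerShell, assuming the server is started by hand from a terminal rather than by the tray app:

# Quit the running Ollama tray app first, then start the server
# with debug logging enabled for this PowerShell session only.
$env:OLLAMA_DEBUG = "1"
ollama serve

# In a second terminal, retry the download:
ollama run llama3.2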
Here is what my server log looks like:
2024/12/10 17:17:36 routes.go:1195: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY:https://proxy.example.com:8080 HTTP_PROXY:http://proxy.example.com:8080 NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Path\To\Models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-12-10T17:17:36.214+09:00 level=INFO source=images.go:753 msg="total blobs: 0"
time=2024-12-10T17:17:36.214+09:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-12-10T17:17:36.218+09:00 level=INFO source=routes.go:1246 msg="Listening on [::]:11434 (version 0.5.1)"
time=2024-12-10T17:17:36.222+09:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[rocm cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-12-10T17:17:36.222+09:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1
time=2024-12-10T17:17:36.223+09:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=10 threads=14
time=2024-12-10T17:17:36.246+09:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
time=2024-12-10T17:17:36.246+09:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.5 GiB" available="12.5 GiB"
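Two details in this log may be relevant, though this is an observation rather than a confirmed diagnosis: "total blobs: 0" means no model data has been downloaded yet, and HTTP_PROXY/HTTPS_PROXY are set, so the pull is routed through a proxy. A quick test is to clear those variables for the current PowerShell session and retry; note this only affects the server if it is then restarted from that same session:

# Inspect the proxy settings the current session sees
$env:HTTP_PROXY
$env:HTTPS_PROXY

# Clear them for this session only, restart the server, then retry
Remove-Item Env:HTTP_PROXY -ErrorAction SilentlyContinue
Remove-Item Env:HTTPS_PROXY -ErrorAction SilentlyContinue
ollama serve
# ...then, in a second terminal:
ollama run llama3.2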