Issues: Mozilla-Ocho/llamafile
#560 Bug: Segmentation fault re-running after installing NVIDIA CUDA. [bug, medium severity] (opened Sep 5, 2024 by 4kbyte)
#548 Feature Request: Add support for Raspberry Pi AI Kit [request to lend support] (opened Aug 20, 2024 by beingminimal)
#537 Bug: malloc: *** error for object 0x600003310600: pointer being freed was not allocated (opened Aug 13, 2024 by groovecoder)
#533 Bug: The token generation speed is slower compared to the upstream llama.cpp project [bug, medium severity] (opened Aug 13, 2024 by BIGPPWONG)
#532 Bug: unknown argument: --threads-batch-draft [bug, medium severity] (opened Aug 9, 2024 by moisestohias)
#516 Bug: llama 3.1 and variants fail with error "wrong number of tensors; expected 292, got 291" [bug, high severity] (opened Jul 30, 2024 by camAtGitHub)
#515 Feature Request: Support for microsoft/Phi-3-vision-128k-instruct [enhancement] (opened Jul 30, 2024 by azhuvath)
#512 Bug: Unable to load Mixtral-8x7B-Instruct-v0.1-GGUF on Amazon Linux with AMD EPYC 7R13 [bug, critical severity] (opened Jul 28, 2024 by rpchastain)
#511 UX Request: Update readme to mention "llamafile -m foo.llamafile" as an option [enhancement] (opened Jul 27, 2024 by mofosyne)
#509 Feature Request: Add support for GLM4-9B and related models [enhancement] (opened Jul 26, 2024 by VarLad)
#503 Bug: low CPU usage on AWS Graviton4 compared to ollama [bug, low severity] (opened Jul 24, 2024 by nlothian)
#493 Bug: unknown pre-tokenizer type: "mistral-bpe" when running the new Mistral-Nemo model [enhancement, medium severity] (opened Jul 19, 2024 by wingenlit)
#487 Bug: --gpu option cannot work on win10, not friendly to WIN. [awaiting response] (opened Jul 6, 2024 by liuye1992)
#459 Hugging Face repository does not show the version of the llamafile you are downloading [upstream bug] (opened May 29, 2024 by norteo)