Issues: ggerganov/llama.cpp
examples : add configuration presets
Labels: documentation, enhancement, examples, good first issue, help wanted
#10932 opened Dec 21, 2024 by ggerganov (1 of 6 tasks)
Misc. bug: All llama executables exit immediately without console output
Labels: bug-unconfirmed
#10929 opened Dec 21, 2024 by Ikaron
OpenAI compatible response for /models has empty id and empty name
Labels: bug-unconfirmed
#10924 opened Dec 20, 2024 by JeroenAdam
Compile bug: macOS Vulkan build fails
Labels: bug, build, Vulkan
#10923 opened Dec 20, 2024 by soerenkampschroer
Compile bug: iOS version able to build but not able to run
Labels: bug-unconfirmed
#10922 opened Dec 20, 2024 by Animaxx
Eval bug: gte-Qwen2 produces non-homogenous embedding vectors
Labels: bug-unconfirmed
#10921 opened Dec 20, 2024 by bringfido-adams
Misc. bug: llama-server throws "Unsupported param: tools"
Labels: bug-unconfirmed
#10920 opened Dec 20, 2024 by hsm207
Feature Request: (Server UI) Use remark for markdown rendering
Labels: enhancement, good first issue, server/webui
#10915 opened Dec 20, 2024 by ngxson (4 tasks done)
Extra newline and other tokens being produced in between paragraphs - llama-server
Labels: bug-unconfirmed
#10914 opened Dec 20, 2024 by DocShotgun
Issues with LLaMA Binaries – Incomplete or Corrupted Downloads
#10908 opened Dec 20, 2024 by gitterbug131234
Feature Request: Multiple prompts for prompt caching
Labels: enhancement
#10904 opened Dec 19, 2024 by firelex (4 tasks done)
Feature Request: SIMD on s390x using Vector Facility (-mzvector)
Labels: enhancement
#10888 opened Dec 18, 2024 by taronaeo (4 tasks done)
Feature Request: support "encoding_format": "base64" in the */embeddings endpoints
Labels: enhancement, good first issue, server/api, server
#10887 opened Dec 18, 2024 by ggerganov (4 tasks done)
Feature Request: Add support for SmolVLM
Labels: enhancement
#10877 opened Dec 17, 2024 by iacore (4 tasks done)
Misc. bug: [SERVER] Multiple slots, generation speed is degraded after each generation/slot used
Labels: bug-unconfirmed
#10860 opened Dec 17, 2024 by ExtReMLapin
Misc. bug: llama-bench SEGFAULTS w/ SYCL/HIP backend, however llama-cli seems to work
Labels: bug-unconfirmed
#10850 opened Dec 16, 2024 by lhl
Compile bug: Compiling on Maxwell architecture 52 cuda12.7
Labels: bug-unconfirmed
#10849 opened Dec 16, 2024 by envolution
Feature Request: Q6_0 quant
Labels: enhancement
#10848 opened Dec 16, 2024 by Nexesenex (4 tasks done)
Eval bug: ggml_metal_encode_node: error: unsupported op 'IM2COL'
Labels: bug-unconfirmed
#10845 opened Dec 16, 2024 by beginor
Eval bug: Qwen2-VL Hallucinates image content on Vulkan backend
Labels: bug-unconfirmed
#10843 opened Dec 15, 2024 by stduhpf
Feature Request: Add support for the WePOINTS/POINTS1.5 model
Labels: enhancement
#10834 opened Dec 15, 2024 by rcbevans (4 tasks done)