Actions: ggerganov/llama.cpp

CI

11,692 workflow runs

Revert "llama : add Falcon3 support (#10864)" (#10876)
CI #17764: Commit 4da69d1 pushed by slaren
December 18, 2024 00:36 39m 14s master
Revert "Add Falcon3 model support"
CI #17763: Pull request #10876 opened by slaren
December 17, 2024 22:31 46m 28s revert-10864-falcon3_integration
Use model->gguf_kv for loading the template instead of using the C API.
CI #17762: Commit d62b532 pushed by slaren
December 17, 2024 22:24 53m 33s master
llama: Ensure KV cache is fully defragmented.
CI #17758: Pull request #10873 opened by jessegross
December 17, 2024 20:46 43m 48s jessegross:kv_defrag
server : add "tokens" output
CI #17756: Pull request #10853 synchronize by ggerganov
December 17, 2024 19:44 1h 50m 42s gg/server-content-tokens
Use model->gguf_kv for loading the template instead of using the C API.
CI #17755: Pull request #10868 synchronize by dranger003
December 17, 2024 19:00 2h 31m 12s dranger003:master
Improve progress bar
CI #17754: Pull request #10821 synchronize by ericcurtin
December 17, 2024 19:00 1h 40m 38s ericcurtin:progress-bar
tests: add tests for GGUF (#10830)
CI #17753: Commit 081b29b pushed by JohannesGaessler
December 17, 2024 18:09 3h 16m 9s master
Use model->gguf_kv for loading the template instead of using the C API.
CI #17752: Pull request #10868 synchronize by dranger003
December 17, 2024 17:23 1h 37m 9s dranger003:master
sync : ggml
CI #17751: Commit 5437d4a pushed by ggerganov
December 17, 2024 16:36 3h 51m 53s master
ggml : update ggml_backend_cpu_device_supports_op (#10867)
CI #17750: Commit 0006f5a pushed by ggerganov
December 17, 2024 16:35 3h 44m 12s master
Improve progress bar
CI #17749: Pull request #10821 synchronize by ericcurtin
December 17, 2024 16:10 1h 17m 13s ericcurtin:progress-bar
ggml : update ggml_backend_cpu_device_supports_op
CI #17748: Pull request #10867 synchronize by ggerganov
December 17, 2024 16:09 54m 56s gg/cpu-fix-cpy-iq
ggml : update ggml_backend_cpu_device_supports_op
CI #17746: Pull request #10867 synchronize by ggerganov
December 17, 2024 16:05 4m 1s gg/cpu-fix-cpy-iq
server : fill usage info in embeddings and rerank responses (#10852)
CI #17745: Commit 05c3a44 pushed by ggerganov
December 17, 2024 16:00 2h 55m 22s master
ggml : update ggml_backend_cpu_device_supports_op
CI #17744: Pull request #10867 opened by ggerganov
December 17, 2024 15:57 8m 57s gg/cpu-fix-cpy-iq
llama : add Falcon3 support (#10864)
CI #17742: Commit 382bc7f pushed by ggerganov
December 17, 2024 15:24 1h 7m 34s master
tts : add OuteTTS support
CI #17741: Pull request #10784 synchronize by ggerganov
December 17, 2024 14:35 26m 38s gg/tts-add-outetts