Releases: OuadiElfarouki/llama.cpp
b2966
vulkan: add workaround for iterator boundary check to fix clang-cl de…
b2953
Tokenizer SPM fixes for phi-3 and llama-spm (#7375)
* Update brute force test: special tokens
* Fix added tokens:
  - Try to read 'added_tokens.json'.
  - Try to read 'tokenizer_config.json'.
  - Try to read 'tokenizer.json'.
* Fix special tokens rtrim
* server : fix test regexes

Co-authored-by: Georgi Gerganov <[email protected]>
b2843
llama-bench : add pp+tg test type (#7199)
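The pp+tg test type benchmarks prompt processing and text generation together in a single run, rather than as separate `-p` and `-n` tests. A minimal usage sketch, assuming the `-pg <pp,tg>` flag introduced by #7199 and a placeholder model path:

```shell
# Run a combined benchmark: process a 512-token prompt, then generate
# 128 tokens, reporting a single combined throughput figure.
# The model path below is a placeholder, not a file shipped with the repo.
./llama-bench -m ./models/model.gguf -pg 512,128
```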
b2825
TypoFix (#7162)
b2817
main : add --conversation / -cnv flag (#7108)
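The `--conversation` / `-cnv` flag puts the `main` example into an interactive chat loop instead of one-shot completion. A minimal usage sketch, assuming the flag from #7108 and a placeholder model path:

```shell
# Start an interactive conversation with the model; -cnv enables
# chat-style turn-taking in the main example.
# The model path below is a placeholder, not a file shipped with the repo.
./main -m ./models/model.gguf -cnv
```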
b2806
Further tidy on Android instructions README.md (#7077)
* Further tidy on Android instructions README.md: fixed some logic when following the readme directions
* Clean up redundant information: a new user arriving will see simple directions on the llama.cpp homepage
* Corrected punctuation: period after cmake, colon after termux
* Re-word for clarity: "method" seems more correct than "alternative" in this context
* Organized required packages per build type: building llama.cpp with the NDK on a PC doesn't require installing clang, cmake, git, or wget in termux
* README.md: corrected title
* Fix trailing whitespace
b2751
Replace "alternative" boolean operator in conditional compilation dir…
b2634
llama : fix attention layer count sanity check (#6550)
* llama : fix attention layer count sanity check
* llama : fix parentheses in attention layer count sanity check (there was otherwise a warning when compiling)

Co-authored-by: Francis Couture-Harpin <[email protected]>
b2624
sync : ggml
b2589
Add OpenChat, Alpaca, Vicuna chat templates (#6397)
* Add openchat chat template
* Add chat template test for openchat
* Add chat template for vicuna
* Add chat template for orca-vicuna
* Add EOS for vicuna templates
* Combine vicuna chat templates
* Add tests for openchat and vicuna chat templates
* Add chat template for alpaca
* Add separate template name for vicuna-orca
* Remove alpaca, match deepseek with jinja output
* Regenerate chat template test with add_generation_prompt
* Separate deepseek bos from system message
* Match openchat template with jinja output
* Remove BOS token from templates, unprefix openchat