# Home
*wzy edited this page Aug 16, 2023 · 7 revisions*
Welcome to the llama.cpp wiki!
## Installation

### Arch Linux

```sh
yay -S llama-cpp         # CPU build
yay -S llama-cpp-cuda    # CUDA build
yay -S llama-cpp-opencl  # OpenCL build
```
### Nix

```sh
nix run github:ggerganov/llama.cpp
nix run 'github:ggerganov/llama.cpp#opencl'
```
On NixOS, the flake's package can be made available system-wide:

```nix
{ config, pkgs, ... }:
{
  nixpkgs.config.packageOverrides = pkgs: {
    llama-cpp = (
      builtins.getFlake "github:ggerganov/llama.cpp"
    ).packages.${builtins.currentSystem}.default;
  };
  environment.systemPackages = with pkgs; [ llama-cpp ];
}
```
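Assuming the NixOS snippet above has been added to the system configuration (the `/etc/nixos/configuration.nix` path below is the conventional default, not something this page mandates), it is activated with the usual rebuild command:

```shell
# Assumes the module above is part of /etc/nixos/configuration.nix
sudo nixos-rebuild switch   # rebuild the system and put llama-cpp on the PATH
```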
### Termux

Wait for https://github.com/termux/termux-packages/pull/17457 to be merged, then:

```sh
apt install llama-cpp
# or
pacman -S llama-cpp
```
### Debian, Ubuntu (.deb)

Build and install a `.deb` package from source:

```sh
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...        # pass any build options at configure time
cmake --build build
cd build
cpack -G DEB
sudo dpkg -i *.deb
```
### RPM-based distributions (.rpm)

Build and install an `.rpm` package from source:

```sh
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...        # pass any build options at configure time
cmake --build build
cd build
cpack -G RPM
sudo rpm -i *.rpm
```
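After installing any of the packages above, a quick smoke test confirms the binary works. The executable name and model path here are assumptions — adjust them to whatever your package actually ships and wherever your downloaded model lives:

```shell
# Hypothetical names: the installed binary may be called main, llama, or llama-cpp,
# and you must supply a GGUF/GGML model you have downloaded yourself.
llama-cpp -m ~/models/llama-7b.gguf -p "Hello" -n 16
```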
Useful information for users that does not fit into the README:
- Home
- Feature Matrix
- GGML Tips & Tricks
- Chat Templating
- Metadata Override
- HuggingFace Model Card Metadata Interoperability Consideration
Information useful for maintainers and developers that does not fit into code comments:
Click on a badge to jump to its workflow. This is a general view of all the actions, so that we may notice more quickly if, and where, main-branch automation is broken.