Ask an LLM directly from your terminal, written in Rust.
You need to set up Ollama first.
```sh
# install ollama
curl -fsSL https://ollama.com/install.sh | sh

# start the ollama server
# Note: if you installed Ollama with the command above, a server is already
# running at http://127.0.0.1:11434, so you do not need to start another one.
ollama serve

# pull a model from ollama, e.g. llama3.1:8b
ollama pull llama3.1:8b
```
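To verify that the server is reachable before using ask-rs, you can query Ollama's HTTP API; the `/api/tags` endpoint lists the models you have pulled:

```sh
# should return a JSON list of local models, including llama3.1:8b
curl http://127.0.0.1:11434/api/tags
```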
You also need the Rust toolchain installed.
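If you do not have Rust yet, the standard way to install it is via rustup, which also provides cargo:

```sh
# install rustup (the Rust toolchain installer)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```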
```sh
# enter the project dir and execute
cargo install --path .
```
ask-rs "hello, who are you ?"
ls . | ask-rs "how many files in current dir ?"
ask-rs "write me a simple python program" -c
ask-rs -s "why the sky is blue ?"
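As the `ls` example shows, text piped on stdin is passed to the model along with your prompt, so any command output works. A hypothetical example (the file path is illustrative):

```sh
# pipe a source file to the model and ask about it
cat src/main.rs | ask-rs "explain what this code does"
```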
The config file is located at ~/.config/ask-rs/config.toml:
```toml
[ollama]
host = "http://127.0.0.1"
port = 11434
model = "llama3.1:8b"
```
Some examples are borrowed from ollama-rs.
Inspired by shell-ask.