- Provide a usable user interface to interact with SLMs (small language models) locally, on-device
- Allow users to add/remove SLMs (GGUF models) and modify their system prompts or inference parameters (temperature, min-p)
- Allow users to create specific downstream tasks quickly and use SLMs to generate responses
- Simple, easy-to-understand, and extensible codebase
- Clone the repository with its submodule originating from llama.cpp:

  ```bash
  git clone --depth=1 https://github.com/shubham0204/SmolChat-Android
  cd SmolChat-Android
  git submodule update --init --recursive
  ```
- Android Studio starts building the project automatically. If not, select Build > Rebuild Project to start a project build.
- After a successful project build, connect an Android device to your system. Once connected, the name of the device should be visible in the top menu bar of Android Studio.
- The application uses llama.cpp to load and execute GGUF models. As llama.cpp is written in pure C/C++, it is easy to compile on Android-based targets using the NDK.

  As a rough illustration, the NDK build can be wired into the module's Gradle configuration so that llama.cpp and the JNI sources are compiled alongside the app.
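  The snippet below is a minimal sketch in Gradle's Kotlin DSL; the CMakeLists path, the ABI filter, and the overall layout are assumptions, not the project's exact build script.

  ```kotlin
  // build.gradle.kts (module) -- a hedged sketch, not the project's actual config.
  android {
      externalNativeBuild {
          cmake {
              // CMakeLists.txt is assumed to add the llama.cpp submodule and the
              // JNI sources (llm_inference.cpp, smollm.cpp) as native targets.
              path = file("src/main/cpp/CMakeLists.txt")
          }
      }
      defaultConfig {
          ndk {
              // Restrict the build to 64-bit ARM, the common case for modern devices.
              abiFilters += "arm64-v8a"
          }
      }
  }
  ```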
- The `smollm` module uses a `llm_inference.cpp` class which interacts with llama.cpp's C-style API to execute the GGUF model, and a JNI binding `smollm.cpp`. Check the C++ source files here. On the Kotlin side, the `SmolLM` class provides the required methods to interact with the JNI (C++ side) bindings, along the lines of the sketch below.
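  To make the JNI boundary concrete, this sketch shows how a Kotlin class can declare `external` functions backed by a native library such as the one built from `smollm.cpp`. The class, method names, and signatures are illustrative assumptions, not the actual `SmolLM` API.

  ```kotlin
  // A minimal Kotlin/JNI wrapper sketch; names and signatures are assumed,
  // not the actual SmolLM class.
  class NativeLLM {
      companion object {
          init {
              // Load the shared library built from the C++ sources (e.g. libsmollm.so).
              System.loadLibrary("smollm")
          }
      }

      // Opaque pointer to the native LLMInference instance, stored as a Long.
      private var nativeHandle: Long = 0L

      // Each external function is implemented in C++ and resolved via JNI.
      private external fun loadModel(modelPath: String, minP: Float, temperature: Float, storeChats: Boolean): Long
      private external fun addChatMessage(handle: Long, message: String, role: String)
      private external fun getResponse(handle: Long, query: String): String
      private external fun unloadModel(handle: Long)

      fun load(modelPath: String, minP: Float, temperature: Float, storeChats: Boolean) {
          nativeHandle = loadModel(modelPath, minP, temperature, storeChats)
      }

      fun addMessage(message: String, role: String) = addChatMessage(nativeHandle, message, role)

      fun respond(query: String): String = getResponse(nativeHandle, query)

      fun close() = unloadModel(nativeHandle)
  }
  ```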
- The `app` module contains the application logic and UI code. Whenever a new chat is opened, the app instantiates the `SmolLM` class and provides it the model file path, which is stored by the `LLMModel` entity in ObjectBox. Next, the app adds messages with roles `user` and `system` to the chat by retrieving them from the database and using `LLMInference::add_chat_message`, roughly as sketched below.
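  The flow above could look roughly like the following; the entity fields and wrapper methods are assumptions layered on the hypothetical `NativeLLM` wrapper sketched earlier.

  ```kotlin
  import io.objectbox.Box

  // An assumed chat-opening flow; Chat, LLMModel fields, and method names
  // are illustrative, not the app's exact code.
  fun onChatOpened(chat: Chat, modelBox: Box<LLMModel>, llm: NativeLLM) {
      // Read the GGUF file path persisted with the LLMModel entity in ObjectBox.
      val model = modelBox.get(chat.llmModelId)
      llm.load(model.path, minP = 0.05f, temperature = 0.8f, storeChats = true)
      // Replay stored system/user messages so the native chat context matches the DB.
      for (message in chat.messages) {
          llm.addMessage(message.text, message.role) // role is "user" or "system"
      }
  }
  ```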
- For tasks, the messages are not persisted; we inform `LLMInference` of this by passing `store_chats=false` to `LLMInference::load_model`.
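  On the Kotlin side of the hypothetical wrapper above, that flag would simply be forwarded when loading the model for a task:

  ```kotlin
  // Load the model for a one-off task without persisting chat messages.
  // storeChats mirrors LLMInference::load_model's store_chats flag; the
  // Kotlin wrapper and Task type are assumptions.
  fun prepareTask(task: Task, llm: NativeLLM) {
      llm.load(task.modelPath, minP = 0.05f, temperature = 0.8f, storeChats = false)
  }
  ```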
- ggerganov/llama.cpp is a pure C/C++ framework to execute machine learning models on multiple execution backends. It provides a primitive C-style API to interact with LLMs converted to the GGUF format native to ggml/llama.cpp. The app uses JNI bindings to interact with a small class `smollm.cpp` which uses llama.cpp to load and execute GGUF models.
- ObjectBox is an on-device, high-performance NoSQL database with bindings available in multiple languages. The app uses ObjectBox to store the model, chat, and message metadata; a sketch of a possible entity follows.
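  For illustration, a message record stored in ObjectBox could be modeled as below; the entity name and fields are assumptions, not the app's actual schema.

  ```kotlin
  import io.objectbox.annotation.Entity
  import io.objectbox.annotation.Id

  // A hypothetical ObjectBox entity for chat-message metadata.
  @Entity
  data class ChatMessage(
      @Id var id: Long = 0,      // ObjectBox assigns the ID on put()
      var chatId: Long = 0,      // the chat this message belongs to
      var role: String = "user", // "user", "assistant", or "system"
      var text: String = ""      // the message content
  )
  ```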
- noties/Markwon is a Markdown rendering library for Android. The app uses Markwon and Prism4j (for code syntax highlighting) to render Markdown responses from the SLMs.
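  Rendering a response with Markwon takes little code; the snippet below is a basic example using Markwon's public API (the app's actual setup, including the Prism4j syntax-highlighting plugin, is more elaborate).

  ```kotlin
  import android.widget.TextView
  import io.noties.markwon.Markwon

  // Render a model's Markdown response into a TextView.
  fun renderResponse(textView: TextView, markdownResponse: String) {
      val markwon = Markwon.create(textView.context)
      markwon.setMarkdown(textView, markdownResponse)
  }
  ```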
The following features/tasks are planned for future releases of the app:
- Assign names to chats automatically (just like ChatGPT and Claude)
- Add a search bar to the navigation drawer to search for messages within chats using ObjectBox's query capabilities
- Add a background service which uses Bluetooth/HTTP/WiFi to communicate with a desktop application to send queries from the desktop to the mobile device for inference
- Enable auto-scroll when generating a partial response in `ChatActivity`
- Measure RAM consumption
- Add app shortcuts for tasks
- Integrate Android-Doc-QA for on-device RAG-based question answering from documents
- Check if llama.cpp can be compiled to use Vulkan for inference on Android devices (and use the mobile GPU)
- Check if multilingual GGUF models can be supported