NativeMind: Your fully private, open-source, on-device AI assistant (TypeScript, updated Jul 23, 2025)
React Native Apple LLM plugin using Foundation Models
The AI agent script CLI for Programmable Prompt Engine.
OfflineAI is an AI assistant that operates entirely offline, using machine learning to perform tasks defined in the provided code. It is built on two models from Mistral AI.
HeyGem — Your AI face, made free
Demo project showcasing the integration and usage of Chrome’s built-in Gemini Nano AI through the window.ai interface.
A voice assistant with local LLM as a backend
🦙 chat-o-llama: A lightweight, modern web interface for AI conversations with support for both Ollama and llama.cpp backends. Features persistent conversation management, real-time backend switching, intelligent context compression, and a clean responsive UI.
AetherShell is an AI-native Linux shell assistant powered by local LLMs. It converts natural language into secure, offline shell commands using Mistral and llama.cpp — ideal for developers, sysadmins, and automation enthusiasts.
AronaOS is an offline AI-powered personal assistant that helps you manage tasks, set reminders, and recall context-aware conversations. Built with Python, Flask, and local LLMs like Phi-3 Mini, it's designed for both privacy and productivity.
Mobile-first frontend for TaskWizard — a smart AI-powered task breakdown planner. Built with Expo + React Native, connecting to a FastAPI + Ollama backend to generate actionable step-by-step roadmaps from any goal.
Efficient on-device offline AI model inference using MediaPipe with optimized model screening.
Conversational AI, local, low-latency voice assistant for Raspberry Pi 5 with LED, gesture control, and streaming LLM replies.
A real-time offline voice-to-voice AI assistant built for Raspberry Pi
A private, local RAG (Retrieval-Augmented Generation) system using Flowise, Ollama, and open-source LLMs to chat with your documents securely and offline.
A privacy-first VS Code extension that integrates with any local LLM through Ollama. Chat with AI models directly in your editor without sending data to the cloud.
Noto.ai is a local AI-powered PDF assistant that lets you chat with documents, ask questions, and get instant summaries—all offline using LLaMA 3 via Ollama. Built with Python and Kivy, it offers a clean desktop interface for students, researchers, and professionals who want to extract insights from complex PDFs efficiently and privately.
AI OCR Tool | Webcam & Image Text Recognition with Astra | Offline Summarization
The Ascend Institute, by StatikFinTech, LLC (ticker: SFTi, coming soon). Building tools and documentation that challenge the orgs and governments behind paywalls and control systems. GremlinGPT is our chaotic first creation: learning, building, and evolving into AscendAI as it survives testing and bonds with its user. Not just a wrapper, but a battle map with agency.
Official site for OmniBot - Run LLMs natively & privately in your browser
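Many of the projects above (chat-o-llama, TaskWizard, the Flowise RAG system, the VS Code extension, Noto.ai) share one core pattern: sending a prompt to an Ollama server running on localhost, so no data leaves the machine. A minimal sketch of that pattern, assuming Ollama's default endpoint (`http://localhost:11434/api/chat`) and an example model name (`llama3`) that you would substitute with whatever model you have pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply instead of a token stream
    }

def chat(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server; nothing leaves the machine."""
    data = json.dumps(build_chat_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]

# Usage (requires `ollama serve` running and the model pulled, e.g. `ollama pull llama3`):
#   reply = chat("llama3", "Summarize why on-device inference protects privacy.")
```

Because the whole exchange happens over localhost, this is the privacy model nearly every repository in this topic advertises: the model weights, the prompt, and the reply all stay on the device.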