This project builds a palette that empowers AAC (Augmentative and Alternative Communication) users to personalize it to their specific needs, enhancing their ability to communicate with others.
The front end of the project is built with Preact.
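For orientation, a palette cell in a Preact front end might look roughly like the minimal sketch below. The component name, props, and labels are hypothetical and only illustrate the kind of component involved; they are not the project's actual code.

```ts
import { h, render } from "preact";

// Hypothetical palette cell: props and labels are illustrative only.
interface CellProps {
  label: string;                         // text shown on the cell
  onActivate: (label: string) => void;   // callback when the user selects the cell
}

function PaletteCell({ label, onActivate }: CellProps) {
  return h("button", { onClick: () => onActivate(label) }, label);
}

// Example usage: render a single cell that logs its label when activated.
render(
  h(PaletteCell, { label: "hello", onActivate: (l) => console.log(l) }),
  document.body
);
```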
To work on the project, you need to install Node.js and npm for your operating system.
Then, clone the project from GitHub. Create a fork with your GitHub account, then enter the following in your command line (make sure to replace `your-username` with your username):

```
git clone https://github.com/your-username/adaptive-palette
```
From the root of the cloned project, enter the following in your command line to install dependencies:
```
npm ci
```
To start a local web server, run:
```
npm start
```
To start a local web server for development, where every change to the source code is watched and the site is redeployed automatically, run:

```
npm run dev
```
The website will be available at http://localhost:3000.
RAG (Retrieval-Augmented Generation) is an AI technique designed to improve the accuracy of generative models by incorporating factual knowledge from external sources. It requires loading that knowledge into a vector store, which is queried to provide relevant information to the language model as context.
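As a rough illustration of the retrieve-then-generate flow (not the project's actual implementation; the `VectorStore` interface, function names, and prompt format below are assumptions made for this sketch):

```ts
// Minimal RAG sketch, assuming a vector store that supports similarity search
// and an LLM client that accepts a plain-text prompt. All names are hypothetical.
interface VectorStore {
  similaritySearch(query: string, topK: number): Promise<string[]>;
}

interface LlmClient {
  generate(prompt: string): Promise<string>;
}

async function answerWithRag(
  question: string,
  store: VectorStore,
  llm: LlmClient
): Promise<string> {
  // 1. Retrieve the passages most relevant to the question.
  const passages = await store.similaritySearch(question, 3);

  // 2. Provide the retrieved passages to the model as context.
  const prompt =
    `Answer the question using only the context below.\n\n` +
    `Context:\n${passages.join("\n---\n")}\n\nQuestion: ${question}`;

  // 3. Generate an answer grounded in that context.
  return llm.generate(prompt);
}
```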
By default, RAG is turned off in the system: the `enableRag` flag is set to `false` in `config/config.ts`.
Follow these steps to complete a one-time setup to enable RAG in the system:
- Load a document into the vector store

  Use the `scripts/loadDocIntoVectorDb.js` script to populate the vector store. Run the following command from the project root directory (a concrete example appears after these steps):

  ```
  node scripts/loadDocIntoVectorDb.js [location-of-document] [target-dir-of-vector-db]
  ```
- Configure the application

  Update the `config/config.ts` file to specify the path to the vector store directory and set the `enableRag` flag to `true`:

  ```ts
  export const config = {
    // ... other configurations
    rag: {
      enableRag: true,
      vectorStoreDir: "[path-to-vector-db-directory]"
    }
    // ...
  };
  ```
  Note: `vectorStoreDir` defaults to `./vectorStore`. Modify the value to match where your vector store is located. When a relative path is used, it is relative to the project root directory.

- Restart the server

  Follow the instructions in the Start a Server section.
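For example, assuming a hypothetical document at `docs/user-guide.md` and keeping the default `./vectorStore` location (both paths are placeholders; substitute your own), the load step would be:

```
node scripts/loadDocIntoVectorDb.js docs/user-guide.md ./vectorStore
```

with `vectorStoreDir` in `config/config.ts` then left at, or set to, `./vectorStore`.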
To lint the source code, run:
```
npm run lint
```
To run tests, run:
```
npm test
```
The sub-folder `demos` contains code for a number of demonstrations. These are short examples. The `apps` folder contains more fully built-out application examples. See the respective READMEs for instructions on how to run the software.
- Ollama Chat Web-App: a chat application running on `localhost` that provides access to multiple LLMs using the Ollama localhost web service.
- Palette Generator Web-App: an application for generating and saving a palette using the Bliss gloss. Given a set of gloss words, BCI AV IDs, or SVG builder strings, the Bliss gloss is searched and a palette is generated from the matches found (see the sketch after this list).
- Ollama Chat Service Demo: a simple web-app that runs on `localhost` for sending queries to an Ollama chatbot service also running on `localhost`.
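To give a sense of what searching the Bliss gloss could involve, here is a rough sketch; the gloss data shape, field names, and matching logic are assumptions made for illustration, not the demo's actual code:

```ts
// Hypothetical gloss entry and lookup: field names and matching are illustrative only.
interface GlossEntry {
  bciAvId: number;      // BCI AV identifier for the symbol
  glosses: string[];    // gloss words associated with the symbol
}

// Collect the entries whose glosses match any of the requested words;
// the matches could then be laid out as cells in a generated palette.
function findMatches(words: string[], gloss: GlossEntry[]): GlossEntry[] {
  const wanted = new Set(words.map((w) => w.toLowerCase()));
  return gloss.filter((entry) =>
    entry.glosses.some((g) => wanted.has(g.toLowerCase()))
  );
}

// Example usage with made-up data.
const sampleGloss: GlossEntry[] = [
  { bciAvId: 11111, glosses: ["hello", "greeting"] },
  { bciAvId: 22222, glosses: ["water"] },
];
console.log(findMatches(["hello", "water"], sampleGloss));
```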
The adaptive-palette can be served as a production preview using Cloudflare Pages, specifically using the Git integration guide. You will need to have your own Cloudflare account to do this.
In the "Deployment details" for the preview, use the following for the "Build command" and "Build output directory" settings:
- Build command: `npm run build:client`
- Build output directory: `/dist/client`