Lumos: Enhancing Data Visualizations with LLM-based Natural Language Interfaces
You will need Docker (Docker Engine & Docker Compose) and an OpenAI API key.
- Create a .env file containing the OpenAI API key stored in the variable OPENAI_API_KEY
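A minimal .env file has a single line; the key value below is a placeholder, not a real key:

```
OPENAI_API_KEY=sk-your-key-here
```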
- Open a terminal or PowerShell
- Build container
  - Windows:
    docker-compose build
  - Linux:
    docker compose build
- Run system
  - Windows:
    docker-compose up
  - Linux:
    docker compose up
- Initialize Backend
  - cd into the scripts folder and run
    npm install
    followed by
    npm run datasets
    to import the datasets and
    npm run usecase
    to import the use case LLM
- Open Frontend
If you want to run the tests yourself:
- cd into the scripts folder and run
  npm run tests
  to import the tests, and
  npm run runtests
  to run ALL tests. This may require a high balance on your OpenAI account. Alternatively, you can execute single tests in the UI Tests section. Run
  npm run extract
  to extract all tests / test results to a JSON file.
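To sanity-check an exported file, a small helper like the one below can be used. The file name and the exact structure of the export are assumptions here; the helper only verifies that the file is valid JSON and reports its size:

```python
import json

def summarize_results(path):
    """Print how many top-level entries an exported JSON file contains.

    The structure of the Lumos export is an assumption; this only
    checks that the file parses as JSON and reports its length.
    """
    with open(path) as f:
        data = json.load(f)
    # Lists and dicts have a meaningful length; anything else is one entry.
    count = len(data) if isinstance(data, (list, dict)) else 1
    print(f"{path}: {count} top-level entries")
    return count
```

For example, summarize_results("results.json") prints the entry count for a hypothetical export file of that name.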
When you interact with the chat, the message thread is persisted in the OpenAI backend, but not yet in the Lumos system. If you re-open the frontend, you therefore see a blank chat window, while your old messages still exist on OpenAI. As a result, the LLM might return confusing information, such as control information containing a color scheme from a previous session. To start with a fresh thread, you need to remove the LLM and the related prompt via API calls.
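For reference, OpenAI's Assistants API exposes DELETE endpoints for assistants and threads. A cleanup could look like the sketch below; ASSISTANT_ID and THREAD_ID are placeholders you have to look up for your installation, and which objects Lumos actually creates is an assumption:

```shell
# Delete the assistant (the "LLM"); replace ASSISTANT_ID with your ID.
curl -X DELETE "https://api.openai.com/v1/assistants/ASSISTANT_ID" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"

# Delete the conversation thread; replace THREAD_ID with your ID.
curl -X DELETE "https://api.openai.com/v1/threads/THREAD_ID" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"
```

Both calls require the same OPENAI_API_KEY that is configured in the .env file.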