A simple tool to anonymize LLM prompts.
Uses urchade/gliner_multi_pii-v1 for named entity recognition (NER).
Watch the demo to see Elara in action.
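For reference, the NER step on its own can be reproduced with the gliner Python package. The snippet below is a minimal sketch, assuming the package's documented `GLiNER.from_pretrained`/`predict_entities` interface; the label names are only illustrative (Elara reads its labels from labels.txt, as described at the end of this README).

```python
# Minimal sketch of the NER step, not Elara's actual code.
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1")

text = "Email Jane Doe at jane@example.com"
labels = ["person", "email", "phone number"]  # illustrative; Elara reads its labels from labels.txt

# Each entity is a dict with "text", "label", "start", "end", and "score".
for entity in model.predict_entities(text, labels, threshold=0.5):
    print(entity["text"], "->", entity["label"])
```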
- SvelteKit fullstack web app running on port 5173
- Python webserver to interact with the model, running on port 8000
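The Python webserver's job is essentially to expose that model over HTTP for the SvelteKit app. As a rough, hypothetical illustration only (the `/ner` route and request shape below are assumptions, not Elara's actual API in python/main.py), such a server could be built with FastAPI:

```python
# Hypothetical sketch of a minimal NER webserver, not Elara's python/main.py.
# Could be served with: uvicorn main:app --port 8000
from fastapi import FastAPI
from pydantic import BaseModel
from gliner import GLiNER

app = FastAPI()
model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1")  # load once at startup

class NerRequest(BaseModel):
    text: str
    labels: list[str]  # e.g. the lines of labels.txt

@app.post("/ner")  # route name is an assumption
def ner(request: NerRequest):
    # Returns a list of {"text", "label", "start", "end", "score"} dicts.
    return model.predict_entities(request.text, request.labels, threshold=0.5)
```

Loading the model once at startup keeps request latency low, which is also why the instructions below tell you to wait for the startup message before using the app.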
First, if you don't have `uv` installed on your system, install it with one of the following commands (`uv` provides easy package and Python version management for Python projects):
```bash
# On macOS and Linux.
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows.
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Then, start the Python webserver with `uv`:
```bash
cd python && uv run --python 3.12 --with-requirements requirements.txt main.py
```
Wait until you see `INFO: Application startup complete.` in the terminal before running and using the SvelteKit app; this ensures that the model has been loaded and the server is ready to handle requests.
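If you want a quick sanity check before starting the frontend, you can confirm that something is listening on port 8000 (this only tests the TCP port, not the model itself):

```python
# Optional check: is the Python webserver accepting connections on port 8000?
import socket

with socket.socket() as s:
    s.settimeout(2)
    try:
        s.connect(("127.0.0.1", 8000))
        print("Python webserver is up on port 8000")
    except OSError:
        print("Nothing is listening on port 8000 yet - wait for startup to complete")
```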
Run the SvelteKit app with `npm`:
```bash
cd sveltekit && npm i && npm run dev
```
- Open the SvelteKit app in your browser at http://localhost:5173.
- Paste/write text into the "ORIGINAL TEXT" textarea.
- Click the "SUBMIT" button to anonymize the text.
- Copy the anonymized text, which will appear in the "ANONYMIZED TEXT" card.
- Paste the anonymized text into an LLM of your choice, and generate a response.
- Copy the LLM's response and paste it into the "ANONYMIZED LLM RESPONSE" textarea.
- The "DE-ANONYMIZED TEXT" card will show the de-anonymized version of the LLM's response, which you can copy and use as needed.
- If you'd like to modify any labels, please add or remove lines from `labels.txt` in the project's root.
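For the curious, the anonymize → LLM → de-anonymize round trip described above boils down to a placeholder mapping. The sketch below is a conceptual illustration only, not Elara's actual implementation; it assumes the entity dict format returned by GLiNER's predict_entities and a made-up placeholder format.

```python
# Conceptual sketch of the placeholder round trip, not Elara's actual code.

def anonymize(text: str, entities: list[dict]) -> tuple[str, dict]:
    """Replace detected entities with placeholders and remember the mapping.

    `entities` uses the dict format returned by GLiNER's predict_entities
    (keys: "text", "label", "start", "end").
    """
    mapping = {}
    # Replace from the end of the string so earlier offsets stay valid.
    for i, entity in enumerate(sorted(entities, key=lambda e: e["start"], reverse=True)):
        placeholder = f"<{entity['label'].upper()}_{i}>"  # hypothetical placeholder format
        mapping[placeholder] = entity["text"]
        text = text[: entity["start"]] + placeholder + text[entity["end"] :]
    return text, mapping

def deanonymize(llm_response: str, mapping: dict) -> str:
    """Restore the original entities in the LLM's response."""
    for placeholder, original in mapping.items():
        llm_response = llm_response.replace(placeholder, original)
    return llm_response

# Example round trip with hand-written entities:
text = "Email Jane Doe at jane@example.com"
entities = [
    {"text": "Jane Doe", "label": "person", "start": 6, "end": 14},
    {"text": "jane@example.com", "label": "email", "start": 18, "end": 34},
]
anonymized, mapping = anonymize(text, entities)         # "Email <PERSON_1> at <EMAIL_0>"
llm_response = anonymized.replace("Email", "Contact")   # stand-in for an LLM reply
print(deanonymize(llm_response, mapping))               # "Contact Jane Doe at jane@example.com"
```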