Open a file and chat with the LLM about it.
Load a file and its contents will be added to the input on each prompt as you ask questions about the data. Upload a new file at any time to start chatting about that one instead.
The extension loads the file with the LangChain loaders and inserts its contents via the input modifier on each prompt. It hasn't been tested on huge files, so consider it experimental.
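The general idea looks roughly like the sketch below. This is an illustrative sketch, not the extension's actual code: the input_modifier hook signature differs between Text Generation Web UI versions, and the LangChain import path depends on your LangChain version.

```python
# Illustrative sketch only (assumptions noted in comments), not the real extension code.
# Older LangChain versions expose TextLoader here; newer ones use langchain_community.
from langchain.document_loaders import TextLoader

file_text = ""

def load_file(path):
    """Read the chosen file with a LangChain loader and cache its text."""
    global file_text
    docs = TextLoader(path).load()          # returns a list of Document objects
    file_text = "\n".join(doc.page_content for doc in docs)

def input_modifier(string):
    """Extension hook called on each prompt; signature varies between web UI versions.
    Prepends the cached file contents so the model can answer questions about them."""
    if not file_text:
        return string
    return f"Use the following file contents to answer:\n{file_text}\n\n{string}"
```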
- Install Text Generation Web UI by following the instructions on its GitHub page; this project is an extension for it.
- Clone the chatwithfile repository:
git clone https://github.com/brucepro/chatwithfile
- Move the chatwithfile folder into the extensions directory of your Text Generation Web UI installation.
- In the chatwithfile folder, execute
pip install -r requirements.txt
to install the dependencies (a launch example follows below). Let me know if I missed any.
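Once the dependencies are installed, the extension can be enabled when launching the web UI; the command below assumes the extension folder is named chatwithfile and that your version of the web UI supports the --extensions flag:
python server.py --extensions chatwithfile
Extensions can also usually be enabled from within the web UI itself.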
As always, if this adds value to your AI experience and you'd like to show your appreciation, consider supporting me; every bit helps: