Personal project for natural speech with Large Language Models. It may be updated as open-source technology evolves and the capabilities of local LLMs improve.
- Python >= 3.9 and < 3.12
- LM Studio
LM Studio is an application that lets you download LLMs and prompt them via a local server, using an API similar to OpenAI's. It works completely offline, with no outside connection. Get it at https://lmstudio.ai. The Llama-2-7B model is suggested to get started (see the request sketch after this list for what the local server expects).
- Windows (tested on Windows 10)
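For reference, a minimal sketch of querying the LM Studio local server directly. It assumes the server exposes an OpenAI-compatible chat completions endpoint and listens on port 8080 as configured below; the exact route and response schema may vary between LM Studio versions:

```python
import requests

# Hypothetical direct request to the LM Studio local server (OpenAI-style API).
# The project wraps this for you; this only illustrates what the server expects.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello! Can you hear me?"}],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```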
- Install LM Studio
- Select the server tab (`<->`) and use port `8080` for talking to the LLM. If you need to use another port, change the `base_url` in `llms/LMStudio` accordingly.
- Clone this repository
- Change directory into the cloned one
- Create a virtual environment (if on VS Code, tick the option to install requirements and skip step 7). The currently advised method is to use venv. Documentation for VS Code here
- Install dependencies: `pip install -r requirements.txt`
- Import the `startPrompting()` function from `speakeasyLlm.py`: `from speakeasy import startPrompting`
- Call the `startPrompting()` function with the prompt string as argument (see the usage sketch after this list).
- Press the space bar to start recording your message, and again to stop recording. Wait for the LLM's response.
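A minimal usage sketch, assuming the import shown above and that `startPrompting()` takes the prompt string as its only required argument:

```python
# Usage sketch: start the voice loop with an initial prompt for the LLM.
from speakeasy import startPrompting

# The argument is the prompt string sent to the LLM; the wording here is only an example.
startPrompting("You are a helpful assistant. Keep your answers short and conversational.")

# While running: press the space bar to start recording, press it again to stop,
# then wait for the LLM's response.
```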
If the installation above does not work, start over from a clean setup:
- Clone the repository again
- Change directory into the cloned folder
- Create virtual environment
- Install `wheel`: `pip install --upgrade wheel`
- Install `ffmpeg-python`: `pip install ffmpeg-python` (an import check to verify the install appears after this list)
- Install dependencies, making sure nothing is cached and everything is redownloaded: `pip install --no-cache-dir --ignore-installed -r requirements.txt`
- Import the `startPrompting()` function from `speakeasyLlm.py`: `from speakeasy import startPrompting`
- Call the `startPrompting()` function with the prompt string as argument, as in the usage sketch above.
- Press the space bar to start recording your message, and again to stop recording. Wait for the LLM's response.
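To confirm the clean reinstall worked before launching anything, a quick import check (only `ffmpeg-python` is checked here since it is the package called out above; other entries from `requirements.txt` can be checked the same way):

```python
# Sanity check after the clean reinstall: ffmpeg-python installs the "ffmpeg" module.
import ffmpeg

print("ffmpeg-python imported from:", ffmpeg.__file__)
```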
If you want to use a different LLM API or method of communication with the model, you need a class that implements the `Promptable` protocol:
`from protocols.promptable import Promptable`
- Define the method `submitPrompt(prompt)`, where `prompt` is the string that will be sent to the LLM. The method must return the LLM's response as a string (a sketch follows below).
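A minimal sketch of a custom backend, assuming `Promptable` is a `typing.Protocol` with a single `submitPrompt` method (so structural typing applies and explicit inheritance is optional); the class name and reply logic are purely illustrative:

```python
from protocols.promptable import Promptable


class EchoLlm:
    """Illustrative backend satisfying the Promptable protocol.

    A real implementation would forward the prompt to your LLM API of choice
    and return the model's reply.
    """

    def submitPrompt(self, prompt: str) -> str:
        # Receive the prompt string and return the LLM's response as a string.
        # Here we just echo it back as a placeholder.
        return f"You said: {prompt}"


# Optional static check: a type checker will flag EchoLlm if it does not match Promptable.
backend: Promptable = EchoLlm()
```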