Enhance integration capabilities by enabling passing of initial prompt to LLM from command line #1097
base: main
Conversation
Hi @funwarioisii
Hi @MikeBirdTech. The reasons for the changes to core are as follows. First, let me explain my understanding of the original implementation.
open-interpreter/interpreter/terminal_interface/terminal_interface.py Lines 71 to 74 in 57aeea6
And see open-interpreter/interpreter/terminal_interface/terminal_interface.py Lines 433 to 435 in 57aeea6
So, going back to the beginning: if we pass messages in the initial state, that flow does not handle them. Therefore, I believe it is necessary to create a special bypass for this case.
If we watch the interpreter's messages state, we can check whether the message has been loaded or not.
I thought about it for a while and found a good way.
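The check described above can be sketched as follows. This is a simplified illustration, not the actual open-interpreter code: the `Interpreter` class, the `has_preloaded_messages` helper, and `first_turn_input` are hypothetical names chosen for this example.

```python
# Hypothetical sketch: decide whether to bypass the first input prompt
# when messages were preloaded (e.g. an initial prompt from the CLI).

class Interpreter:
    def __init__(self, messages=None):
        # Messages seeded before the interactive loop starts.
        self.messages = messages or []

def has_preloaded_messages(interpreter):
    """Return True if messages were loaded before the first user turn."""
    return len(interpreter.messages) > 0

def first_turn_input(interpreter, fallback_input):
    # On the first turn, consume the preloaded message instead of
    # asking the user; otherwise fall back to normal input.
    if has_preloaded_messages(interpreter):
        return interpreter.messages[-1]["content"]
    return fallback_input
```

With this shape, the terminal loop only needs one extra branch on the first iteration rather than a parallel code path.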
Describe the changes you have made:
I have added the ability to pass initial prompts from the command line to the LLM to improve integration with other files and processes.
For example, when I write DDL in a web application, I can make the work of writing the model file and its tests more routine.
Here is what I do when I am developing a Rails application.
After writing the DDL, I create the model files, add validation, write mocks, and add type information for GraphQL.
example (click me)
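A minimal sketch of how a command-line prompt could be wired into the interpreter's initial message list. The flag name `--initial_message`, the `build_initial_messages` helper, and the message shape are assumptions for illustration, not the PR's exact interface.

```python
# Hypothetical sketch: parse an initial prompt from the command line
# and seed it as the first user message before the interactive loop.
import argparse

def build_initial_messages(argv):
    parser = argparse.ArgumentParser(prog="interpreter")
    parser.add_argument(
        "--initial_message",
        default=None,
        help="prompt sent to the LLM before the first interactive turn",
    )
    args = parser.parse_args(argv)
    if args.initial_message is None:
        return []
    # Seed the interpreter's message list with the prompt as a user turn.
    return [{"role": "user", "type": "message", "content": args.initial_message}]
```

Paired with the bypass discussed earlier in the thread, this lets another process invoke the interpreter non-interactively, e.g. to generate model files and tests right after writing DDL.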
Reference any relevant issues (e.g. "Fixes #000"):
Pre-Submission Checklist (optional but appreciated):
- docs/CONTRIBUTING.md
- docs/ROADMAP.md
OS Tests (optional but appreciated):