Providing proper step logs #40 #42
Conversation
Providing proper step logs in the Microbot and openAi_api file
The changes look good conceptually but require minor changes.
Also, please run the tests and paste your outputs in the PR.
We appreciate your help!
 def _create_llm(self):
-    if self.model_provider == ModelProvider.OPENAI:
+    if self.model_provider in [ModelProvider.OPENAI, ModelProvider.OPENAI_STANDARD]:
ModelProvider is an internal class. To use ModelProvider.OPENAI_STANDARD, you need to introduce that enum member in the constants.py file.
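The suggested change could look like the sketch below. The existing member values and the layout of constants.py are assumptions based on this comment, not the project's actual code.

```python
from enum import Enum

# Hypothetical sketch of constants.py; the "openai" / "openai_standard"
# values are assumed for illustration.
class ModelProvider(Enum):
    OPENAI = "openai"
    OPENAI_STANDARD = "openai_standard"  # new member introduced by this PR
```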
print(f"\n{'='*80}")
print(f"🚀 TASK STARTED: {task}")
print(f"{'='*80}")
Let's convert all print statements into INFO log statements. To avoid appearing unresponsive in the console while running, add a WARNING log stating "Log level is above INFO, so no output will be printed here during normal operation. Please wait patiently."
This gives us better control over the output and log stream, and is particularly useful when running in quiet mode.
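A minimal sketch of what this could look like, assuming a module-level logger (the logger name and function name are hypothetical, not the project's actual API):

```python
import logging

logger = logging.getLogger("microbot")  # logger name is an assumption

def log_task_start(task: str) -> None:
    # Former print() banner, emitted as INFO-level log lines instead.
    logger.info("=" * 80)
    logger.info("🚀 TASK STARTED: %s", task)
    logger.info("=" * 80)
    # If the effective log level filters out INFO, warn once so the
    # console does not look frozen during a long-running task.
    if not logger.isEnabledFor(logging.INFO):
        logger.warning(
            "Log level is above INFO, so no output will be printed here "
            "during normal operation. Please wait patiently."
        )
```

Running in quiet mode then only needs the level raised (e.g. `logging.basicConfig(level=logging.WARNING)`), with no code changes.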
 return_value = {}
 while self._validate_llm_response(return_value) is False:
-    response = self.ai_client.responses.create(
+    response = self.ai_client.chat.completions.create(
Our intention in using the responses API over the chat_completion API is to depend on the model itself to maintain the context using store=true. We're yet to invest in and implement that feature in detail, so I would recommend you stick with the responses API as much as possible.
If you still wish to stick with chat_completion, please create a new OpenAIChatCompletionAPI class and implement your changes there. When introducing such a class, please create one abstract class and implement it in both of the API classes (later we'll move to a Factory design pattern or Pydantic-based models).
We'll have provision to include the appropriate API class in CustomBots, so the user can choose the API class they need based on the model they use.
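The abstract-class split suggested here could be sketched as follows. All class and method names are hypothetical placeholders; the real project would wire these to its OpenAI client.

```python
from abc import ABC, abstractmethod

# Hypothetical abstract base; names are assumptions, not the project's API.
class BaseOpenAIAPI(ABC):
    @abstractmethod
    def create_response(self, messages: list) -> dict:
        """Send a request to the model and return the parsed response."""

class OpenAIResponsesAPI(BaseOpenAIAPI):
    def create_response(self, messages):
        # Would call self.ai_client.responses.create(..., store=True)
        # so the model maintains conversation context server-side.
        raise NotImplementedError("wire to the Responses API here")

class OpenAIChatCompletionAPI(BaseOpenAIAPI):
    def create_response(self, messages):
        # Would call self.ai_client.chat.completions.create(...),
        # passing the full message history on every request.
        raise NotImplementedError("wire to the Chat Completions API here")
```

With a shared base, CustomBots can accept any BaseOpenAIAPI implementation, which is also a natural seam for the Factory pattern mentioned above.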
Provided proper step logs in the Microbot and openAi_api files. Do check and let me know if anything else is needed.