Conversation

RushikeshYeole24
I've added proper step logs in the Microbot and openAi_api files. Please review and let me know if anything else is needed.

Providing proper step logs in the Microbot and openAi_api file

@0xba1a left a comment


The changes look good conceptually but require some minor fixes.
Also, please run the tests and paste the output in the PR.

We appreciate your help!


    def _create_llm(self):
-        if self.model_provider == ModelProvider.OPENAI:
+        if self.model_provider in [ModelProvider.OPENAI, ModelProvider.OPENAI_STANDARD]:
ModelProvider is an internal class. To use ModelProvider.OPENAI_STANDARD, you need to introduce that enum value in the constants.py file.
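The change the reviewer is asking for might look like this (a minimal sketch; the existing members and string values of ModelProvider in constants.py are assumptions):

```python
# constants.py (hypothetical shape)
from enum import Enum

class ModelProvider(Enum):
    OPENAI = "openai"
    OPENAI_STANDARD = "openai_standard"  # new member introduced for the standard API

# The provider check in _create_llm can then cover both values:
def uses_openai(provider: ModelProvider) -> bool:
    return provider in (ModelProvider.OPENAI, ModelProvider.OPENAI_STANDARD)
```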

Comment on lines +111 to +113
+        print(f"\n{'='*80}")
+        print(f"🚀 TASK STARTED: {task}")
+        print(f"{'='*80}")

Let's convert all print statements into INFO log statements. To avoid the console appearing unresponsive while running, add a WARNING log when the log level is above INFO, mentioning that no output will be printed during normal operation and asking the user to wait patiently.

This gives us better control over the output and log stream, and is particularly useful for running in quiet mode.
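A minimal sketch of what the reviewer is describing, using the standard logging module (the logger name and function name are assumptions, not the PR's actual code):

```python
import logging

logger = logging.getLogger("microbot")  # hypothetical logger name

def start_task(task: str) -> None:
    # Warn once if the effective level suppresses INFO, so the console
    # does not look frozen while the task runs.
    if logger.getEffectiveLevel() > logging.INFO:
        logger.warning(
            "Log level is above INFO; no progress output will be printed. "
            "Please wait patiently."
        )
    # The former print statements become INFO logs:
    logger.info("=" * 80)
    logger.info("TASK STARTED: %s", task)
    logger.info("=" * 80)
```

With this in place, quiet mode is just a matter of raising the logger's level; the WARNING still gets through while the step-by-step INFO output is suppressed.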

        return_value = {}
        while self._validate_llm_response(return_value) is False:
-            response = self.ai_client.responses.create(
+            response = self.ai_client.chat.completions.create(

Our intention in using the Responses API over the Chat Completions API is to rely on the model itself to maintain context via store=true. We have yet to invest in and implement that feature in detail, so I would recommend you stick with the Responses API as much as possible.

If you still wish to use Chat Completions, please create a new OpenAIChatCompletionAPI class and implement your changes there. When introducing such a class, please also create an abstract class and implement it in both API classes (later we'll move to a Factory design pattern or Pydantic-based models).

We'll have a provision to include the appropriate API class in CustomBots, so the user can choose the API class required for the model they use.
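The abstract-class arrangement the reviewer describes could be sketched like this (class and method names are illustrative assumptions; the real client calls are stubbed out in comments):

```python
from abc import ABC, abstractmethod

class OpenAIAPIBase(ABC):
    """Hypothetical abstract base shared by both API wrapper classes."""

    @abstractmethod
    def create_response(self, prompt: str) -> dict:
        """Send the prompt to the model and return the parsed response."""

class OpenAIResponsesAPI(OpenAIAPIBase):
    def create_response(self, prompt: str) -> dict:
        # Would call self.ai_client.responses.create(..., store=True),
        # letting the model maintain conversation context server-side.
        return {"api": "responses", "prompt": prompt}

class OpenAIChatCompletionAPI(OpenAIAPIBase):
    def create_response(self, prompt: str) -> dict:
        # Would call self.ai_client.chat.completions.create(...),
        # with the caller managing the message history itself.
        return {"api": "chat_completions", "prompt": prompt}
```

CustomBots could then accept any OpenAIAPIBase instance, leaving the choice of API class to the user's model.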
