Add Verbose Option into cot_reflection.py and Fixed litellm_wrapper.py #192

Open
Khao0 wants to merge 3 commits into main

Conversation


@Khao0 commented May 30, 2025

I added a verbose option because I found the logging useful for debugging LLM results when using cot_reflection. However, the default output also includes a lot of HTTP-related log noise, which can be distracting. With this option it's easier to filter out the unnecessary logs and focus on what really matters during debugging.
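
Roughly, the idea looks like this (a minimal sketch, not the exact code in this PR; the real cot_reflection signature, logger names, and messages may differ):

```python
import logging

logger = logging.getLogger("cot_reflection")

def cot_reflection(system_prompt: str, initial_query: str, client, model: str,
                   verbose: bool = False):
    if verbose:
        logging.basicConfig(level=logging.INFO)
        # Keep the LLM-result logs but silence the chatty HTTP stack
        for noisy in ("httpx", "httpcore", "urllib3"):
            logging.getLogger(noisy).setLevel(logging.WARNING)

    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": initial_query},
        ],
    )
    full_response = response.choices[0].message.content
    if verbose:
        logger.info("cot_reflection raw output:\n%s", full_response)
    return full_response
```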

I also fixed an issue in litellm_wrapper.py. When running predictions locally everything worked fine, but after I deployed the model to a local server, the wrapper still tried to use the local model instead of the one on the server. I changed it so the correct model endpoint is used depending on the environment.
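
The fix is in the spirit of the sketch below (the env var names here are hypothetical, chosen for illustration; litellm's completion() does accept api_base and api_key to redirect requests to a server):

```python
import os
from litellm import completion

def wrapped_completion(messages, model="gpt-4o-mini"):
    # OPTILLM_API_BASE / OPTILLM_API_KEY are illustrative names, not the
    # project's actual configuration keys
    api_base = os.environ.get("OPTILLM_API_BASE")  # e.g. "http://localhost:8000/v1"
    kwargs = {"model": model, "messages": messages}
    if api_base:
        # Point litellm at the deployed server instead of falling back
        # to the locally configured model
        kwargs["api_base"] = api_base
        kwargs["api_key"] = os.environ.get("OPTILLM_API_KEY", "sk-none")
    return completion(**kwargs)
```

Checking for a configured server first means local runs behave exactly as before when no server URL is set.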

This open-source project has been really helpful for my work. Thank you, I really appreciate your work.


CLAassistant commented May 30, 2025

CLA assistant check
All committers have signed the CLA.

@Khao0 Khao0 changed the title Add khao0 and Add Verbose Option into cot_reflection Add Verbose Option into cot_reflection.py and Fixed litellm_wrapper.py May 30, 2025