heroku additions - initial test #7678

Draft

wants to merge 2 commits into base: main
Conversation

@hillarysanders commented Jan 10, 2025

Title

(TODO, still WIP)

Relevant issues

Type

πŸ†• New Feature
πŸ› Bug Fix
🧹 Refactoring
πŸ“– Documentation
πŸš„ Infrastructure
βœ… Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If there are UI changes, attach a screenshot/GIF of the working UI fixes

vercel bot commented Jan 10, 2025

The latest updates on your projects. Learn more about Vercel for Git.

litellm: ✅ Ready · Visit Preview · Updated Jan 10, 2025 11:41pm (UTC)

from litellm.litellm_core_utils.litellm_logging import Logging as LiteLLMLoggingObj


class HerokuError(BaseLLMException):
Contributor:

Can we move this to a common_utils.py outside of /chat?

It would make it easier for someone to click on this file and directly see the transformation logic.
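For illustration, a rough sketch of the suggested layout, assuming the provider follows the directory pattern other litellm providers use (the file path and import are assumptions, not the PR's actual code):

```python
# litellm/llms/heroku/common_utils.py (hypothetical path)
from litellm.llms.base_llm.chat.transformation import BaseLLMException  # import path may differ


class HerokuError(BaseLLMException):
    """Provider-specific error type, kept outside /chat so the chat
    transformation file contains only transformation logic."""

    def __init__(self, status_code: int, message: str):
        super().__init__(status_code=status_code, message=message)
```

The chat transformation module would then import HerokuError from common_utils instead of defining it inline.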

# Assuming `raw_response` is a streaming httpx.Response object
for chunk in raw_response.iter_text():
    streamed_content.append(chunk)
    logging_obj.post_call(
Contributor:

let's not call post_call in here - it'll get emitted on every chunk
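A minimal sketch of the suggested fix, hoisting post_call out of the per-chunk loop so the hook fires once after the stream has been fully consumed (variable names follow the snippet above; the exact post_call keyword arguments are assumptions):

```python
streamed_content = []

# Accumulate chunks without emitting a log event per chunk
for chunk in raw_response.iter_text():
    streamed_content.append(chunk)

# Log exactly once, after the whole stream has been consumed
logging_obj.post_call(
    input=messages,    # assumed to be in scope in the surrounding function
    api_key=api_key,   # assumed to be in scope in the surrounding function
    original_response="".join(streamed_content),
)
```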

stream = optional_params.get("stream", False)
print_verbose = optional_params.get("print_verbose", False)

if stream:
Contributor:

streaming logic won't enter here - it'll go to 'get_model_response_iterator'

def get_model_response_iterator(

"""
Transform the response from the API.

Handles both standard and streamed responses.
Contributor:

No, don't handle both in here. We have a separate event hook for streaming:

def get_model_response_iterator()
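For context, a rough sketch of what that separate streaming hook could look like, modeled on how other litellm providers override it; the base-class name, import path, and hook signature are assumptions and may differ from the actual codebase:

```python
from typing import Any, Union

import httpx

# Assumed import path for litellm's shared iterator base class
from litellm.llms.base_llm.base_model_iterator import BaseModelResponseIterator


class HerokuModelResponseIterator(BaseModelResponseIterator):
    def chunk_parser(self, chunk: dict) -> Any:
        # Map one raw Heroku stream chunk onto litellm's generic chunk shape;
        # the real field mapping depends on Heroku's response schema.
        return chunk


class HerokuChatConfig:  # stand-in for the PR's actual config class
    def get_model_response_iterator(
        self,
        streaming_response: Union[Any, httpx.Response],
        sync_stream: bool,
        json_mode: bool = False,
    ):
        # All streaming translation lives here, so _transform_response can
        # stay non-streaming only.
        return HerokuModelResponseIterator(
            streaming_response=streaming_response,
            sync_stream=sync_stream,
            json_mode=json_mode,
        )
```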

# Call the base method for standard (non-streaming) responses
return super()._transform_response(
Contributor:

if no translation is required, this function doesn't need to be implemented (i see you inherit from openailike config, which should take care of this)
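Put differently, if Heroku's API is OpenAI-compatible, the config can stay minimal. A hedged sketch of that (the class name and import path are assumptions based on the comment above, not the PR's actual file):

```python
# Assumed import path for litellm's OpenAI-like base config
from litellm.llms.openai_like.chat.transformation import OpenAILikeChatConfig


class HerokuChatConfig(OpenAILikeChatConfig):
    # No _transform_response override needed: the OpenAI-like base class
    # already handles the translation for OpenAI-compatible providers.
    pass
```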

@krrishdholakia (Contributor) commented:

Hi @hillarysanders, left a few comments - let me know when this PR is ready for review.

Ideally it would also include testing - see: https://github.com/BerriAI/litellm/blob/main/tests/llm_translation/test_deepseek_completion.py
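A rough sketch of what such a test file might look like, following the pattern of the linked deepseek test; the base-class name, fixture, and model string are assumptions drawn from that file's style, not verified against the repo:

```python
# tests/llm_translation/test_heroku_completion.py (hypothetical)
from base_llm_unit_tests import BaseLLMChatTest  # assumed shared test base


class TestHerokuChatCompletion(BaseLLMChatTest):
    def get_base_completion_call_args(self) -> dict:
        # Placeholder model string; use whatever prefix this PR registers
        return {"model": "heroku/<model-name>"}

    def test_tool_call_no_arguments(self, tool_call_no_arguments):
        # Hook required by the shared suite; implement or skip depending on
        # whether Heroku supports tool calls without arguments.
        pass
```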
