
liteLLM bug #476

Closed · wants to merge 9 commits

Conversation

sarath59 (Author)

📥 Pull Request

📘 Description

The problem occurs because:

  1. When both libraries are imported, their override methods compete to patch the same OpenAI completion methods
  2. LiteLLM internally uses OpenAI's format, so when both libraries are imported their overrides interfere with each other
  3. The global variables storing the original methods get overwritten, so restoring one library's patch can silently discard the other's (see the sketch after this list)
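
A minimal sketch of the clobbering described above, using hypothetical names (FakeCompletions, patch_a, and so on) rather than the actual AgentOps or LiteLLM internals:

# Two trackers each save "the original" method into their own global,
# then install a wrapper. Whoever patches second saves the first
# tracker's wrapper instead of the real original method.
class FakeCompletions:
    def create(self, **kwargs):
        return "real response"

_original_a = None
_original_b = None

def patch_a():
    global _original_a
    _original_a = FakeCompletions.create  # A saves the real method
    FakeCompletions.create = lambda self, **kw: _original_a(self, **kw)

def patch_b():
    global _original_b
    _original_b = FakeCompletions.create  # B saves A's wrapper, not the original
    FakeCompletions.create = lambda self, **kw: _original_b(self, **kw)

def unpatch_a():
    # Restoring A also discards B's wrapper, even though B never unpatched.
    FakeCompletions.create = _original_a

patch_a()
patch_b()
unpatch_a()
assert FakeCompletions().create() == "real response"  # B's instrumentation is gone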

🧪 Testing

Test Coverage

  1. Synchronous and asynchronous API calls (a minimal sketch follows this list)
  2. Streaming and non-streaming responses
  3. Method override and restoration
  4. Error handling and logging
  5. Multiple consecutive API calls
  6. Integration with LiteLLM present
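
As an illustration of items 1-2, a sync/async test pair might look like the sketch below. The fixture-free structure, model name, and prompt are assumptions, not the PR's actual tests (those live in tests/test_llmintegration.py), and running it requires live API keys or recorded cassettes plus the pytest-asyncio plugin:

import pytest
from openai import AsyncOpenAI, OpenAI

MESSAGE = [{"role": "user", "content": "Say hello in three words."}]

def test_sync_call():
    # Synchronous client path; the tracker's sync override should fire here.
    OpenAI().chat.completions.create(model="gpt-4o", messages=MESSAGE)

@pytest.mark.asyncio
async def test_async_call():
    # Asynchronous client path; exercises the async override separately.
    client = AsyncOpenAI()
    await client.chat.completions.create(model="gpt-4o", messages=MESSAGE)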

Validation Results

  1. Successfully tracked OpenAI API calls with LiteLLM imported
  2. Properly handled method overrides and restorations
  3. Maintained compatibility with both sync and async operations
  4. Verified proper error handling and logging
  5. Confirmed session management and cleanup
    (Two screenshots of the validation output, captured 2024-10-31, omitted.)

@areibman (Contributor) commented Nov 1, 2024

Yo @sarath59! Looks like someone else raised the same PR at the same time you did, haha. Figuring out which one we should go with, or whether to merge both.

@sarath59 (Author) commented Nov 1, 2024

I was rechecking the code again and again because of the conflicts. No worries.

@sarath59 (Author) commented Nov 6, 2024

Hey @areibman, did you get a chance to look at this PR? Let me know if you want me to close this one if you've moved forward with the other PR. Thanks!

@areibman areibman requested review from a team and removed request for a team November 6, 2024 20:22
@areibman (Contributor) commented Nov 6, 2024

This works, thanks @sarath59!


In [8]: message = [{"role": "user", "content": "Write a 12 word poem about secret agents."}]

In [9]: litellm.completion(model="claude-3-sonnet-20240229", messages=message, temperature=0)
Out[9]: ModelResponse(id='chatcmpl-7f8c0a42-a784-44ce-a3fc-673feb469dcf', choices=[Choices(finish_reason='stop', index=0, message=Message(content="Here's a 12 word poem about secret agents:\n\nShadows lurk, secrets kept,\nAgents slink, missions adept.\nDanger's thrill, adrenaline rush,\nCovert ops, whispers hushed.", role='assistant', tool_calls=None, function_call=None))], created=1730924756, model='claude-3-sonnet-20240229', object='chat.completion', system_fingerprint=None, usage=Usage(completion_tokens=61, prompt_tokens=18, total_tokens=79, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=None, cached_tokens=0, text_tokens=None, image_tokens=None), cache_creation_input_tokens=0, cache_read_input_tokens=0))

In [10]: client.chat.completions.create(model="gpt-4o", messages=message, temperature=0)
Out[10]: ChatCompletion(id='chatcmpl-AQgtGVEgUBeZytGKDD8PwHhlsaQvG', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='In shadows they move, whispers of truth, cloaked in mystery, unseen guardians.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1730924762, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_159d8341cc', usage=CompletionUsage(completion_tokens=17, prompt_tokens=17, total_tokens=34, completion_tokens_details=CompletionTokensDetails(audio_tokens=0, reasoning_tokens=0, accepted_prediction_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))

In [11]: agentops.end_session("Success")
🖇 AgentOps: This run's cost $0.001182
🖇 AgentOps: Session Replay: https://app.agentops.ai/drilldown?session_id=76796998-df08-415c-aa5e-1993e101f1ed
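
For anyone reproducing the check above, the surrounding setup looks roughly like this (a sketch; agentops.init() reads AGENTOPS_API_KEY from the environment, and the model names are the ones used in the transcript):

import agentops
import litellm
from openai import OpenAI

agentops.init()    # starts a session; reads AGENTOPS_API_KEY from the environment
client = OpenAI()  # reads OPENAI_API_KEY from the environment

message = [{"role": "user", "content": "Write a 12 word poem about secret agents."}]

# Both calls should be tracked by AgentOps despite LiteLLM being imported.
litellm.completion(model="claude-3-sonnet-20240229", messages=message, temperature=0)
client.chat.completions.create(model="gpt-4o", messages=message, temperature=0)

agentops.end_session("Success")  # prints the run cost and the Session Replay URL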

Leaving some comments on the PR. I know @teocns and @the-praxs have some thoughts as well.

@sarath59 (Author) commented Nov 6, 2024

Thanks @areibman

Review thread on agentops/llm_tracker.py (resolved):
from dotenv import load_dotenv  # requires the python-dotenv package
import agentops

# Load environment variables
load_dotenv()
Reviewer (Contributor) commented:

We are running tests through tox (pip install tox, then run tox in the CLI to spin up all the tests). We're failing right now because of dotenv.
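
A sketch of the kind of tox.ini dependency entry that would resolve this; the repository's actual tox.ini may differ, and python-dotenv is the package that provides load_dotenv:

[testenv]
deps =
    pytest
    python-dotenv
commands =
    pytest tests/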

(Further resolved review threads on agentops/llm_tracker.py and tests/test_llmintegration.py.)
@the-praxs the-praxs linked an issue Nov 6, 2024 that may be closed by this pull request
@teocns (Contributor) commented Nov 7, 2024

Added VCR for offline replay and changed the test dynamics a bit to be more deterministic; however, this still fails for me and does not send OpenAI events to the AgentOps dashboard's session replay either 🤔


@pytest.mark.vcr
def test_openai_litellm_tango(llm_event_spy, openai_client, litellm_client):
    """Test that LiteLLM integration does not break OpenAI from sending events"""
    message = [{"role": "user", "content": "Write a 3 word sentence."}]

    litellm_client.completion(
        model="claude-3-sonnet-20240229", messages=message, temperature=0
    )

    assert llm_event_spy["litellm"].call_count == 1

    openai_client.chat.completions.create(
        model="gpt-4", messages=message, temperature=0
    )

    assert llm_event_spy["openai"].call_count == 1
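
The @pytest.mark.vcr marker above comes from the pytest-recording plugin; a minimal conftest.py sketch of the setup it relies on (assumed here, not taken from the PR) would be:

# conftest.py (sketch) — pytest-recording picks up a vcr_config fixture
# and passes it as keyword arguments to the underlying VCR cassette.
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    # Scrub credentials so recorded cassettes never contain live API keys.
    return {"filter_headers": ["authorization", "x-api-key"]}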

Maybe something slipped under my eye, or maybe it has to do with the order of import/initialization.

FAILED tests/test_llmintegration.py::test_openai_litellm_tango - assert 0 == 1

LiteLLM passes; OpenAI does not.

@teocns teocns force-pushed the liteLLMbug/AgentOps branch 2 times, most recently from 8ca4dc3 to 37930fe Compare November 7, 2024 00:15
@the-praxs the-praxs self-requested a review November 8, 2024 14:58
sarath59 and others added 6 commits November 10, 2024 00:11
refactor(test_llmintegration.py): restructure tests to use fixtures for
AgentOps and clients, enhancing maintainability and readability
@teocns (Contributor) commented Nov 18, 2024

The branch backing this PR was 150-180 commits behind before the recent rebase; something must have gotten in between. From internal discussions it emerged that interference from the now-deprecated LLMTracker might be the actual issue.
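
One common mitigation for this class of tracker interference (a sketch of the general pattern, not the fix AgentOps shipped) is to make patching idempotent, so a method is never double-wrapped and never restored past another library's wrapper:

def safe_patch(cls, name, make_wrapper):
    """Patch cls.<name> once; skip if our wrapper is already installed."""
    original = getattr(cls, name)
    if getattr(original, "_is_tracker_patch", False):
        return  # already wrapped by us; avoid stacking wrappers
    wrapped = make_wrapper(original)
    wrapped._is_tracker_patch = True
    wrapped._tracker_original = original  # remember exactly what we replaced
    setattr(cls, name, wrapped)

def safe_unpatch(cls, name):
    """Undo our wrapper only if it is still the installed method."""
    current = getattr(cls, name)
    if getattr(current, "_is_tracker_patch", False):
        setattr(cls, name, current._tracker_original)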

@sarath59 sarath59 closed this Nov 21, 2024
Linked issue that this pull request may close: [Bug]: LiteLLM conflicts with OpenAI