
Can we use LiteLLM to make a call to an API base which calls Llama 3 for text generation? #4359

Closed. Answered by ishaan-jaff
Neel-Shah-29 asked this question in Q&A

Hi @Neel-Shah-29, your endpoint looks like a text-completion endpoint; do this instead:

import os
from litellm import completion

# The OpenAI client requires an API key to be set; a local server
# typically ignores its value, so any placeholder works
os.environ["OPENAI_API_KEY"] = "anything"

# Define the messages
messages = [{"content": "Once upon a time", "role": "user"}]

# Route the request through LiteLLM's OpenAI text-completion handler,
# which sends it to the /v1/completions endpoint at the given api_base
response = completion(
    model="meta-llama/Meta-Llama-3-8B",
    messages=messages,
    api_base="http://0.0.0.0:5000/v1",
    custom_llm_provider="text-completion-openai",
)

# Print the response
print(response)
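
If it helps, here is a minimal sketch of reading the result back out. LiteLLM normalizes provider responses into the OpenAI chat-completion shape, so (assuming the call above succeeded) the generated text should sit on the first choice:

# Pull the generated text out of the normalized response object.
# litellm.completion() returns an OpenAI-style ModelResponse, so the
# text is on the first choice's message even for text-completion backends.
generated_text = response.choices[0].message.content
print(generated_text)

# Token accounting comes back in the same normalized shape.
print(response.usage.total_tokens)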
