Provider-Agnostic Call Decorator #729
Comments
Another thing worth thinking about would be runtime overrides. For example:

```python
from mirascope import llm


@llm.call("openai:gpt-4o-mini")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


response = recommend_book(
    "fantasy",
    model_override="anthropic:claude-3-5-sonnet-latest",
    call_params_override={"temperature": 0.7},
)
print(response.content)
```

I believe it would be super easy to inject these as keyword arguments with type hints, but I'm not 100% certain yet. This would be a nice feature to include so that the decorated function is truly provider-agnostic (where the model provided in the decorator works as a default).
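A minimal sketch of how the runtime-override behavior described above could work. This is not Mirascope's actual implementation; the decorator body and the returned dict are illustrative stand-ins, and only the keyword names (`model_override`, `call_params_override`) come from the example:

```python
import functools


def call(model: str):
    """Hypothetical provider-agnostic decorator; the decorator's `model`
    acts as a default that runtime keyword overrides can replace."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, model_override=None, call_params_override=None, **kwargs):
            effective_model = model_override or model
            prompt = fn(*args, **kwargs)
            # A real implementation would invoke the provider here; this
            # sketch just returns a record of what would be called.
            return {
                "model": effective_model,
                "params": call_params_override or {},
                "prompt": prompt,
            }
        return wrapper
    return decorator


@call("openai:gpt-4o-mini")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


resp = recommend_book("fantasy", model_override="anthropic:claude-3-5-sonnet-latest")
print(resp["model"])  # anthropic:claude-3-5-sonnet-latest
```

The typing difficulty discussed later in this thread is exactly about expressing these extra keyword arguments statically while preserving the decorated function's original signature.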
To give my two cents on this: I would personally be in favour of splitting the provider and model name. IMO, parsing a combined string is unnecessary, so I suggest something like this instead: `@llm.call(provider="openai", model="gpt-4o-mini")`
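A sketch of what the split-argument form might look like; the provider set and the decorator internals are illustrative, not Mirascope's real API:

```python
SUPPORTED_PROVIDERS = {"openai", "anthropic"}  # illustrative subset


def call(provider: str, model: str):
    """Hypothetical decorator taking provider and model separately,
    avoiding any parsing of a combined "provider:model" string."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unknown provider: {provider}")

    def decorator(fn):
        def wrapper(*args, **kwargs):
            # Stand-in for invoking the provider's API.
            return {"provider": provider, "model": model, "prompt": fn(*args, **kwargs)}
        return wrapper
    return decorator


@call(provider="openai", model="gpt-4o-mini")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


print(recommend_book("fantasy")["provider"])  # openai
```

One upside of this form is that provider validation can happen eagerly at decoration time rather than at call time.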
@Kigstn
Ah nice, that alleviates my biggest concern! I think I would personally still prefer separating the provider & model. In my eyes they are two different things and thus should not be connected :)
I've created a PR for this issue. While it works as expected at runtime, we're finding it difficult to implement our intended type specifications in Python. Specifically, we can't add keyword arguments like `model_override` to the decorated function's signature while preserving its original typing. After thoroughly reviewing the relevant typing PEP, I don't see a way to express this currently.

As alternative solutions, I've considered:

```python
response = recommend_book.override(
    model_override="anthropic:claude-3-5-sonnet-latest",
    call_params_override={"temperature": 0.7},
)("fantasy")
```

or:

```python
response = llm.create_call(
    recommend_book,
    model_override="anthropic:claude-3-5-sonnet-latest",
    call_params_override={"temperature": 0.7},
)("fantasy")
```

However, the first approach feels unnatural as it adds methods to what should be a function, and the second approach might be somewhat less intuitive to use.
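One way the first alternative could be structured is to have the decorator return a callable object rather than a plain function, so that `override` is an ordinary method. This is a sketch under that assumption; class and attribute names are hypothetical:

```python
import functools


class Call:
    """Hypothetical callable wrapper returned by the decorator."""

    def __init__(self, fn, model, call_params=None):
        functools.update_wrapper(self, fn)  # preserve name, docstring, etc.
        self._fn = fn
        self._model = model
        self._call_params = call_params

    def __call__(self, *args, **kwargs):
        # Stand-in for invoking the provider's API.
        return {
            "model": self._model,
            "params": self._call_params or {},
            "prompt": self._fn(*args, **kwargs),
        }

    def override(self, model_override=None, call_params_override=None):
        # Returns a new Call bound to the overrides, leaving the original untouched.
        return Call(
            self._fn,
            model_override or self._model,
            call_params_override or self._call_params,
        )


def call(model: str):
    return lambda fn: Call(fn, model)


@call("openai:gpt-4o-mini")
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


resp = recommend_book.override(
    model_override="anthropic:claude-3-5-sonnet-latest",
)("fantasy")
```

Because `override` returns a fresh object, the original decorated call keeps its default model, which matches the "decorator model as default" idea from earlier in the thread.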
Description
We currently support provider-agnostic flows through prompt templates combined with call decorators to produce multiple provider-specific calls. I think we can further improve this by providing a single decorator that supports calling any provider+model pairing.
My rough thought is as follows:
The `CallResponse[ResponseTypeT]` return type would operate just like a provider-specific call response except that it would coerce certain fields (e.g. `finish_reason`) into a more standardized format. I also think that messages such as `response.message_param` should return `BaseMessageParam` in this case so that the `CallResponse` class can operate more truly as a provider-agnostic call response.

We will also need to create additional provider-agnostic classes (e.g. `Stream[...]` and `Tool[...]`) so that we can return correctly typed objects down to the original response.

This should also make it possible to switch providers in the middle of a chat. For example, if you're maintaining a history of messages purely as `BaseMessageParam` instances, you could switch to Anthropic if you're getting rate limited by OpenAI without having to worry about conversion, etc.

On the naming front, do we like `from mirascope import llm` and `llm.call`, or should we instead do something like `import mirascope` and `mirascope.call`?

One note is that we'll need to be more vigilant around updating the accepted model strings since we'll have to use `Literal[...]` typing with overloads to get proper typing (meaning that new models won't have correct typing until we add them). This isn't difficult, just something to be mindful of. It's also worth looking into whether this is actually the case or whether there's some way to do a form of more general matching on the string.

Lastly, it might be worth updating the documentation to use this general form everywhere. This would enable removing a level of tabs from the docs that are somewhat difficult to maintain. Of course, we should still document the provider-specific usage, but I think we should reverse the ratio once this is implemented (i.e. have a provider-specific section and then have everything else be the provider-agnostic form). This means we should also add a new section (either a page or a full tab) that clearly shows supported providers as well as what each provider supports across Mirascope features. A simple table would likely suffice. My current preference leans toward a tab rather than a page.
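The mid-chat provider switch described above can be sketched as follows. The `BaseMessageParam` stand-in and the adapter function are hypothetical simplifications; the point is only that a provider-agnostic history never changes shape when the provider does:

```python
from dataclasses import dataclass


@dataclass
class BaseMessageParam:
    """Minimal stand-in for a provider-agnostic message type."""
    role: str
    content: str


def to_provider_format(provider: str, history: list[BaseMessageParam]) -> list[dict]:
    # Each provider adapter converts the shared history into its own wire
    # format at call time; this sketch uses the same dict shape for both.
    return [{"role": m.role, "content": m.content} for m in history]


history = [
    BaseMessageParam("user", "Recommend a fantasy book"),
    BaseMessageParam("assistant", "Try 'Mistborn' by Brandon Sanderson."),
]

# If OpenAI starts rate limiting, the very same history can feed the
# Anthropic adapter with no conversion step in user code.
openai_messages = to_provider_format("openai", history)
anthropic_messages = to_provider_format("anthropic", history)
```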
It's also likely worth continuing to write examples of every provider-specific form for everything we support as a means of testing (since we run pyright on the examples), even if we don't render those examples in the docs. We need to figure out whether the maintenance cost is worthwhile or whether there's a better approach that would be easier to maintain.
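The `Literal[...]`-with-overloads approach mentioned above might look roughly like this. The model strings are an illustrative subset, not the real supported set, and the runtime body is a stub:

```python
from typing import Literal, overload

OpenAIModel = Literal["openai:gpt-4o", "openai:gpt-4o-mini"]
AnthropicModel = Literal["anthropic:claude-3-5-sonnet-latest"]


@overload
def call(model: OpenAIModel) -> str: ...
@overload
def call(model: AnthropicModel) -> str: ...


def call(model: str) -> str:
    # The runtime implementation accepts any string; only the overloads
    # constrain static typing, which is why newly released models won't
    # type-check until their Literal entries are added.
    provider, _, _name = model.partition(":")
    return provider


print(call("openai:gpt-4o-mini"))  # openai
```

This illustrates the maintenance concern: the `Literal` unions must be kept in sync with each provider's model list by hand.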