Defining ai_fn at runtime #831
hi @pietz - is there a case where

```python
In [1]: import marvin

In [2]: marvin.cast("Hello, my name is Pietz.", str, "Translate the provided text to German")
Out[2]: 'Hallo, mein Name ist Pietz.'

In [3]: !marvin version
Version: 2.1.4.dev27+g8ee6b7c0
Python version: 3.12.1
OS/Arch: darwin/arm64
```
Sorry, I didn't mention that. I haven't looked into the prompt template of `cast`, but the results were pretty bad for purposes outside the core idea of `cast`. Generating, summarizing, and translating were all kinda bad.
@zzstoatzz Yes, I think a basic abstraction without a pre-defined prompt would be useful as well.
@HamzaFarhan completely agree with this! It would be great to define it at runtime and also offer async. I like the decorator syntax for the AI function. It's such a nice mental model: we define a Python function, but because we're dealing with LLMs, we only write the docstring and not the body. Not being able to define it at runtime is a bummer, though.
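One way to picture the runtime variant of that mental model is a factory that takes the would-be docstring as a plain string. Everything below is a hypothetical sketch with the model call stubbed out (`make_ai_fn` and the echoing `llm` lambda are made-up names), not marvin's actual API:

```python
from typing import Callable

def make_ai_fn(docstring: str, llm: Callable[[str], str]) -> Callable[..., str]:
    # Build a callable at runtime; the docstring plays the role of the prompt.
    def ai_fn(**kwargs) -> str:
        prompt = f"{docstring}\nArguments: {kwargs}"
        return llm(prompt)
    ai_fn.__doc__ = docstring
    return ai_fn

# Stubbed "model" that just reports the prompt length, for demonstration.
translate = make_ai_fn("Translate the text to German.", llm=lambda p: f"<{len(p)} chars>")
print(translate.__doc__)  # Translate the text to German.
```

The point is only that nothing about the decorator's mental model requires the docstring to be known at definition time.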
Thoughts on this?

```python
import textwrap
from enum import Enum
from inspect import cleandoc

import marvin


class ModelName(str, Enum):
    GPT_3 = "gpt-3.5-turbo-0125"
    GPT_4 = "gpt-4-turbo-preview"


def deindent(text: str) -> str:
    return textwrap.dedent(cleandoc(text))


def message_template(message: dict[str, str]) -> str:
    return deindent(f"## {message['role'].upper()} ##\n\n{message['content']}")


def chat_template(messages: list[dict[str, str]]) -> str:
    chat = [message_template(message) for message in messages]
    return deindent("\n\n".join(chat))


def chat_message(role: str, content: str) -> dict[str, str]:
    return {"role": role, "content": content}


def user_message(content: str) -> dict[str, str]:
    return chat_message(role="user", content=content)


def assistant_message(content: str) -> dict[str, str]:
    return chat_message(role="assistant", content=content)


@marvin.fn(model_kwargs={"model": ModelName.GPT_3, "temperature": 0.5})
def assistant_response(convo: list[dict[str, str]]) -> str:
    """
    Returns the assistant response to the conversation so far.
    """
    # Returning the convo as a formatted string gives much better results.
    return chat_template(convo)


def ask_marvin(
    messages: list[dict[str, str]] | None = None, prompt: str = ""
) -> list[dict[str, str]]:
    messages = messages or []
    if prompt:
        messages.append(user_message(prompt))
    if messages:
        messages.append(assistant_message(assistant_response(messages)))
    return messages


messages = [
    user_message("It's my first day at a new job."),
    user_message("The commute is an hour long."),
]

messages = ask_marvin(messages=messages, prompt="How should I pass the time?")
# [{'role': 'user', 'content': "It's my first day at a new job."},
#  {'role': 'user', 'content': 'The commute is an hour long.'},
#  {'role': 'user', 'content': 'How should I pass the time?'},
#  {'role': 'assistant',
#   'content': "That's exciting! You could listen to podcasts, read a book, or plan your day ahead during the commute."}]
```
Personally, I'm not looking for a chat interface. Maybe I misunderstood you. This is my workaround for now:

```python
from typing import Any

from marvin.ai.prompts.text_prompts import FUNCTION_PROMPT
from marvin.ai.text import _generate_typed_llm_response_with_tool


async def ai_function(
    instruction: str,
    inputs: dict,
    output_type: Any,
):
    # Call the language model to generate the output
    result = await _generate_typed_llm_response_with_tool(
        prompt_template=FUNCTION_PROMPT,
        prompt_kwargs=dict(
            fn_definition=instruction,
            bound_parameters=inputs,
            return_value=str(output_type),  # assuming return_annotation is a string representation
        ),
        type_=output_type,
    )
    return result
```

It's "just" the normal marvin function, but I can define it at runtime. I'm happy with my workaround. Everything else is just convenience.
Can you share an example of how you use this versus the `ai_fn` decorator? Thanks.
First check

Describe the current behavior

I'm working on a project where I basically want what the `ai_fn` decorator is doing, but I need to be able to define it at runtime. I might be missing something, but that doesn't seem possible at the moment. I even tried setting the docstring through `__doc__`, but it just doesn't work.

Describe the proposed behavior

I think it would be nice to have the functionality of the `ai_fn` decorator through an actual function, like `cast` and `extract`. According to the current naming conventions, `write()` could be a good name, but it might be worth discussing this a bit more.

Example Use
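As an aside on the `__doc__` attempt mentioned above: on a plain Python function the docstring is an ordinary mutable attribute, so if reassigning it has no effect on an `ai_fn`, the decorator presumably captures the docstring at definition time rather than reading `__doc__` at call time (an assumption about marvin's internals, not verified here). A quick standalone check of the plain-function behavior:

```python
import inspect


def f(x):
    """Original docstring."""
    return x


f.__doc__ = "Replaced at runtime."
print(inspect.getdoc(f))  # Replaced at runtime.
```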
Additional context
No response