
Function calling vllm #8

Merged
merged 32 commits into from Jul 30, 2024

Conversation

maxDavid40
Collaborator

  • Ensure vllm's function calling compatibility with happy_vllm.
  • Add argparse arguments to facilitate API deployment with function calling.
  • Create a FunctionTool class to support future function calling functions, with two examples (Weather and Music); see the sketch below.
  • Add a Depends to modify requests before POSTing to chat/completion.
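A minimal sketch of what such a FunctionTool subclass could look like, following the OpenAI tool schema; the class name comes from this PR, but the attribute names and method below are assumptions for illustration, not the merged happy_vllm code.

```python
# Hypothetical sketch of the FunctionTool pattern (attribute and method
# names are assumptions, not the merged happy_vllm implementation).
from typing import Any, Dict


class FunctionTool:
    """Base class: each tool describes one function the model may call."""

    name: str
    description: str
    parameters: Dict[str, Any]

    def to_openai_tool(self) -> Dict[str, Any]:
        # Serialize into the "tools" entry format of chat/completions.
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters,
            },
        }


class Weather(FunctionTool):
    name = "get_current_weather"
    description = "Get the current weather for a given city."
    parameters = {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    }
```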

@maxDavid40 maxDavid40 requested a review from mfournioux July 18, 2024 14:55
Collaborator

@mfournioux mfournioux left a comment


Add some documentation to the README about the function calling functionality that was added.

tests/test_schemas_functional.py (resolved)

# Load the tools
update_tools(args=args)

Collaborator

  • Should we integrate the tools arguments server side? Shouldn't we keep them API side, as is done in the roadmap of the OpenAI APIs?
  • If an additional tool is needed, the server has to be relaunched.
  • If tools are defined server side, tool usage conflicts can appear in production during inference when several concurrent requests need different tools (a conflict on tool naming, for instance).

Collaborator Author

Sure, the implementation doesn't match happy_vllm's design ideas, so:

  • Move the implementation to the examples directory to keep it, since it can be useful for someone who needs to deploy locally.
  • We could create a new route to update the tool_choice and tools variables (to be avoided in production); a sketch follows below.
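A minimal sketch of what such an update route could look like with FastAPI; the route path, payload shape, and registry variables are assumptions for discussion, not the merged code.

```python
# Hypothetical update route (path, payload, and registry are assumptions).
from typing import List, Optional

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

# Module-level tool registry, as assumed in this discussion.
TOOLS: List[str] = []
TOOL_CHOICE: Optional[str] = None


class ToolsUpdate(BaseModel):
    tools: List[str]
    tool_choice: Optional[str] = None


@router.post("/v1/update_tools")
def update_tools_route(update: ToolsUpdate) -> dict:
    # Overwrite the registry in place; keep this disabled in production.
    global TOOL_CHOICE
    TOOLS[:] = update.tools
    TOOL_CHOICE = update.tool_choice
    return {"tools": TOOLS, "tool_choice": TOOL_CHOICE}
```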

else:
    data_dict['tools'] = tools["tools"]
if data_dict['top_logprobs'] == 0:
Collaborator

Why is it necessary to reassign a None value to the "top_logprobs" variable?

Collaborator Author

@maxDavid40 maxDavid40 Jul 29, 2024

ChatCompletionRequest has optional top_logprobs (int) and logprobs (bool) fields, plus a pydantic validator checking that if top_logprobs is set, logprobs must be True.
When the ChatCompletionRequest data is instantiated with top_logprobs=None and logprobs=None, we get back top_logprobs=0 and logprobs=False, so we have to rewrite top_logprobs to None to avoid a validator error when instantiating the new ChatCompletionRequest.
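A minimal sketch of the workaround being described, assuming pydantic v2's model_dump on vLLM's ChatCompletionRequest; the helper name is illustrative, not the merged code.

```python
# Sketch of the described workaround (helper name is illustrative): reset
# the defaulted top_logprobs=0 back to None before rebuilding the request,
# so the pydantic validator does not reject it.
from vllm.entrypoints.openai.protocol import ChatCompletionRequest


def rebuild_request(request: ChatCompletionRequest) -> ChatCompletionRequest:
    data_dict = request.model_dump()
    # Parsing top_logprobs=None / logprobs=None yields 0 / False.
    if data_dict["top_logprobs"] == 0 and not data_dict["logprobs"]:
        data_dict["top_logprobs"] = None
    return ChatCompletionRequest(**data_dict)
```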

examples/function_tools/README.md (resolved)

## Good to know

With vllm 0.5.0 to 0.5.3.post1, the tool_choice options ```auto``` and ```required``` are not yet implemented. You can only use one function per deployment:
Collaborator

Is it one function per deployment or one function per call?

Collaborator Author

One function per deployment.
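As an illustration of this single-function constraint, a hedged request sketch against the OpenAI-compatible chat/completions endpoint; the URL, port, and model name are placeholders, not happy_vllm defaults.

```python
# Illustrative payload forcing the one deployed function (URL and model
# name are placeholders for a local happy_vllm deployment).
import requests

payload = {
    "model": "my-model",  # placeholder
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # With vllm 0.5.x, "auto" and "required" are unavailable:
    # name the single deployed function explicitly.
    "tool_choice": {"type": "function", "function": {"name": "get_current_weather"}},
}
response = requests.post("http://localhost:5000/v1/chat/completions", json=payload)
print(response.json())
```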

tests/conftest.py (resolved)
tests/test_routers_functionnal.py (resolved)
examples/function_tools/README.md (resolved)
```
TOOLS = ['weather']
```

## Called functions
Collaborator

Should we specify that this endpoint has to be called only if the happy_vllm server is deployed with predefined tools?


Each attribute corresponds to the [OpenAI API](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools).

After the class is created, you have to declare it in the TOOLS_DICT and TOOLS global variables in ```routers.shcema.functional``` (adding the weather function tool, for example).
Collaborator

Typo to correct: ```routers.schema.functional```
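As an illustration of the registration step quoted above, a minimal sketch assuming the shapes of the TOOLS_DICT and TOOLS globals (both shapes are assumptions; Weather stands in for a FunctionTool subclass):

```python
# Hypothetical registration in routers.schema.functional (registry shapes
# are assumptions); Weather stands in for a FunctionTool subclass.
class Weather:
    name = "get_current_weather"


TOOLS_DICT = {"weather": Weather()}
TOOLS = ["weather"]
```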

@mfournioux mfournioux merged commit b604264 into main Jul 30, 2024
8 checks passed