[API] add support for embeddings api #1208


Draft · wants to merge 10 commits into main from feat/support-embeddings-api
Conversation

gcalmettes
Contributor

@gcalmettes gcalmettes commented Jun 19, 2025

Pull Request Description

This PR adds support for the /v1/embeddings OpenAI API endpoint, in addition to the currently supported /v1/chat/completions and /v1/completions endpoints.

Implementation considerations to be discussed:

  • Introduction of the notion of OpenAiRequestType

    • currently, all the ext proc processing of the request/response body is bound to the chat-completions/completions OpenAI APIs.
    • in order to process the response body differently depending on the request type (e.g., an embeddings response cannot be streamed and does not have a usage.completion_tokens field), the request type is categorized on the request flow, and different methods are called in the HandleResponseBody method based on the detected request type. This would also make it easy to add support for the other OpenAI APIs later on (e.g., audio, images, etc.)
  • HTTPRouteMatch:

    • currently, for every model declared, AIBrix creates an HTTPRoute matching the /v1/chat/completions and /v1/completions path prefixes.
    • In order not to create routes that are not supported by the model (or routes that a user does not want exposed for a model that supports several modalities), this PR adds a new optional label, model.aibrix.ai/supported-request-types, on the deployment, which selectively defines the HTTPRouteMatch entries to be created. If the label is not present, the default routes (/v1/chat/completions and /v1/completions) are created.
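As a rough illustration of the request-type categorization idea described above, here is a minimal sketch in Go (the type and function names are hypothetical, not the PR's actual identifiers):

```go
package main

import (
	"fmt"
	"strings"
)

// OpenAiRequestType categorizes incoming requests so the response body can be
// processed differently per API (illustrative names only).
type OpenAiRequestType string

const (
	RequestTypeChatCompletions OpenAiRequestType = "chat-completions"
	RequestTypeCompletions     OpenAiRequestType = "completions"
	RequestTypeEmbeddings      OpenAiRequestType = "embeddings"
	RequestTypeUnknown         OpenAiRequestType = "unknown"
)

// detectRequestType maps the request path to a request type on the request
// flow, so the response handler can dispatch to the matching method.
func detectRequestType(path string) OpenAiRequestType {
	switch {
	case strings.HasSuffix(path, "/v1/chat/completions"):
		return RequestTypeChatCompletions
	case strings.HasSuffix(path, "/v1/completions"):
		return RequestTypeCompletions
	case strings.HasSuffix(path, "/v1/embeddings"):
		return RequestTypeEmbeddings
	default:
		return RequestTypeUnknown
	}
}

func main() {
	fmt.Println(detectRequestType("/v1/embeddings")) // prints "embeddings"
}
```

Categorizing once on the request flow keeps the response-side code free of path parsing, which is why the detection happens before HandleResponseBody rather than inside it.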

Todo:

[ ] Tests to be added

Related Issues

Resolves: #1205

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

@gcalmettes gcalmettes marked this pull request as draft June 19, 2025 08:45

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @gcalmettes, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the AIBrix API gateway by introducing support for the OpenAI /v1/embeddings endpoint. It also implements a more flexible and configurable routing mechanism, allowing administrators to specify which API endpoints a deployed model should expose via a new Kubernetes label.

Highlights

  • API Endpoint Expansion: I've added full support for the /v1/embeddings OpenAI API endpoint, enabling AIBrix to handle embedding generation requests.
  • Configurable Model Routing: I've introduced a new optional deployment label, model.aibrix.ai/supported-routes, which allows specifying a comma-separated list of supported API routes (e.g., "embeddings", "chat-completions"). This provides granular control over which endpoints are exposed for a given model.
  • Default Route Behavior: If the model.aibrix.ai/supported-routes label is not present, models will default to supporting the existing /v1/completions and /v1/chat/completions endpoints, maintaining backward compatibility.
  • Embeddings Input Validation: I've implemented validation logic for /v1/embeddings requests to check the input sequence length, ensuring it's within acceptable bounds (0 to 1024 characters/tokens).
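The route-selection behavior in the highlights above could be sketched as follows, assuming a comma-separated label value and a fallback to the default routes when the label is absent (the helper names are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// defaultRoutes are used when the label is absent, preserving the existing
// behavior of exposing completions and chat-completions.
var defaultRoutes = []string{"chat-completions", "completions"}

// supportedRoutesFromLabels reads the optional deployment label and returns
// the route names for which HTTPRouteMatch entries should be generated.
func supportedRoutesFromLabels(labels map[string]string) []string {
	raw, ok := labels["model.aibrix.ai/supported-routes"]
	if !ok || strings.TrimSpace(raw) == "" {
		return defaultRoutes
	}
	var routes []string
	for _, r := range strings.Split(raw, ",") {
		if r = strings.TrimSpace(r); r != "" {
			routes = append(routes, r)
		}
	}
	return routes
}

func main() {
	labels := map[string]string{"model.aibrix.ai/supported-routes": "embeddings"}
	fmt.Println(supportedRoutesFromLabels(labels)) // prints "[embeddings]"
}
```

Trimming whitespace around each entry makes the label tolerant of values like "embeddings, chat-completions", which is how users tend to write comma-separated lists.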
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR adds support for the /v1/embeddings API endpoint by introducing configurable routes via labels and updating request validation logic. The core changes appear sound. Additionally, a few minor improvements regarding code clarity and robustness have been suggested.

Comment on lines 168 to 191
switch input := embeddingNewParamsInputUnionAsAny(&inputParam).(type) {
case *string:
	size = len(*input)
case *[]string:
	size = len(*input)
case *[]int64:
	size = len(*input)
case *[][]int64:
	size = len(*input)
default:
}


medium

The default case in this switch statement currently does nothing, meaning size will remain 0 if an unknown or unhandled input type is encountered from embeddingNewParamsInputUnionAsAny. While this will subsequently trigger the if size == 0 error, the error message might be misleading.

Consider explicitly handling the default case by logging a warning or returning an error indicating an unsupported/unknown input type. This would be particularly helpful if the openai-go library introduces new input types for embeddings in the future.

switch input := embeddingNewParamsInputUnionAsAny(&inputParam).(type) {
case *string:
	size = len(*input)
case *[]string:
	size = len(*input)
case *[]int64:
	size = len(*input)
case *[][]int64:
	size = len(*input)
default:
	// Log a warning or handle an unknown input type explicitly.
	// If input is nil (e.g. the union was empty), size remains 0, which is handled below.
	// If input is of an unexpected non-nil type, this log helps identify it.
	if input != nil {
		klog.Warningf("unhandled embedding input type %T for request %s", input, requestID)
	}
	// size remains 0 and will be caught by the check below.
}

return nil
}

// TODO: make asAny method publicly available on OpenAI go


medium

This TODO suggests making the asAny method publicly available on OpenAI go. It would be helpful to provide more context on why this method needs to be public and the potential benefits it would offer.

@gcalmettes gcalmettes force-pushed the feat/support-embeddings-api branch from 67dc5f8 to 05b8c31 on June 19, 2025 at 08:48
@gcalmettes gcalmettes force-pushed the feat/support-embeddings-api branch from 05b8c31 to a28193b on June 19, 2025 at 08:50
@gcalmettes gcalmettes force-pushed the feat/support-embeddings-api branch from 5c886d2 to c165cc0 on June 19, 2025 at 09:34
@gcalmettes
Contributor Author

@Jeffwan @varungup90 what would be the best way to discuss implementation details regarding the changes introduced by this PR?

The current code of the aibrix plugins is tightly coupled to the chat-completions/completions API of OpenAI (e.g., the response bodies are directly unmarshalled into openai.ChatCompletion (or openai.ChatCompletionChunk when streaming) in HandleResponseBody). So adding support for the embeddings API in fact also becomes the basis for differentially processing the body based on the request type: chat completions vs embeddings (and vs other request types, such as audio and images, later on, as aibrix matures and can handle other modalities).

I made an implementation proposal in this PR (I still have to work on it, as right now the handleChatCompletionsResponseBody and handleEmbeddingsResponseBody methods have too much duplicated code; in fact, only the types the request and the usage are unmarshalled into really change).
But I'll be happy to discuss the implementation details in case you had already brainstormed about this use case.
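One possible way to factor out the duplication mentioned above is a single generic handler where only the concrete response type varies. This is a sketch under stated assumptions, not the PR's actual code: all type and field names here are hypothetical stand-ins for the openai-go types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Usage mirrors the shape of the OpenAI usage object; for embeddings,
// completion_tokens is absent (simplified stand-in for the openai-go types).
type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens,omitempty"`
	TotalTokens      int `json:"total_tokens"`
}

// usageCarrier is implemented by any response type that exposes usage,
// letting one generic handler replace the duplicated per-API handlers.
type usageCarrier interface {
	usage() Usage
}

type chatCompletionResponse struct {
	Usage Usage `json:"usage"`
}

func (r chatCompletionResponse) usage() Usage { return r.Usage }

type embeddingResponse struct {
	Usage Usage `json:"usage"`
}

func (r embeddingResponse) usage() Usage { return r.Usage }

// handleResponseBody unmarshals the body into T and extracts its usage; the
// concrete type is the only thing that changes per request type.
func handleResponseBody[T usageCarrier](body []byte) (Usage, error) {
	var resp T
	if err := json.Unmarshal(body, &resp); err != nil {
		return Usage{}, err
	}
	return resp.usage(), nil
}

func main() {
	body := []byte(`{"usage":{"prompt_tokens":8,"total_tokens":8}}`)
	u, _ := handleResponseBody[embeddingResponse](body)
	fmt.Println(u.TotalTokens) // prints "8"
}
```

The request-type detection on the request flow would then select which instantiation to call, e.g. handleResponseBody[embeddingResponse] for embeddings requests, keeping the shared unmarshal-and-extract logic in one place.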

@gcalmettes gcalmettes force-pushed the feat/support-embeddings-api branch from b1cd840 to 82e4ab9 on June 20, 2025 at 07:45
@gcalmettes gcalmettes force-pushed the feat/support-embeddings-api branch from 82e4ab9 to 10b946b on June 20, 2025 at 07:53
@Jeffwan
Collaborator

Jeffwan commented Jun 23, 2025

@gcalmettes are you in the Slack channel? We can talk about more implementation details there; a Google doc also works.

@Jeffwan
Collaborator

Jeffwan commented Jun 23, 2025

@varungup90 Since we're aiming to support API compatibility, could you also help review the change proposed by @gcalmettes? The coupling issue raised is indeed a concern for future extensibility. /cc @Xunzhuo

@Xunzhuo Xunzhuo self-assigned this Jun 23, 2025
@Xunzhuo
Collaborator

Xunzhuo commented Jun 23, 2025

Assigning myself to track the review; will schedule some bandwidth soon.

@gcalmettes
Contributor Author

@Jeffwan yes, I am in the Slack channel (I believe you're talking about the vLLM Slack, correct? Or is there another Slack I should join?)
