feat(ai): llm provider #539
Open
kallebysantos wants to merge 14 commits into supabase:develop from kallebysantos:feat-llm-provider
Conversation
- LLM Session is a wrapper to handle LLM inference based on the selected provider
- Extracting JSON parsers to a separate file - Moving LLM stream related code to a separate folder
- Applying LLM provider interfaces to implement the Ollama provider
- Applying LLM provider interfaces to implement the 'openaicompatible' mode
- Improving TypeScript support for dynamic suggestions based on the selected Session type. - Break: LLM models must now be defined inside the `options` argument; this allows better TypeScript checking and makes it easier to extend the API. - There's no need to check whether the `inferenceHost` env var is defined, since we can now switch between different LLM providers. Instead, LLM support is enabled if the given type is an allowed provider.
- Improving TypeScript with conditional output types based on the selected provider - Defining common properties for LLM providers, like `usage` metrics and a simplified `value`
- OpenAI uses a different streaming alternative that ends with `[DONE]`
- Applying 'pattern matching' and 'Result pattern' to improve error handling. It enforces that users must first check for errors before consuming the message
- It ensures that only valid strings with content can be embedded
- Fix wrong input variable name. - Accepting the 'opts' param as optional, applying null safety.
- Improving tests by checking the result types: success or error - Testing an invalid `gte-small` type name
What kind of change does this PR introduce?
feature, refactor
What is the current behaviour?
Currently the `Session` only supports self-hosted Ollama or some OpenAI-like provider, with no way to specify the API key.
What is the new behaviour?
This PR applies some refactors to the `ai` module to support a unified LLM provider API, so it can be easily extended to new providers and export a more standardised output format.
Improved TypeScript support
The `ai` module was heavily refactored to provide better TS hints that change dynamically based on the selected `type`.
examples
using type `gte-small`:
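A minimal sketch of the `gte-small` case (the `mean_pool`/`normalize` options follow the existing embedding API; the destructured tuple is assumed from the Result pattern described further below):

```ts
// `Supabase.ai` is the Edge Runtime global; with type 'gte-small' the editor
// infers an embedding-style output, and no LLM-specific options are needed.
const embedder = new Supabase.ai.Session('gte-small');

// Tuple-style result, following the Result pattern described further below.
const [embedding, error] = await embedder.run('hello world', {
  mean_pool: true,
  normalize: true,
});
```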
using type `ollama`:
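A sketch of the `ollama` case; the option names (`model`, `inferenceAPIHost`) are illustrative assumptions, not necessarily the names used in the diff:

```ts
// Per the commit list above, the model now lives inside the options argument.
const ollama = new Supabase.ai.Session('ollama', {
  model: 'llama3',
  inferenceAPIHost: 'http://localhost:11434', // assumed option name
});

const [reply, error] = await ollama.run('Why is the sky blue?', {
  stream: false,
});
```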
using type `openaicompatible`:
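Similarly for `openaicompatible`, which is the case that needed an API key; again, option names are illustrative:

```ts
// Any OpenAI-compatible endpoint plus an API key; option names are assumed.
const openai = new Supabase.ai.Session('openaicompatible', {
  model: 'gpt-4o-mini',
  baseURL: 'https://api.openai.com/v1', // assumed option name
  apiKey: Deno.env.get('OPENAI_API_KEY'), // assumed option name
});

const [reply, error] = await openai.run('Why is the sky blue?', {
  stream: false,
});
```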
Automatically infers the `AsyncGenerator` type when `stream: true`
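A sketch reusing the hypothetical `ollama` session from above; with `stream: true` the success side of the tuple is inferred as an `AsyncGenerator` without any manual annotation:

```ts
const [stream, error] = await ollama.run('Tell me a story', { stream: true });

if (!error) {
  // `stream` is inferred as an AsyncGenerator, so `for await` type-checks.
  for await (const chunk of stream) {
    console.log(chunk);
  }
}
```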
Improved error handling support
In order to ensure error checking, the `ai` module has been refactored to follow the Result pattern (Go-like). It means that the value returned from `Session.run()` will be a tuple array of `[success, error]`; this result is compatible with TS pattern matching, so it provides complete LSP feedback.
examples
Non stream
Result type def
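The exact definition lives in the diff; a plausible sketch of the non-streaming result tuple, with hypothetical names:

```ts
// Hypothetical shape: exactly one side of the tuple is defined at a time,
// like a Go-style (value, err) return that TypeScript can narrow.
type RunResult<T, E = { message: string }> =
  | [success: T, error: undefined]
  | [success: undefined, error: E];
```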
Checking `error` automatically validates the `success` part:
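For example (hypothetical session and option names reused from the sketches above; `value` follows the simplified success property mentioned in the commit list):

```ts
const session = new Supabase.ai.Session('ollama', { model: 'llama3' });

const [completion, error] = await session.run('Why is the sky blue?', {
  stream: false,
});

if (error) {
  // In this branch the success side is undefined.
  console.error('inference failed:', error);
} else {
  // Checking `error` first narrows `completion` to the success type.
  console.log(completion.value);
}
```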
Stream
When `stream: true`, the first result handles errors that may occur before creating the `AsyncGenerator`. Then each incoming message is a result as well, so users can apply error handling while streaming.
Result type def
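A hypothetical sketch of the outer streaming result (names assumed; the real definition is in the diff):

```ts
// Either the generator itself, or an error raised before streaming began.
type StreamRunResult<Message> =
  | [stream: AsyncGenerator<Message>, error: undefined]
  | [stream: undefined, error: { message: string }];
```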

Streaming type def
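And a hypothetical sketch of each streamed message, plus how a consumer might apply per-chunk error handling while iterating:

```ts
// Each streamed item is itself a [success, error] pair.
type StreamMessage =
  | [chunk: { value: string }, error: undefined]
  | [chunk: undefined, error: { message: string }];

// Consuming the stream with per-chunk error handling.
async function consume(stream: AsyncGenerator<StreamMessage>) {
  for await (const message of stream) {
    const [chunk, error] = message;
    if (error) {
      console.error('stream failed:', error.message);
      break;
    }
    console.log(chunk.value);
  }
}
```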

Common response and Usage metrics
Since all LLM providers must implement a common interface, they now also share a unified response object.
response definitions
Success part
Error part
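Pulling the pieces together, the unified response could plausibly be shaped like this (`value` and `usage` come from the commit list; the metric and field names are assumptions):

```ts
// Success part: the simplified `value` plus provider-agnostic usage metrics.
// Metric names are assumed; each provider maps its own counters onto them.
interface SuccessResponse {
  value: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
  };
}

// Error part: a normalised error shared by all providers, keeping the
// original provider error around for debugging.
interface ErrorResponse {
  message: string;
  inner?: unknown;
}
```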
Tested OpenAI compatible providers
missing
ideas