Supporting fine-tuning for OAI #441
As this issue partly depends on #413, querying fine-tuned models from the xef-server is not supported yet. Currently, all requests are streamed from OpenAI without going through the xef-core logic. Resolving a model based on its name and its base model's name might look like this later:

```kotlin
fun spawnCustomModel(provider: Provider, baseModelName: String, fineTunedModelName: String): LLM {
    val baseModel = when (provider) {
        Provider.OPENAI ->
            com.xebia.functional.xef.conversation.llm.openai.OpenAI()
                .supportedModels()
                .find { it.modelType.name == baseModelName }
        else -> TODO()
    } ?: error("base model $baseModelName not found")
    return if (baseModel is FineTuneable)
        baseModel.fineTuned(fineTunedModelName)
    else error("model $baseModelName supports no fine-tuning")
    // we cannot know at this point whether the fine-tuned model exists
}
```
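The resolution logic above can be exercised in isolation with a self-contained sketch. The minimal types below (`LLM`, `FineTuneable`, `BaseModel`) are hypothetical stand-ins for the real xef interfaces, and the `ft:` name prefix is an assumption modeled on OpenAI's naming scheme for fine-tuned models:

```kotlin
// Hypothetical stand-ins for the real xef types, for illustration only.
interface LLM { val name: String }

// Models that support fine-tuning expose a way to derive a fine-tuned variant.
interface FineTuneable {
    fun fineTuned(fineTunedModelName: String): LLM
}

data class BaseModel(override val name: String) : LLM, FineTuneable {
    // Assumed naming convention, loosely following OpenAI's "ft:" prefix.
    override fun fineTuned(fineTunedModelName: String): LLM =
        BaseModel("ft:$name:$fineTunedModelName")
}

// Same shape as spawnCustomModel above: look up the base model by name,
// then require that it supports fine-tuning.
fun resolve(supported: List<LLM>, baseModelName: String, fineTunedModelName: String): LLM {
    val base = supported.find { it.name == baseModelName }
        ?: error("base model $baseModelName not found")
    return (base as? FineTuneable)?.fineTuned(fineTunedModelName)
        ?: error("model $baseModelName supports no fine-tuning")
}
```

As the comment in the original snippet notes, this only checks that the base model exists and is fine-tunable; whether the fine-tuned model actually exists can only be verified against the provider.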
This issue should not depend on #415; we are not following that approach for now. We need to add the fine-tuning endpoint to the xef server, following what is currently done in main, and forwarding directly to OpenAI. I am happy to discuss this online in Slack if you need further clarification.
* query fine-tuned models (from branch #441-fine-tuning-oai)
* spotless
* clean build, make tests and mocks compile
* changes according to PR comments

Co-authored-by: José Carlos Montañez <[email protected]>
Aallam just closed my issue (aallam/openai-kotlin#236) regarding implementing the new fine-tuning API, so foreseeably there will be a new release soon. We could now implement the actual fine-tuning more easily, but I question whether that is of any value at this point. You, @raulraja, have to decide what has priority now. For later, I can imagine capturing the metrics (accuracy etc.) that OAI provides to us from the training.
Contains multiple subtasks:

* server: adapt query endpoint to accept a custom model name (depends on [DRAFT] Server conversations #413)
* fine-tuning guide: https://platform.openai.com/docs/guides/fine-tuning
* web API: https://platform.openai.com/docs/api-reference/fine-tuning/create
* to estimate fine-tuning costs: https://colab.research.google.com/drive/11Yl7cQ3vzYZzrzRaiQEH9Y9gAfn5-Pe6?usp=sharing
* my experience with fine-tuning
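Per the API reference linked above, creating a fine-tuning job is a POST to `/v1/fine_tuning/jobs` with a training file id and a base model name. A minimal sketch of forwarding such a request, using only the JDK `HttpClient` (the helper names are illustrative assumptions; only the URL and JSON field names come from the API reference):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Build the JSON body documented for POST /v1/fine_tuning/jobs:
// "training_file" is the id of an uploaded JSONL file, "model" the base model.
fun fineTuneRequestBody(trainingFileId: String, baseModel: String): String =
    """{"training_file": "$trainingFileId", "model": "$baseModel"}"""

// Illustrative helper: forwards the create request directly to OpenAI,
// much like the server would, and returns the raw JSON response.
fun createFineTuningJob(apiKey: String, trainingFileId: String, baseModel: String): String {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.openai.com/v1/fine_tuning/jobs"))
        .header("Authorization", "Bearer $apiKey")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(fineTuneRequestBody(trainingFileId, baseModel)))
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```

The response JSON carries the job id and status, which the server could then poll or surface to clients alongside training metrics.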