typesafe models #166
base: main
Conversation
As it is, I can e.g. create a (the same isn't possible for Results, as endpoints don't necessarily return the same model name as was sent in the Query).
Hi @kalafus, please let me know if you're available for the discussion. I like the idea of breaking models up into categories so that you can only use supported models on the endpoint. I would even think about going further and doing as other libs do - break up

A few notes regarding this PR. I think that maybe, in a core sense, the set/list of available models per category is not an enumeration. It's just a set of models, and we are usually interested only in a single value of the set, whereas an enumeration is used, well, for enumerating over possible values. So I'd maybe go with the same organisation, but use:

```swift
public struct ChatModel: Codable, Model {
    public static let allCases: [ChatModel] = [.gpt_3_5_turbo_0125, ...]

    /// The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls. Returns a maximum of 4,096 output tokens.
    static let gpt_3_5_turbo_0125 = ChatModel(modelId: "gpt-3.5-turbo-0125") // system
    ...

    let modelId: String
}

protocol Model {
    var modelId: String { get }
}
```

I also understand the concern about replies. If we don't like string there, we can have something like
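The payoff of the struct-based organisation above is that a query type can demand an endpoint-specific model type, so passing an unsupported model fails at compile time rather than needing runtime validation. A minimal self-contained sketch, assuming the `ChatModel`/`Model` shapes from the comment above; `ChatQuery` here is a simplified hypothetical stand-in, not the library's real type:

```swift
// Sketch of endpoint-scoped model types. `ChatQuery` is a simplified
// hypothetical example, not the library's actual query struct.
protocol Model {
    var modelId: String { get }
}

public struct ChatModel: Codable, Model, Equatable {
    public let modelId: String

    /// A chat-capable model, as in the suggestion above.
    public static let gpt_3_5_turbo_0125 = ChatModel(modelId: "gpt-3.5-turbo-0125")

    public static let allCases: [ChatModel] = [.gpt_3_5_turbo_0125]
}

// The query only accepts chat models; handing it, say, an embeddings
// model type simply would not compile.
struct ChatQuery {
    let model: ChatModel
}

let query = ChatQuery(model: .gpt_3_5_turbo_0125)
print(query.model.modelId) // "gpt-3.5-turbo-0125"
```

Because `ChatModel` is a struct rather than an enum, consumers can still construct arbitrary model ids (e.g. fine-tuned models) with `ChatModel(modelId:)` without the library having to enumerate them.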
What
typesafe models
Why
Consumers shouldn't have to validate models.

SpeechQuery had a validateSpeechModel method, pointing out the need to validate models and highlighting the lack of validation across all other Query structs.

Responses should use Strings and never Models, as I have observed some endpoints returning strings incompatible with available models -- e.g., when reaching the text-embedding-ada-002 endpoint, the response indicates the model used is text-embedding-ada-002-v2. Responses specify that the models are non-optional, so Response models need to be Strings to guarantee decoding.
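The decoding concern above can be shown concretely: if the response's `model` field is a plain `String`, decoding succeeds even when the server reports a variant id that is absent from the client's known model list. A minimal sketch; `EmbeddingsResponse` is a simplified hypothetical shape, not the library's real response type:

```swift
import Foundation

// Hypothetical, simplified response shape: `model` is a plain String,
// so any server-reported id decodes successfully.
struct EmbeddingsResponse: Codable {
    let model: String
}

// The server may report a variant id like "text-embedding-ada-002-v2"
// even though the request was sent with "text-embedding-ada-002".
let json = #"{"model": "text-embedding-ada-002-v2"}"#.data(using: .utf8)!
let response = try! JSONDecoder().decode(EmbeddingsResponse.self, from: json)
print(response.model) // "text-embedding-ada-002-v2"
```

Had `model` been a closed enum of known models instead, decoding this payload would throw, failing the whole response for an otherwise valid reply.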
Affected Areas
Models and Queries