Merge pull request #1450 from samchon/doc/llm-parameters
Detailed description of `typia.llm.parameters()` function.
Showing 2 changed files with 155 additions and 4 deletions.
## Structured Output

```typescript filename="src/examples/llm.parameters.ts" copy showLineNumbers {4-10, 35}
import OpenAI from "openai";
import typia, { tags } from "typia";

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}

const main = async (): Promise<void> => {
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            "I am a new member of the community.",
            "",
            "My name is John Doe, and I am 25 years old.",
            "I like playing basketball and reading books,",
            "and joined this community at 2022-01-01.",
          ].join("\n"),
        },
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "member",
          schema: typia.llm.parameters<IMember, "chatgpt">() as any,
        },
      },
    });
  console.log(JSON.parse(completion.choices[0].message.content!));
};
main().catch(console.error);
```
> ```bash filename="Terminal"
> {
>   email: '[email protected]',
>   name: 'John Doe',
>   age: 25,
>   hobbies: [ 'playing basketball', 'reading books' ],
>   joined_at: '2022-01-01'
> }
> ```

You can utilize the `typia.llm.parameters<Parameters, Model>()` function to generate structured output like the above. Just configure the output mode as JSON schema, and deliver the value returned by `typia.llm.parameters<Parameters, Model>()` to the LLM provider like OpenAI (ChatGPT). The LLM provider will then automatically transform the output conversation into structured data of the `Parameters` type.
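For reference, the value passed to `json_schema.schema` is a plain JSON schema object. The literal below is a hand-written approximation of what `typia.llm.parameters<IMember, "chatgpt">()` might generate for the `IMember` type above; the actual output is produced at compile time by the typia transformer and may differ in detail. Note that OpenAI's structured output mode requires `additionalProperties: false` and every property listed in `required`:

```typescript
// Hand-written approximation (for illustration only) of the schema
// that typia.llm.parameters<IMember, "chatgpt">() could emit.
// The real value is generated at compile time by the typia transformer.
const memberSchema = {
  type: "object",
  properties: {
    email: { type: "string", format: "email" },
    name: { type: "string" },
    age: { type: "number" },
    hobbies: { type: "array", items: { type: "string" } },
    joined_at: { type: "string", format: "date" },
  },
  // OpenAI's structured output mode requires every property to be
  // listed in `required` and additional properties to be forbidden.
  required: ["email", "name", "age", "hobbies", "joined_at"],
  additionalProperties: false,
} as const;

console.log(memberSchema.required.length); // → 5
```

Writing such a schema by hand is error-prone and drifts out of sync with the TypeScript type, which is exactly the maintenance burden `typia.llm.parameters<Parameters, Model>()` removes.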
## Validation Feedback

```typescript filename="src/examples/llm.parameters.ts" showLineNumbers copy
import OpenAI from "openai";
import typia, { IValidation, tags } from "typia";

interface IMember {
  email: string & tags.Format<"email">;
  name: string;
  age: number;
  hobbies: string[];
  joined_at: string & tags.Format<"date">;
}

const step = async (
  failure?: IValidation.IFailure | undefined,
): Promise<IValidation<IMember>> => {
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: [
            "I am a new member of the community.",
            "",
            "My name is John Doe, and I am 25 years old.",
            "I like playing basketball and reading books,",
            "and joined this community at 2022-01-01.",
          ].join("\n"),
        },
        ...(failure
          ? [
              {
                role: "system",
                content: [
                  "You, the A.I. agent, made a mistake by",
                  "returning wrongly typed structured data.",
                  "",
                  "Here is the detailed list of type errors.",
                  "Review and correct them at the next step.",
                  "",
                  "```json",
                  JSON.stringify(failure.errors, null, 2),
                  "```",
                ].join("\n"),
              } satisfies OpenAI.ChatCompletionSystemMessageParam,
            ]
          : []),
      ],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "member",
          schema: typia.llm.parameters<IMember, "chatgpt">() as any,
        },
      },
    });
  const member: IMember = JSON.parse(completion.choices[0].message.content!);
  return typia.validate(member);
};

const main = async (): Promise<void> => {
  let result: IValidation<IMember> | undefined = undefined;
  for (let i: number = 0; i < 2; ++i) {
    if (result?.success === true) break;
    result = await step(result?.success === false ? result : undefined);
  }
  console.log(result);
};
main().catch(console.error);
```
> ```bash filename="Terminal"
> {
>   email: '[email protected]',
>   name: 'John Doe',
>   age: 25,
>   hobbies: [ 'playing basketball', 'reading books' ],
>   joined_at: '2022-01-01'
> }
> ```
Sometimes the LLM makes a mistake and composes wrongly typed structured data. In that case, you can guide the LLM (Large Language Model) to generate correctly typed structured data at the next step just by delivering the validation error message of the [`typia.validate<T>()`](../validators/validate) function as a system prompt, like above.

Note that if you are developing an A.I. chatbot project, such a validation feedback strategy is essential for both the LLM function calling and structured output features. In my experiments, even when the LLM produces wrongly typed structured data, it is always corrected by only one validation feedback step.
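The control flow of this feedback loop can be sketched without any network call. In the sketch below, `callLlm` and `validateMember` are hypothetical mocks standing in for the OpenAI chat completion call and `typia.validate<IMember>()`; only the retry-with-feedback pattern itself is illustrated:

```typescript
// Self-contained sketch of the validation feedback control flow.
// callLlm and validateMember are hypothetical mocks, not real
// OpenAI or typia APIs.
interface IFailure {
  success: false;
  errors: Array<{ path: string; expected: string; value: unknown }>;
}
interface ISuccess<T> {
  success: true;
  data: T;
}
type IResult<T> = ISuccess<T> | IFailure;

interface IMember {
  name: string;
  age: number;
}

// Mock LLM: returns a wrongly typed age on the first call, and the
// corrected value once it receives validation feedback.
const callLlm = (feedback?: IFailure): unknown =>
  feedback === undefined
    ? { name: "John Doe", age: "25" } // age is a string: type error
    : { name: "John Doe", age: 25 };

// Mock validator playing the role of typia.validate<IMember>().
const validateMember = (input: unknown): IResult<IMember> => {
  const member = input as IMember;
  return typeof member.age === "number"
    ? { success: true, data: member }
    : {
        success: false,
        errors: [
          { path: "$input.age", expected: "number", value: member.age },
        ],
      };
};

const step = (failure?: IFailure): IResult<IMember> =>
  validateMember(callLlm(failure));

let result: IResult<IMember> | undefined = undefined;
for (let i: number = 0; i < 2; ++i) {
  if (result?.success === true) break;
  result = step(result?.success === false ? result : undefined);
}
console.log(result?.success); // → true: corrected after one feedback step
```

The loop bound of two iterations mirrors the real example above: one initial attempt plus at most one feedback round, which in practice is enough for the model to self-correct.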
## Customization