.Net: Sample Demo Code Showcasing Usage of Reasoning Models #10558
base: main
Conversation
@RogerBarreto I saw there was no sample, so "giving back" a bit ;)
Thanks for the sample and contribution @joslat.
@RogerBarreto & @crickman any feedback? :)
namespace ReasoningEffortModels;

public abstract class AgentBase
@joslat, I suspect reasoning effort might be more easily shown in a concepts sample instead of a dedicated demo. Also, this may be accomplished without introducing an agent abstraction that is entirely separate from the agent framework.
Another option might be to use the Agent Framework ChatCompletionAgent only, or just use a chat-completion approach.
I am concerned that AgentBase may create some confusion for anyone who might stumble upon this sample.
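The suggestion above could be sketched roughly as follows, using ChatCompletionAgent directly with per-agent execution settings instead of a custom AgentBase. This is an illustrative sketch only: it assumes a Semantic Kernel build where OpenAIPromptExecutionSettings exposes a ReasoningEffort property, and the model id and key placeholder are hypothetical.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel against a reasoning-capable model (model id is illustrative).
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "o3-mini", apiKey: "<your-key>")
    .Build();

// No custom AgentBase: the reasoning effort travels in the agent's own
// execution settings via KernelArguments.
ChatCompletionAgent agent = new()
{
    Name = "ReasoningAgent",
    Instructions = "Answer concisely.",
    Kernel = kernel,
    Arguments = new KernelArguments(
        new OpenAIPromptExecutionSettings
        {
            // Assumed property; accepted values are typically
            // "low" | "medium" | "high" on reasoning models.
            ReasoningEffort = "high",
        }),
};
```

The same settings object works with a plain chat-completion call, so the sample need not depend on the agent framework at all.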
Hi @crickman,
I agree with the removal of AgentBase and the simplification of the example.
The demo, IMHO, is necessary, as otherwise how do we ensure that a reasoning model is being used? Here we need an o1 or o3-mini model that supports this.
And I am using the ChatCompletionAgent (inside the AgentBase ;)), so it is not that visible.
I'll remove the AgentBase and put the implementation in the Agent methods... I was just trying to reduce repeated code...
In regards to making this a concepts sample, is there a way to ensure an o1 or o3-mini model/deployment is used? Otherwise I would suggest keeping this as a demo.
How do the examples look to you? (a blog post for low reasoning effort, a poem for medium, and some code for high reasoning effort)
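The effort-to-task mapping described above could be captured in a small helper. This is a sketch with illustrative names (the enum and method are not part of the PR or of Semantic Kernel):

```csharp
// Hypothetical mapping of demo tasks to reasoning effort levels:
// blog post -> Low, poem -> Medium, code generation -> High.
public enum ReasoningEffortLevel { Low, Medium, High }

public static class ReasoningEffort
{
    // Convert the level to the lowercase string the OpenAI API expects.
    public static string ToApiString(ReasoningEffortLevel level) => level switch
    {
        ReasoningEffortLevel.Low => "low",
        ReasoningEffortLevel.Medium => "medium",
        ReasoningEffortLevel.High => "high",
        _ => "medium",
    };
}
```

Each demo agent would then pass its level into the execution settings rather than inheriting behavior from a shared base class.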
You might consider adding a different configuration parameter for the reasoning model so it can be defined separately from ChatDeploymentName when structured as a focused sample.
I would also not include the hard validation on the model name, since it may evolve and Azure model deployments can be named anything.
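A minimal sketch of that suggestion, assuming .NET User Secrets or environment variables as in the PR's README; the configuration key names here are illustrative, not the ones defined in the PR:

```csharp
using Microsoft.Extensions.Configuration;

IConfigurationRoot config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .AddEnvironmentVariables()
    .Build();

// Prefer a reasoning-specific deployment; fall back to the chat deployment.
// No hard validation against a fixed model-name list, since Azure
// deployments can be named anything.
string reasoningDeployment =
    config["AzureOpenAI:ReasoningDeploymentName"]
    ?? config["AzureOpenAI:ChatDeploymentName"]
    ?? throw new InvalidOperationException("No deployment configured.");
```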
// Initialize the prompt execution settings.
OpenAIPromptExecutionSettings executionSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
This may be preferred in the current model:
FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
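Applied to the settings block above, the suggested change would look like this. FunctionChoiceBehavior.Auto() is the current Semantic Kernel API that supersedes the older ToolCallBehavior property:

```csharp
// Initialize the prompt execution settings using the newer
// FunctionChoiceBehavior API in place of ToolCallBehavior.
OpenAIPromptExecutionSettings executionSettings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
};
```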
@RogerBarreto - The core of the demo isn't really agent specific. I've recommended to @joslat that it be refactored as a focused sample.
Motivation and Context
Description
This pull request adds sample demo code that demonstrates how to leverage reasoning models by using an AgentBase class that implements this support nicely.
The new implementation allows agents to specify a reasoning effort level (Low, Medium, High) that is applied to the OpenAIPromptExecutionSettings when invoking Semantic Kernel.
The PR includes three demo agents:
Additionally, a comprehensive README.md has been added with configuration instructions (using User Secrets or environment variables), guidance on setting the project as the default startup, and how to run the demo.
This implementation not only enhances our agentic AI capabilities but also complements the phenomenal code from Roger Barreto (@RogerBarreto), further strengthening the Semantic Kernel solution overall.
Contribution Checklist