Name: Feature request: .NET support for calling inference profiles with AWS Bedrock
About: For tracking and billing purposes, using inference profiles with AWS Bedrock allows usage to be attributed to a specific model across multiple teams within an organization.
Problem
Currently, Semantic Kernel expects modelId to match the format provider.modelName. This is used to determine which client is instantiated (e.g. meta, amazon, etc.). The modelId is also used to build the request.
When using an inference profile, the inference profile replaces the modelId in the request. By allowing the use of an inferenceProfile, multiple teams can independently track and attribute usage when working with the same model.
Current Implementation of service selection: BedrockServiceFactory.cs
internal IBedrockChatCompletionService CreateChatCompletionService(string modelId)
{
    (string modelProvider, string modelName) = this.GetModelProviderAndName(modelId);
    switch (modelProvider.ToUpperInvariant())
    {
        case "AI21":
            if (modelName.StartsWith("jamba", StringComparison.OrdinalIgnoreCase))
            {
                return new AI21JambaService();
            }
            throw new NotSupportedException($"Unsupported AI21 model: {modelId}");
        case "AMAZON":
            if (modelName.StartsWith("titan-", StringComparison.OrdinalIgnoreCase))
            {
                return new AmazonService();
            }
            throw new NotSupportedException($"Unsupported Amazon model: {modelId}");
        case "ANTHROPIC":
            if (modelName.StartsWith("claude-", StringComparison.OrdinalIgnoreCase))
            {
                return new AnthropicService();
            }
            throw new NotSupportedException($"Unsupported Anthropic model: {modelId}");
        case "COHERE":
            if (modelName.StartsWith("command-r", StringComparison.OrdinalIgnoreCase))
            {
                return new CohereCommandRService();
            }
            throw new NotSupportedException($"Unsupported Cohere model: {modelId}");
        case "META":
            if (modelName.StartsWith("llama3-", StringComparison.OrdinalIgnoreCase))
            {
                return new MetaService();
            }
            throw new NotSupportedException($"Unsupported Meta model: {modelId}");
        case "MISTRAL":
            if (modelName.StartsWith("mistral-", StringComparison.OrdinalIgnoreCase)
                || modelName.StartsWith("mixtral-", StringComparison.OrdinalIgnoreCase))
            {
                return new MistralService();
            }
            throw new NotSupportedException($"Unsupported Mistral model: {modelId}");
        default:
            throw new NotSupportedException($"Unsupported model provider: {modelProvider}");
    }
}

internal (string modelProvider, string modelName) GetModelProviderAndName(string modelId)
{
    string[] parts = modelId.Split('.'); // modelId looks like "amazon.titan-text-premier-v1:0"
    string modelName = parts.Length > 1 ? parts[1].ToUpperInvariant() : string.Empty;
    return (parts[0], modelName);
}
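To make the mismatch concrete, here is a small sketch that runs example identifiers through GetModelProviderAndName (the inference profile values are placeholders; actual values depend on the account and profile type):

// Illustrative identifiers only.
var factory = new BedrockServiceFactory();

// Foundation model ID: parses as ("amazon", "TITAN-TEXT-PREMIER-V1:0") and selects the Amazon service.
factory.GetModelProviderAndName("amazon.titan-text-premier-v1:0");

// Cross-region inference profile ID: parses as ("us", "ANTHROPIC"), so
// CreateChatCompletionService throws "Unsupported model provider: us".
factory.GetModelProviderAndName("us.anthropic.claude-3-5-sonnet-20240620-v1:0");

// Application inference profile ARN: contains no '.', so the whole ARN is treated as the
// provider and service selection fails.
factory.GetModelProviderAndName("arn:aws:bedrock:us-east-1:111122223333:application-inference-profile/example-id");

This illustrates the service-selection half of the problem; the other half is that the identifier placed in the Converse request must be the profile rather than the parsed foundation model.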
Proposal
Create a new parameter to track the inference profile, e.g. inferenceProfile.
When building the request to Bedrock, if an inferenceProfile is present, use it instead of the foundation modelId when generating the request URI; otherwise, use the foundation modelId.
Here are some example suggestions:
internal sealed class BedrockChatCompletionClient
{
    private readonly string _modelId;
    private readonly string? _inferenceProfile;
    private readonly string _modelProvider;
    private readonly IAmazonBedrockRuntime _bedrockRuntime;
    private readonly IBedrockChatCompletionService _ioChatService;
    private Uri? _chatGenerationEndpoint;
    private readonly ILogger _logger;

    /// <summary>
    /// Builds the client object and registers the model input-output service given the user's passed in model ID.
    /// </summary>
    /// <param name="modelId">The model ID for the client.</param>
    /// <param name="bedrockRuntime">The <see cref="IAmazonBedrockRuntime"/> instance to be used for Bedrock runtime actions.</param>
    /// <param name="inferenceProfile">The optional inference profile to use in place of the model ID when building requests.</param>
    /// <param name="loggerFactory">The <see cref="ILoggerFactory"/> to use for logging. If null, no logging will be performed.</param>
    internal BedrockChatCompletionClient(string modelId, IAmazonBedrockRuntime bedrockRuntime, string? inferenceProfile = null, ILoggerFactory? loggerFactory = null)
    {
        var serviceFactory = new BedrockServiceFactory();
        this._modelId = modelId;
        this._bedrockRuntime = bedrockRuntime;
        this._ioChatService = serviceFactory.CreateChatCompletionService(modelId);
        this._modelProvider = serviceFactory.GetModelProviderAndName(modelId).modelProvider;
        this._inferenceProfile = inferenceProfile;
        this._logger = loggerFactory?.CreateLogger(this.GetType()) ?? NullLogger.Instance;
    }

    /// <summary>
    /// Generates a chat message based on the provided chat history and execution settings.
    /// </summary>
    /// <param name="chatHistory">The chat history to use for generating the chat message.</param>
    /// <param name="executionSettings">The execution settings for the chat completion.</param>
    /// <param name="kernel">The Semantic Kernel instance.</param>
    /// <param name="cancellationToken">The cancellation token.</param>
    /// <returns>The generated chat message.</returns>
    /// <exception cref="ArgumentNullException">Thrown when the chat history is null.</exception>
    /// <exception cref="ArgumentException">Thrown when the chat is empty.</exception>
    /// <exception cref="InvalidOperationException">Thrown when response content is not available.</exception>
    internal async Task<IReadOnlyList<ChatMessageContent>> GenerateChatMessageAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        Verify.NotNullOrEmpty(chatHistory);
        // Prefer the inference profile over the foundation modelId when building the request.
        ConverseRequest converseRequest = this._ioChatService.GetConverseRequest(this._inferenceProfile ?? this._modelId, chatHistory, executionSettings);
        // ... (rest of the method as in the current implementation)
    }
}
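For context, a rough usage sketch of the proposed constructor (the model ID and inference profile ARN below are placeholders, and the AmazonBedrockRuntimeClient construction is only there to make the snippet self-contained):

// Illustrative wiring of the proposed inferenceProfile parameter inside the connector.
IAmazonBedrockRuntime runtime = new AmazonBedrockRuntimeClient();

var client = new BedrockChatCompletionClient(
    modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // still drives provider/service selection
    bedrockRuntime: runtime,
    inferenceProfile: "arn:aws:bedrock:us-east-1:111122223333:application-inference-profile/example-profile-id");

// Requests are then built with this._inferenceProfile ?? this._modelId, so usage is attributed
// to the profile while the provider-specific service is still chosen from the modelId.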
Or, if an InferenceProfile were allowed on the PromptExecutionSettings, it could be resolved per request.
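A minimal sketch of that alternative, assuming a hypothetical BedrockPromptExecutionSettings class (the class name and property are illustrative, not an existing API). The client would prefer the per-request setting, then its constructor field, then the foundation modelId:

// Hypothetical settings type: lets callers supply an inference profile per request.
public sealed class BedrockPromptExecutionSettings : PromptExecutionSettings
{
    /// <summary>Optional inference profile ID or ARN to use instead of the foundation modelId.</summary>
    public string? InferenceProfile { get; set; }
}

// Inside GenerateChatMessageAsync (sketch): resolve the effective identifier for the request.
string effectiveModelId =
    (executionSettings as BedrockPromptExecutionSettings)?.InferenceProfile
    ?? this._inferenceProfile
    ?? this._modelId;

ConverseRequest converseRequest =
    this._ioChatService.GetConverseRequest(effectiveModelId, chatHistory, executionSettings);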
Yours sincerely,
Matthew