```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
from langchain_core.messages import HumanMessage, SystemMessage

model = AzureAIChatCompletionsModel(
    endpoint=endpoint,    # placeholder for my endpoint URL
    credential=api,       # placeholder for my API key
    # I tried adding model_name here before, but it was rejected as an unexpected argument
)

messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

resp = model.invoke(messages)  # this call immediately raises the error below
print(resp)
```
The following error is generated:

```
raise HttpResponseError(response=response)
azure.core.exceptions.HttpResponseError: (no_model_name) No model specified in request. Please provide a model name in the request body or as a x-ms-model-mesh-model-name header.
Code: no_model_name
Message: No model specified in request. Please provide a model name in the request body or as a x-ms-model-mesh-model-name header.
```
I understand it is expecting a model_name somewhere, but can someone help me with where exactly I should pass it? I want to use the "DeepSeek-R1" model that I have deployed on AI Foundry with LangChain to build a RAG pipeline.
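The error message itself names the two places the service will accept the model name: the request body's `model` field, or the `x-ms-model-mesh-model-name` header (an SDK client would normally fill one of these in from its configured model name). A minimal sketch of the shape the service expects on the wire; the `build_chat_request` helper and the payload layout here are illustrative, not part of any SDK:

```python
def build_chat_request(model_name: str, messages: list) -> tuple:
    """Build headers and body for a chat completions request.

    Per the error message, either the body's "model" field or the
    x-ms-model-mesh-model-name header satisfies the (no_model_name)
    check; this sketch sets both for clarity.
    """
    headers = {
        "Content-Type": "application/json",
        "x-ms-model-mesh-model-name": model_name,  # header route
    }
    body = {
        "model": model_name,  # request-body route
        "messages": messages,
    }
    return headers, body


headers, body = build_chat_request(
    "DeepSeek-R1",
    [{"role": "user", "content": "hi!"}],
)
```

Whichever route the client library uses, the deployment name ("DeepSeek-R1" in my case) has to end up in one of those two places.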
Everything works properly when I use the ChatCompletionsClient directly, but it fails when I use AzureAIChatCompletionsModel, following this documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/langchain and the code above.
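For contrast, a sketch of the ChatCompletionsClient path that does work for me, where `azure-ai-inference`'s `complete()` accepts the model name explicitly. The `translate_with_inference_client` helper and its `endpoint`/`api_key` parameters are illustrative; "DeepSeek-R1" is my AI Foundry deployment name:

```python
def translate_with_inference_client(endpoint: str, api_key: str) -> str:
    """Call the deployed model via azure-ai-inference, naming the model
    explicitly in complete().

    Imports live inside the function so the sketch can be loaded without
    azure-ai-inference installed; calling it requires the package and a
    live endpoint.
    """
    from azure.ai.inference import ChatCompletionsClient
    from azure.ai.inference.models import SystemMessage, UserMessage
    from azure.core.credentials import AzureKeyCredential

    client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(api_key),
    )
    response = client.complete(
        messages=[
            SystemMessage(content="Translate the following from English into Italian"),
            UserMessage(content="hi!"),
        ],
        model="DeepSeek-R1",  # the model name is passed explicitly here
    )
    return response.choices[0].message.content
```

The difference I'm stuck on is that `complete()` takes the model name per call, while I can't find the equivalent place to supply it on the LangChain wrapper.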