
Azure OpenAI Deployment Not Found #5754

Open
5 tasks done
ranjiitk121 opened this issue Jun 13, 2024 · 14 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@ranjiitk121

ranjiitk121 commented Jun 13, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The code below results in an "Azure OpenAI API deployment name not found" error.


const { AzureChatOpenAI } = require('@langchain/openai');

const model = new AzureChatOpenAI({
    temperature: 0.9,
    openAIApiKey: 'our-key', // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
    azureOpenAIApiInstanceName: 'instance-name', // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
    azureOpenAIApiDeploymentName: 'deployment-name', // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
    azureOpenAIApiVersion: '2024-04-01-preview', // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
    azureOpenAIBasePath: 'https://custom-url-open-ai.openai.azure.com/deployments',
});

model
    .invoke('What would be a good company name for a company that makes colorful socks?')
    .then((res) => {
        console.log(res);
    })
    .catch((err) => console.log('error', err));

The code below works fine with the same values (using the openai package).


const { AzureOpenAI } = require('openai');

// Load the .env file if it exists
const dotenv = require('dotenv');
dotenv.config();

// You will need to set these environment variables or edit the following values
const endpoint = 'https://custom-url-open-ai.openai.azure.com/';
const apiKey = 'our-key';
const apiVersion = '2024-04-01-preview';
const deployment = 'deployment-name'; // The deployment name for your completions API model. The instruct model is the only new model that supports the legacy API.

const prompt = ['Tell me a funny joke'];

async function main() {
    console.log('== Get completions Sample ==');

    const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });

    const result = await client.completions.create({ prompt, model: deployment, max_tokens: 128 });

    for (const choice of result.choices) {
        console.log(choice.text);
    }
}

main().catch((err) => {
    console.error('Error occurred:', err);
});

Error Message and Stack Trace (if applicable)

Error: Azure OpenAI API deployment name not found

Description

I am following the guide for using Azure OpenAI via LangChain.js, but I keep getting an "Azure OpenAI API deployment name not found" error. I know that our base URL for Azure is different, and I specify the correct one. The other values are also correct; I am certain of this because the second snippet, which uses the openai package to connect to Azure OpenAI with the same values, works.
Please check the examples above.

System Info

LangChain.js Version: "^0.1.37"
npm Version: 10.5.0
Node: 18.20.2
Platform: macOS (Apple M2)


dosubot bot commented Jun 13, 2024

Hello, @ranjiitk121! I'm here to assist you with any bugs, questions, or contributions.

To resolve the "Azure OpenAI API deployment name not found" error when using the AzureChatOpenAI class in LangChain.js, ensure that you are correctly setting the azureOpenAIApiDeploymentName field and other required parameters. Here is an example of how you can initialize the AzureChatOpenAI class with the correct parameters:

const { AzureChatOpenAI } = require('@langchain/openai');

const model = new AzureChatOpenAI({
    temperature: 0.9,
    azureOpenAIApiKey: 'our-key', // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
    azureOpenAIApiInstanceName: 'instance-name', // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
    azureOpenAIApiDeploymentName: 'deployment-name', // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
    azureOpenAIApiVersion: '2024-04-01-preview', // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
    azureOpenAIBasePath: 'https://custom-url-open-ai.openai.azure.com', // Ensure this is the correct base path
});

model
    .invoke('What would be a good company name for a company that makes colorful socks?')
    .then((res) => {
        console.log(res);
    })
    .catch((err) => console.log('error', err));

Make sure that the azureOpenAIApiDeploymentName you provide matches the deployment name configured in your Azure OpenAI service. Additionally, ensure that the azureOpenAIBasePath is correctly set to the base URL of your Azure OpenAI deployment, without the /deployments suffix.

If you continue to face issues, verify that all required environment variables are correctly set and match the details of your Azure OpenAI service. Here is a more detailed example of the initialization:

const { AzureChatOpenAI } = require('@langchain/openai');

const model = new AzureChatOpenAI({
    temperature: 0.9,
    azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
    azureOpenAIBasePath: process.env.AZURE_OPENAI_BASE_PATH,
});

model
    .invoke('What would be a good company name for a company that makes colorful socks?')
    .then((res) => {
        console.log(res);
    })
    .catch((err) => console.log('error', err));

Ensure that the environment variables AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION, and AZURE_OPENAI_BASE_PATH are correctly set in your environment.

This should resolve the error you are encountering [1][2].


@Abhicurio

Ensure that the environment variables AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION, and AZURE_OPENAI_BASE_PATH are correctly set in your environment.

@jacoblee93
Collaborator

Does this help?

#5663 (comment)

@Abhicurio

Does this help?

#5663 (comment)

Yes, the values are picked up from .env itself, so it is necessary to set the .env variables.

@ranjiitk121
Author

ranjiitk121 commented Jun 14, 2024

Does this help?

#5663 (comment)

Not really.
Since we have a custom URL (Azure endpoint), I should be able to pass azureOpenAIApiKey, azureOpenAIBasePath, and azureOpenAIApiDeploymentName, but this doesn't work either.
When I run the code below, I get an error. I tried setting the keys as environment variables, but that didn't help either.

const model = new AzureChatOpenAI({
	temperature: 0.9,
	azureOpenAIApiKey: 'api-key', // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
	// In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
	azureOpenAIApiDeploymentName: 'gpt35turbo', // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
	azureOpenAIApiVersion: '2024-04-01-preview', // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
	azureOpenAIBasePath: 'https://custom-open-ai.openai.azure.com/deployments',
});

Error:

        throw new Error("OpenAI or Azure OpenAI API key or Token Provider not found");
            ^

Error: OpenAI or Azure OpenAI API key or Token Provider not found

@ranjiitk121
Author

ranjiitk121 commented Jun 14, 2024

Ensure that the environment variables AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION, and AZURE_OPENAI_BASE_PATH are correctly set in your environment.

We cannot use environment variables (the values are stored in a secret manager). I am following the LangChain docs, which state that if a value is not passed it defaults to a certain environment variable; in my case, however, I am passing the values explicitly.
I tried setting the environment variables just to test, and I get a 404 error. When I use the openai package to connect to Azure OpenAI, it works.

const { AzureOpenAI } = require('openai');

// Load the .env file if it exists
const dotenv = require('dotenv');
dotenv.config();

// You will need to set these environment variables or edit the following values
const endpoint = process.env.AZURE_OPENAI_BASE_PATH;
const apiKey = process.env.AZURE_OPENAI_API_KEY;
const apiVersion = process.env.OPENAI_API_VERSION || '2024-04-01-preview';
const deployment = process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME; //The deployment name for your completions API model. The instruct model is the only new model that supports the legacy API.
const { AzureChatOpenAI } = require('@langchain/openai');

const prompt = ['Paris in which country?'];

async function main() {
	const client = new AzureOpenAI({ endpoint, apiKey, apiVersion, deployment });

	const result = await client.completions.create({ prompt, model: deployment, max_tokens: 20 });

	console.group(`--- AZURE OPEN AI ANSWER: ${result.choices[0].text.substring(0, 30)}`);
}

console.group('------ START OF THE AZURE VIA OPENAI PACKAGE');
main()
	.catch((err) => {
		console.error('Error occurred:', err);
	})
	.finally(() => {
		console.group('-------- END OF THE AZURE VIA OPENAI PACKAGE');
		console.groupEnd();
		console.group('------ START OF THE AZURE VIA LANGCHAIN PACKAGE');
		const model = new AzureChatOpenAI({
			temperature: 0.9,
		});

		model
			.invoke('What would be a good company name for a company that makes colorful socks?')
			.then((res) => {
				console.log(res);
			})
			.catch((err) => console.log('error', err))
			.finally(() => {
				console.group('-------- END OF THE AZURE VIA LANGCHAIN PACKAGE');
				console.groupEnd();
			});
	});

I can make the call successfully using the openai package.

 > node my.js
------ START OF THE AZURE VIA OPENAI PACKAGE
  --- AZURE OPEN AI ANSWER: ")
  print(qqq(qq, converged_mo
    -------- END OF THE AZURE VIA OPENAI PACKAGE
    ------ START OF THE AZURE VIA LANGCHAIN PACKAGE
      error NotFoundError: 404 Resource not found
          at APIError.generate (/Users/dsdasd/work/node_modules/openai/error.js:54:20)
          at AzureOpenAI.makeStatusError (/Users//work/node_modules/openai/core.js:263:33)
          at AzureOpenAI.makeRequest (/Users/sdsad/work/node_modules/openai/core.js:306:30)
          at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
          at async /Users/asdasd/workl/node_modules/@langchain/openai/dist/chat_models.cjs:760:29
          at async RetryOperation._fn (/Users//worknode_modules/p-retry/index.js:50:12) {
        status: 404,
        headers: {
          'apim-request-id': 'request-id',
          'content-length': '56',
          'content-type': 'application/json',
          date: 'Fri, 14 Jun 2024 09:52:58 GMT',
          'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',
          'x-content-type-options': 'nosniff'
        },
        request_id: undefined,
        error: { code: '404', message: 'Resource not found' },
        code: '404',
        param: undefined,
        type: undefined,
        attemptNumber: 1,
        retriesLeft: 6
      }
      -------- END OF THE AZURE VIA LANGCHAIN PACKAGE


ENV file

AZURE_OPENAI_API_KEY=key
AZURE_OPENAI_API_DEPLOYMENT_NAME=deployment-name
AZURE_OPENAI_API_VERSION=2024-04-01-preview
AZURE_OPENAI_BASE_PATH=path
AZURE_OPENAI_API_INSTANCE_NAME=instance-name

@alexaivars

I encountered the same issue and realized that I had mistakenly confused AZURE_OPENAI_BASE_PATH with AZURE_OPENAI_API_ENDPOINT. Adding openai/deployments to the AZURE_OPENAI_API_ENDPOINT resolved the problem.

It would have been helpful if the path of the missing resource was logged, as this could have sped up the troubleshooting process.

Here’s the corrected code:

// Initial configuration (did not work)
return new AzureChatOpenAI({
  ...options,
  model: config.azureOpenAiChatGptModel,
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  azureOpenAIApiVersion: config.azureOpenAiApiVersion,
  azureOpenAIApiKey: config.azureOpenAiApiKey,
  azureOpenAIBasePath: `${config.azureOpenAiApiEndpoint}`, // Incorrect path
  deploymentName: config.azureOpenAiChatGptDeployment,
});

// Updated configuration (works)
return new AzureChatOpenAI({
  ...options,
  model: config.azureOpenAiChatGptModel,
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  azureOpenAIApiVersion: config.azureOpenAiApiVersion,
  azureOpenAIApiKey: config.azureOpenAiApiKey,
  azureOpenAIBasePath: `${config.azureOpenAiApiEndpoint}openai/deployments`, // Corrected path
  deploymentName: config.azureOpenAiChatGptDeployment,
});
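
In the meantime, one way to see the resolved request path is to pass a custom fetch through the client configuration; here is a minimal sketch (assuming the configuration option is forwarded to the underlying openai client, and that a global fetch is available, i.e. Node 18+):

const { AzureChatOpenAI } = require('@langchain/openai');

// Hypothetical logging wrapper: prints the outgoing URL so a 404 reveals
// exactly which path was built from the base path and deployment name.
const loggingFetch = (url, init) => {
  console.log('Azure OpenAI request URL:', url.toString());
  return fetch(url, init);
};

const model = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
  azureOpenAIBasePath: process.env.AZURE_OPENAI_BASE_PATH,
  configuration: { fetch: loggingFetch }, // passed through to the openai SDK client
});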

@jacoblee93
Collaborator

Yeah that's a good idea, will look into it

@rossanodr

it is ne

Did you manage to solve it? I'm currently having the same issue.

@Shukl

Shukl commented Nov 27, 2024

@jacoblee93 Still encountering this error -


Troubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/MODEL_NOT_FOUND/

    at Function.generate (/.../node_modules/@langchain/openai/node_modules/openai/src/error.ts:82:14)
    at AzureOpenAI.makeStatusError (/.../node_modules/@langchain/openai/node_modules/openai/src/core.ts:435:21)
    at AzureOpenAI.makeRequest (/.../node_modules/@langchain/openai/node_modules/openai/src/core.ts:499:24)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /.../node_modules/@langchain/openai/dist/chat_models.cjs:1558:29
    at async RetryOperation._fn (/.../node_modules/p-retry/index.js:50:12) {
  status: 404,
  headers: {
    'apim-request-id': '33b6cb27-701e-4961-a950-a3722f6417ff',
    'content-length': '56',
    'content-type': 'application/json',
    date: 'Wed, 27 Nov 2024 06:09:44 GMT',
    'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',
    'x-content-type-options': 'nosniff'
  },
  request_id: undefined,
  error: { code: '404', message: 'Resource not found' },
  code: '404',
  param: undefined,
  type: undefined,
  lc_error_code: 'MODEL_NOT_FOUND',
  attemptNumber: 1,
  retriesLeft: 6
}

// code
const llm = new AzureChatOpenAI({
    modelName: "gpt-4o-mini",
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: "instanceName",
    azureOpenAIApiDeploymentName: "chat-4o-mini",
    azureOpenAIApiVersion: "2024-07-18",
    azureOpenAIBasePath: "https://instanceName.openai.azure.com/openai/deployments",
  });
  const aiMsg = await llm.invoke([
    [
      "system",
      "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ],
    ["human", "I love programming."],
  ]);

Alternatively, I tried omitting the base path entirely, and then including it as https://instanceName.openai.azure.com and as https://instanceName.openai.azure.com/openai/deployments.

I'm on "@langchain/openai": "0.3.14",

Here's a public trace on langsmith.

@ranjiitk121 Did you guys ever get around to using azure via langchain?

@rossanodr

(Quoting @Shukl's comment above in full.)

Check this out, maybe it will help you:
#7206 (comment)

@Shukl

Shukl commented Nov 29, 2024

My issue was a trivial one. In case this is helpful to anyone else -

const llm = new AzureChatOpenAI({
    modelName: "gpt-4o-mini",
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: "instanceName",
    azureOpenAIApiDeploymentName: "chat-4o-mini",
    azureOpenAIApiVersion: "2024-07-18",
    azureOpenAIBasePath: "https://instanceName.openai.azure.com/openai/deployments",
  });

I was using the model version in place of the API version. The API version is available in the target URI on the Azure dashboard.
[Screenshot: Azure dashboard showing the api-version parameter in the deployment's target URI]
The correct way to define it for me was:

const llm = new AzureChatOpenAI({
    // modelName: "gpt-4o-mini",
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: "instanceName",
    azureOpenAIApiDeploymentName: "chat-4o-mini",
    azureOpenAIApiVersion: "2024-08-01-preview",
    // azureOpenAIBasePath: "https://instanceName.openai.azure.com/openai/deployments",
  });

Everything else works as expected. You can get by with just the four keys, and you can even use the underlying OpenAI class instead of this one and still get it to work.
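
For reference, here is a minimal sketch of what I mean by just the four keys: everything is read from the standard environment variables (placeholder values below), with no basePath at all:

// Assumes these four variables are set (placeholder values):
// AZURE_OPENAI_API_KEY=...
// AZURE_OPENAI_API_INSTANCE_NAME=instanceName
// AZURE_OPENAI_API_DEPLOYMENT_NAME=chat-4o-mini
// AZURE_OPENAI_API_VERSION=2024-08-01-preview
const { AzureChatOpenAI } = require('@langchain/openai');

const llm = new AzureChatOpenAI({
    temperature: 0,
    // No explicit Azure fields: the key, instance name, deployment name and
    // API version are picked up from the environment variables above.
});

llm.invoke('Say hello')
    .then((res) => console.log(res.content))
    .catch((err) => console.error('error', err));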

@Raj-Ahirwar

I faced this when I used both OpenAI and AzureChatOpenAI, since the Azure key-related environment variables are read first, and only then the OpenAI key.
const openai = new OpenAI({
    apiKey: AZURE_OPENAI_API_KEY,
    basePath: "https://instanceName.openai.azure.com",
    deploymentName: "chat-4o-mini",
    apiVersion: "2024-08-01-preview",
});

const llm = new AzureChatOpenAI({
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: "instanceName", // Azure instance name
    azureOpenAIApiDeploymentName: "chat-4o-mini", // Deployment name
    azureOpenAIApiVersion: "2024-08-01-preview", // API version
});

The same "Azure OpenAI deployment not found" error occurred -> it is asking for Azure-related credentials in the OpenAI configuration too, which should not be needed.

Is there any way to solve this?

@jacoblee93
Collaborator

You should bump to @langchain/openai 0.4, which removes Azure variables/reading of them from OpenAI classes.
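
For the mixed OpenAI/Azure case above, here is a minimal sketch of the intended setup after that bump (assuming @langchain/openai >= 0.4; the instance and deployment names are placeholders):

const { ChatOpenAI, AzureChatOpenAI } = require('@langchain/openai');

// Plain OpenAI client: only the OpenAI key, no Azure fields needed.
const openAiModel = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o-mini',
});

// Azure client: configured independently with Azure-specific fields only.
const azureModel = new AzureChatOpenAI({
    azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: 'instanceName', // placeholder
    azureOpenAIApiDeploymentName: 'chat-4o-mini', // placeholder
    azureOpenAIApiVersion: '2024-08-01-preview',
});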
