
Request options are badly typed between providers. #7117

Open
5 tasks done
KristofDeTroch opened this issue Oct 30, 2024 · 3 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments


KristofDeTroch commented Oct 30, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { ChatAnthropic } from '@langchain/anthropic';
import { ChatOpenAI } from '@langchain/openai';
import { type Runnable, RunnableSequence, type RunnableConfig } from '@langchain/core/runnables';
import { StructuredOutputParser } from 'langchain/output_parsers';
import { PromptTemplate } from '@langchain/core/prompts';
import { z } from 'zod';

const config = {
  openAi: {
    apiKey: 'my-api-key',
  },
  helicone: {
    apiKey: 'my-api-key',
    baseUrl: 'https://anthropic.helicone.ai',
  },
  anthropic: {
    apiKey: 'my-api-key',
  },
  aiModel: {
    provider: 'openAi',
    modelName: 'gpt-4o-mini',
  },
};

export type RemoteRunnableOptions = RunnableConfig & {
  headers?: Record<string, string>;
  options?: { headers?: Record<string, string> };
};

const questionMapSchema = z.array(
  z.object({
    questionText: z.string(),
    answerType: z.string(),
    confidence: z.string(),
  }),
);

export const createChat = (modelInput: { modelName: string; provider: 'openAi' | 'anthropic' }) => {
  switch (modelInput.provider) {
    case 'openAi':
      return new ChatOpenAI({
        openAIApiKey: config.openAi.apiKey,
        modelName: modelInput.modelName,
        temperature: 0,
        maxRetries: 3,
        timeout: 180000,
        configuration: { defaultHeaders: { 'Helicone-Auth': `Bearer ${config.helicone.apiKey}` } },
      });
    case 'anthropic':
      return new ChatAnthropic({
        anthropicApiKey: config.anthropic.apiKey,
        modelName: modelInput.modelName,
        anthropicApiUrl: config.helicone.baseUrl,
        temperature: 0,
        maxRetries: 3,
        maxTokens: 4096,
        clientOptions: { defaultHeaders: { 'Helicone-Auth': `Bearer ${config.helicone.apiKey}` } },
      });
    default:
      throw new Error('Invalid AI provider');
  }
};

// Module-level so both functions can see the parser's format instructions
// (the original snippet used `this.`, which is undefined in a plain function).
let questionMapInstructions: string;

async function main() {
  const chat = createChat(config.aiModel as { modelName: string; provider: 'openAi' | 'anthropic' });
  const questionMapChain = constructQuestionMapChain(chat);
  const newQuestions = await questionMapChain.invoke(
    {
      category: 'my category name',
      diseases: '',
      formatInstructions: questionMapInstructions,
    },
    {
      // Neither of these shapes is typed consistently across both providers:
      headers: { 'x-category-id': 'test1' },
      options: { headers: { 'x-category-id': 'test1' } },
    },
  );
  console.log(newQuestions);
}

function constructQuestionMapChain(chat: Runnable) {
  const parser = StructuredOutputParser.fromZodSchema(questionMapSchema);

  const template = PromptTemplate.fromTemplate(
    `Can you give me a JSON array of standard questions that are probably relevant for clinical trial pre-screening related to {category} ({diseases}).
      The questions should be either boolean or numeric questions.
      If a question is numeric, provide a unit if it makes sense.

      {formatInstructions}`,
  );

  questionMapInstructions = parser.getFormatInstructions();
  const chain = RunnableSequence.from([template, chat, parser as any]);
  return chain;
}

void main();

Error Message and Stack Trace (if applicable)

No response

Description

I want to gain visibility into my prompts with Helicone. To do that, I need to add extra headers (to assign Helicone properties) at invoke time when using a chain. I dug through the code to see where this is possible, and it turns out the call-option types are not compatible between providers.

openai: [screenshot of the ChatOpenAI call option types]

anthropic: [screenshot of the ChatAnthropic call option types]

I want to streamline my application, so whether Anthropic or OpenAI is used is shielded from the implementation. To achieve this use case, these types should be aligned and added to the RunnableConfig type.
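Until the types are aligned upstream, the mismatch can also be papered over at the call site. A minimal sketch in plain TypeScript (no LangChain imports; the option shapes below are simplified from what the two packages accept, and the helper name is hypothetical):

```typescript
// Simplified, hypothetical stand-ins for each provider's call-option shape.
// ChatOpenAI forwards extra request options via a nested `options` object,
// while ChatAnthropic (per this issue) expects `headers` at the top level.
type OpenAICallOptions = {
  options?: { headers?: Record<string, string> };
};

type AnthropicCallOptions = {
  headers?: Record<string, string>;
};

// One provider-neutral shape the application code can use everywhere.
type UnifiedCallOptions = { headers?: Record<string, string> };

// Map the unified shape onto whichever nesting the chosen provider expects.
function toProviderOptions(
  provider: 'openAi' | 'anthropic',
  opts: UnifiedCallOptions,
): OpenAICallOptions | AnthropicCallOptions {
  if (provider === 'openAi') {
    return { options: { headers: opts.headers } };
  }
  return { headers: opts.headers };
}
```

Application code would then pass `UnifiedCallOptions` around and call `toProviderOptions` just before `invoke`, keeping the provider-specific nesting in one place.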

System Info

@langchain/[email protected]
@langchain/[email protected]
@langchain/[email protected]

mac os
pnpm 9.1.0
node v20.12.2

KristofDeTroch (Author) commented:

For now I've worked around it with a patch on @langchain/openai's compiled chat_models.js:

diff --git a/dist/chat_models.js b/dist/chat_models.js
index b05e892a362010270571d03d1e4dd5567fe62c7d..eb751653a0e1c468a397cac01629d3f303122999 100644
--- a/dist/chat_models.js
+++ b/dist/chat_models.js
@@ -1368,6 +1368,7 @@ export class ChatOpenAI extends BaseChatModel {
                     messages: messagesMapped,
                 }, {
                     signal: options?.signal,
+                    headers: options?.headers,
                     ...options?.options,
                 });
             }
@@ -1378,6 +1379,7 @@ export class ChatOpenAI extends BaseChatModel {
                     messages: messagesMapped,
                 }, {
                     signal: options?.signal,
+                    headers: options?.headers,
                     ...options?.options,
                 });
             }

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Oct 30, 2024

dosubot bot commented Oct 30, 2024

Hey there, @KristofDeTroch! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together while we wait for a human maintainer.

To address the issue of type compatibility between the ChatOpenAI and ChatAnthropic classes in LangChain.js, you can consider modifying the RunnableConfig type to include headers and options that are consistent across both providers. This will help streamline your application by ensuring that additional headers, such as those needed for Helicone, can be passed uniformly.

Your workaround involves patching the LangChain.js library to add headers to the options passed in the ChatOpenAI class. This is a practical approach to achieve your goal temporarily. However, for a more permanent solution, you might want to propose a change to the LangChain.js repository to officially support this feature.

Here's a conceptual approach to align the types:

  1. Extend RunnableConfig: Modify the RunnableConfig type to include headers and options with headers. This will ensure that both ChatOpenAI and ChatAnthropic can accept these parameters consistently.

  2. Update Class Implementations: Ensure that both ChatOpenAI and ChatAnthropic classes handle these additional headers in their respective methods, similar to your patch.

  3. Propose Changes: If you believe this feature would benefit others, consider contributing to the LangChain.js project by proposing these changes through a pull request.

By aligning the types and updating the class implementations, you can achieve a more streamlined and consistent interface for using different AI providers in your application.
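Step 1 above could look roughly like the following. This is a sketch only: `BaseRunnableConfig` is a stand-in for LangChain's real `RunnableConfig`, and `buildRequestHeaders` is a hypothetical helper, not library code.

```typescript
// Stand-in for LangChain's RunnableConfig (illustrative fields only).
interface BaseRunnableConfig {
  runName?: string;
  tags?: string[];
  metadata?: Record<string, unknown>;
}

// The proposed extension: a provider-neutral `headers` field.
type RunnableConfigWithHeaders = BaseRunnableConfig & {
  headers?: Record<string, string>;
};

// Each chat model implementation (step 2) would then merge config.headers
// into the outgoing request, so both providers honor the same field.
function buildRequestHeaders(
  config: RunnableConfigWithHeaders,
): Record<string, string> {
  return { 'content-type': 'application/json', ...config.headers };
}
```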


jacoblee93 (Collaborator) commented:

This is pretty provider-specific at the moment - will look into improving.
