Instrumenting with LangSmith #345

Open
eric-gardyn opened this issue Feb 23, 2024 · 3 comments

@eric-gardyn
Contributor

Some feedback:
I am looking into LangSmith as a tool to monitor and audit queries. One of the steps is to retrieve the result from openAiClient.getChatCompletions and send it to LangSmith.

In my project, I have copied all of the mongodb-server package files into my own codebase so I can customize/debug as needed, and I added code to the answerQuestionAwaited method to interact with LangSmith.

I'm wondering if this could be done another way, so that I could eventually use the mongodb-server package "as is"?

@nlarew
Collaborator

nlarew commented Feb 23, 2024

Hey Eric!

Interesting use case. Right now I think the best approach in our framework is more or less what you've done, i.e. provide a custom ChatLlm implementation in your config instead of using our makeOpenAiChatLlm() constructor. We can consider ways to extend our implementation so that you don't have to write a totally custom one.
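
For example, one way to avoid copying package files entirely might be to wrap the ChatLlm returned by makeOpenAiChatLlm() with your instrumentation. A rough, untested sketch (the withLangSmith wrapper is illustrative, not part of our API, and it assumes the ChatLlm type is importable from the server package):

import { ChatLlm } from 'mongodb-chatbot-server'

// Decorate an existing ChatLlm so every awaited answer can be logged externally.
function withLangSmith(baseLlm: ChatLlm): ChatLlm {
  return {
    ...baseLlm,
    async answerQuestionAwaited(params) {
      // start a LangSmith run here (e.g. a RunTree), then delegate
      const response = await baseLlm.answerQuestionAwaited(params)
      // end/post the run here with `response` as the output
      return response
    },
  }
}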

We haven't worked with LangSmith, so I'm not certain how LangChain interacts with it. If there's a good integration there, you might also consider using the makeLangchainChatLlm() constructor with a LangChain OpenAI ChatModel instance. Do you think that would be useful in this case?
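
If that integration works, the setup might be as small as this (untested sketch; I'm assuming LangSmith can trace LangChain models automatically once LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY are set in the environment):

import { makeLangchainChatLlm } from 'mongodb-chatbot-server'
import { ChatOpenAI } from '@langchain/openai'

// With LangSmith tracing enabled via env vars, calls made through this
// ChatModel should show up in LangSmith without any manual RunTree code.
const llm = makeLangchainChatLlm({
  chatModel: new ChatOpenAI({
    modelName: 'gpt-4',
    openAIApiKey: process.env.OPENAI_API_KEY,
  }),
})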

@nlarew
Collaborator

nlarew commented Feb 27, 2024

@eric-gardyn if you can, would you mind sharing the changes you made to answerQuestionAwaited() to interact with LangSmith? I'd like to get a feel for what this type of integration looks like so that we can make it easier in the future.

@eric-gardyn
Contributor Author

Sure. It's quick-and-dirty but straightforward, just so I could get it up and running:

import { RunTree } from 'langsmith'
(...)
   async answerQuestionAwaited(
      { messages, toolCallOptions }: LlmAnswerQuestionParams,
      question: string
    ) {
      const pipeline = new RunTree({
        name: 'Chat Pipeline',
        run_type: 'chain',
        inputs: { question },
      })

      // Create a child run
      const childRun = await pipeline.createChild({
        name: 'OpenAI Call',
        run_type: 'llm',
        inputs: { messages },
      })

      const chatCompletion = await openAiClient.getChatCompletions(deployment, messages, {
        ...openAiLmmConfigOptions,
        ...(toolCallOptions ? { functionCall: toolCallOptions } : {}),
        functions: tools?.map(tool => {
          return tool.definition
        }),
      })

      const {
        choices: [choice],
      } = chatCompletion
      const { message } = choice
      if (!message) {
        throw new Error('No message returned from OpenAI')
      }

      // End the runs and log them to LangSmith
      await childRun.end(chatCompletion)
      await childRun.postRun()

      await pipeline.end({ outputs: { answer: message.content } })
      await pipeline.postRun()

      return message as ChatRequestAssistantMessage
    },

I also changed awaitGenerateResponse to call answerQuestionAwaited with the added parameter request?.body?.message, and extended the ChatLlm interface accordingly:

export interface ChatLlm {
  answerQuestionStream(params: LlmAnswerQuestionParams): Promise<OpenAiStreamingResponse>
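  // NOTE: the optional `question` parameter below is my addition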
  answerQuestionAwaited(
    params: LlmAnswerQuestionParams,
    question?: string
  ): Promise<OpenAiAwaitedResponse>
  callTool?(params: LlmCallToolParams): Promise<CallToolResponse>
}
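
The call site in awaitGenerateResponse then ends up roughly like this (paraphrased, not the exact handler code):

const answer = await llm.answerQuestionAwaited(
  { messages, toolCallOptions },
  request?.body?.message
)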

There might be an easier/better way.
