
# [8.17] [Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) #204307

Merged 1 commit on Dec 14, 2024


**kibanamachine** (Contributor) commented:

Backport

This will backport the following commits from main to 8.17:

Questions?

Please refer to the Backport tool documentation

[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (elastic#199303) (elastic#204014)

## Summary

This PR fixes [this bug](elastic#199303).

The issue happens with some locally hosted LLMs (like
[Ollama](https://github.com/ollama/ollama)), which require the correct
`model` to be passed as part of the [chat completions
API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).
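
For reference, here is a minimal sketch of a chat completion request to Ollama's default local endpoint (the model name `llama3.2` is just an example and must match a model that has been pulled locally):

```ts
// Minimal chat completion request against Ollama's local API.
// If `model` does not name a locally pulled model, Ollama responds
// with a 404 "model not found" error.
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2', // example model; pull it first with `ollama pull llama3.2`
    messages: [{ role: 'user', content: 'hello world' }],
    stream: false, // return a single JSON response instead of a stream
  }),
});
console.log(await response.json());
```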

We had a bug in our code where, on new conversation creation, we would not
pass the full connector configuration; only `connectorId` and
`actionTypeId` were passed. Here is the old implementation:

```ts
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```

As a result, the new conversation did not have the complete connector
configuration, and the default model (`gpt-4o`) was passed to the LLM
instead of the model configured on the connector.
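
A minimal sketch of the corresponding fix, assuming the surrounding code stays the same: spread the conversation's entire `apiConfig` (which carries the connector's `model`, among other fields) instead of copying only two of its fields. The actual change in the PR may differ in detail:

```ts
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          // Carry over the full connector configuration (connectorId,
          // actionTypeId, model, provider, ...), not just two fields.
          ...currentConversation.apiConfig,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```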

Also, I updated the default body used on the Test connector page to make
sure that we send a `model` parameter to the LLM for `OpenAI > Other
(OpenAI Compatible Service)` connectors.
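
For illustration, the default test body would now include a `model` along these lines (the field values here are assumptions, not the connector's verbatim defaults; OpenAI-compatible services reject requests whose `model` they cannot resolve):

```ts
// Hypothetical default body for the Test connector page with an
// "Other (OpenAI Compatible Service)" connector. Including `model`
// lets backends like Ollama route the request to the right model.
const defaultTestBody = {
  model: 'llama3.2',
  messages: [{ role: 'user', content: 'Hello world' }],
};
```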

### Testing notes

Steps to reproduce:
1. Install
[Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama)
locally
2. Set up an OpenAI connector using the Other (OpenAI Compatible Service)
provider
3. Open the AI Assistant and select the created Ollama connector to chat
4. Create a "New Chat"
5. The Ollama connector should be selected
6. Send a message to the LLM (for example, "hello world")

Expected: there should be no errors saying `ActionsClientChatOpenAI: an
error occurred while running the action - Unexpected API Error: - 404
model "gpt-4o" not found, try pulling it first`

(cherry picked from commit 7e4e859)
**elasticmachine** (Contributor) commented:

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| securitySolution | 13.3MB | 13.3MB | -57.0B |
| stackConnectors | 688.3KB | 688.6KB | +312.0B |
| total | | | +255.0B |

cc @e40pud
