[8.x] [Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014) (#204308)

# Backport

This will backport the following commits from `main` to `8.x`:

- [[Security Solution] AI Assistant: LLM Connector model chooser bug. New chat does not use connector's model (#199303) (#204014)](https://github.com/elastic/kibana/pull/204014)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

## Summary

This PR fixes [this bug](https://github.com/elastic/kibana/issues/199303).

The issue happens with some locally set-up LLMs (such as [Ollama](https://github.com/ollama/ollama)), which require the correct `model` to be passed as part of the [chat completions API](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).
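For illustration, here is a minimal sketch of such a call against a local Ollama server through its OpenAI-compatible endpoint (the port and the `llama3` model name are assumptions for the example, not taken from the PR):

```
// Sketch only: a chat-completion request to a local Ollama instance.
// Port 11434 is Ollama's default; "llama3" stands in for any pulled model.
const response = await fetch('http://localhost:11434/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    // Ollama resolves this name against locally pulled models; an unknown
    // name (e.g. the default "gpt-4o") fails with a 404 "model not found".
    model: 'llama3',
    messages: [{ role: 'user', content: 'hello world' }],
  }),
});
console.log((await response.json()).choices?.[0]?.message?.content);
```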
We had a bug where, on creation of a new conversation, we would not pass the full connector configuration; only `connectorId` and `actionTypeId` were carried over. Here is the old implementation:

```
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig != null &&
  currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          connectorId: currentConversation.apiConfig.connectorId,
          actionTypeId: currentConversation.apiConfig.actionTypeId,
          ...(newSystemPrompt?.id != null ? { defaultSystemPromptId: newSystemPrompt.id } : {}),
        },
      }
    : {}),
});
```

Because of this, the new conversation did not have the complete connector configuration, so the default model (`gpt-4o`) was used as the model passed to the LLM.
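A minimal sketch of the fix, assuming the repair is simply to spread the entire existing `apiConfig` rather than copying two fields (the actual merged change may differ in detail):

```
// Sketch, not the exact merged diff: carry over the whole apiConfig
// (including model and provider) instead of only connectorId/actionTypeId.
const newConversation = await createConversation({
  title: NEW_CHAT,
  ...(currentConversation?.apiConfig?.actionTypeId != null
    ? {
        apiConfig: {
          ...currentConversation.apiConfig, // keeps model, provider, defaults
          ...(newSystemPrompt?.id != null
            ? { defaultSystemPromptId: newSystemPrompt.id }
            : {}),
        },
      }
    : {}),
});
```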
Also, I updated the default body that we use on the Test connector page, to make sure that we send a `model` parameter to the LLM for `OpenAI > Other (OpenAI Compatible Service)` connectors.
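For instance, a default test body along these lines (the exact shape used in the PR is not shown here; the `model` value is a placeholder that should come from the connector's configuration):

```
// Hypothetical default body for the Test connector page. The important part
// is that `model` is present, so OpenAI-compatible backends like Ollama
// receive an explicit model instead of a hardcoded default.
const defaultOpenAITestBody = JSON.stringify(
  {
    model: 'llama3', // placeholder; use the model configured on the connector
    messages: [{ role: 'user', content: 'Hello world' }],
  },
  null,
  2
);
```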
### Testing notes

Steps to reproduce:
1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) locally
2. Set up an OpenAI connector using the Other (OpenAI Compatible Service) provider
3. Open the AI Assistant and select the created Ollama connector to chat
4. Create a "New Chat"
5. The Ollama connector should be selected
6. Send a message to the LLM (for example, "hello world")

Expected: there should be no error saying `ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - 404 model "gpt-4o" not found, try pulling it first`.

Co-authored-by: Ievgen Sorokopud <ievgen.sorokopud@elastic.co>